16 results for extraction and separation techniques

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

This Master's thesis studies techniques for embedding a watermark into a spectral image, and methods for identifying and detecting watermarks in spectral images. The spectral dimensionality of the original images was reduced using the PCA (Principal Component Analysis) algorithm. The watermark was embedded into the spectral image in the transform space. In the proposed model, one transform-space component was replaced by a linear combination of the watermark and another transform-space component. The set of parameters used in the embedding was studied, the quality of the watermarked images was measured and analysed, and recommendations for the embedding were presented. Several methods were used for watermark identification, and the identification results were analysed. The robustness of the watermarks against various attacks was verified. A series of detection experiments was carried out, taking into account the parameters used in the watermark embedding. ICA (Independent Component Analysis) is regarded as one possible approach to watermark detection.
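A minimal sketch of the embedding scheme described in the abstract, assuming the spectral image is unrolled to a pixels-by-bands matrix; the component indices, the mixing weights alpha and beta, and the scale matching are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch: PCA-domain watermark embedding for a spectral image.
# Assumptions (not from the thesis): the cube is reshaped to an
# (n_pixels, n_bands) matrix, and transform component i is replaced by a
# linear combination of the watermark and transform component j.
import numpy as np
from sklearn.decomposition import PCA

def embed_watermark(cube, watermark, i=3, j=0, alpha=0.05, beta=0.95,
                    n_components=8):
    h, w, bands = cube.shape
    X = cube.reshape(-1, bands)
    pca = PCA(n_components=n_components)
    T = pca.fit_transform(X)                  # transform-space components
    wm = watermark.reshape(-1).astype(float)  # watermark, one value per pixel
    wm = (wm - wm.mean()) / (wm.std() + 1e-12) * T[:, j].std()  # match scale
    T[:, i] = alpha * wm + beta * T[:, j]     # replace component i
    X_marked = pca.inverse_transform(T)       # back to the spectral domain
    return X_marked.reshape(h, w, bands)
```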

Relevance:

100.00%

Publisher:

Abstract:

Separation of carboxylic acids from aqueous streams is an important part of their manufacturing process. The aqueous solutions are usually dilute, containing less than 10 % acids. Separation by distillation is difficult because the boiling points of the acids are only marginally higher than that of water; distillation is therefore not only difficult but also expensive, owing to the evaporation of large amounts of water. Carboxylic acids have traditionally been precipitated as calcium salts. The yields of these processes are usually relatively low and the chemical costs high; in particular, the decomposition of the calcium salts with sulfuric acid produces large amounts of calcium sulfate sludge. Solvent extraction has been studied as an alternative method for the recovery of carboxylic acids. Solvent extraction is based on mixing two immiscible liquids and transferring the desired components from one liquid to the other, driven by an equilibrium difference. In the case of carboxylic acids, the acids are transferred from the aqueous phase to the organic solvent through physical and chemical interactions: the acids and the extractant form complexes which are soluble in the organic phase. The extraction efficiency is affected by many factors, for instance the initial acid concentration, the type and concentration of the extractant, pH, temperature and extraction time. In this work, the effects of initial acid concentration, type of extractant and temperature on extraction efficiency were studied. As carboxylic acids are usually the products of these processes, their recovery is desired; hence the acids have to be removed from the organic phase after the extraction. Removing the acids from the organic phase also regenerates the extractant, which can then be recycled in the process. The regeneration of the extractant was studied by back-extracting, i.e. stripping, the acids from the organic solution into a dilute sodium hydroxide solution. In the solvent regeneration, the regenerability of different extractants and the effects of initial acid concentration and temperature were studied.
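As a hedged illustration of the quantities studied here, the sketch below computes a single-stage distribution ratio and extraction efficiency from aqueous-phase acid concentrations before and after equilibration; the numbers and the equal-phase-volume assumption are made up for the example, not data from the work.

```python
# Illustrative single-stage mass balance for carboxylic acid extraction.
# The concentrations and the equal-phase-volume assumption are example
# values, not measurements from this study.
c_aq_initial = 0.80   # mol/L acid in the aqueous feed
c_aq_final   = 0.15   # mol/L acid left in the raffinate after equilibration

# Distribution ratio D = (acid in organic phase) / (acid in aqueous phase);
# with equal phase volumes the organic-phase concentration is the difference.
D = (c_aq_initial - c_aq_final) / c_aq_final

# Extraction efficiency: fraction of the acid transferred to the solvent.
E = (c_aq_initial - c_aq_final) / c_aq_initial * 100

print(f"Distribution ratio D = {D:.2f}")        # -> 4.33
print(f"Extraction efficiency E = {E:.1f} %")   # -> 81.2 %
```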

Relevance:

100.00%

Publisher:

Abstract:

Report: The selenium content of Finnish oats in 1997-1999

Relevance:

100.00%

Publisher:

Abstract:

The ongoing development of digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction of spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinct areas, image quality itself and image fidelity, both dealing with similar questions: image quality is the degree of excellence of the image, while image fidelity measures how well the image under study matches the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both; very few works have been dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, including kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels. The 3D-SSIM is an extension of the traditional gray-scale SSIM measure, developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence its overall appearance. The spectral image quality model comprises three parameters of quality: colorfulness, vividness and naturalness. Quality prediction is done by modeling the preference function expressed in JNDs (just noticeable differences). Both the image fidelity measures and the image quality model proved effective in the respective experiments.
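A minimal sketch of kernel-based fidelity between two spectral images, using the standard polynomial, Gaussian RBF and sigmoid kernel forms named in the abstract; comparing corresponding pixel spectra and normalising by self-similarity are illustrative choices, not necessarily the exact formulation developed in the thesis.

```python
# Hedged sketch of kernel similarity between two spectral images: apply a
# standard kernel to corresponding pixel spectra and average over pixels.
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2, axis=-1))

def poly(x, y, degree=2, c=1.0):
    return (np.sum(x * y, axis=-1) + c) ** degree

def sigmoid(x, y, a=0.01, c=0.0):
    return np.tanh(a * np.sum(x * y, axis=-1) + c)

def kernel_fidelity(ref, test, kernel=rbf, **kw):
    """Mean kernel similarity over corresponding pixel spectra."""
    X = ref.reshape(-1, ref.shape[-1])
    Y = test.reshape(-1, test.shape[-1])
    k_xy = kernel(X, Y, **kw)
    # normalise by self-similarity so a perfect match scores 1
    k_xx = kernel(X, X, **kw)
    k_yy = kernel(Y, Y, **kw)
    return float(np.mean(k_xy / np.sqrt(k_xx * k_yy + 1e-12)))
```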

Relevance:

100.00%

Publisher:

Abstract:

The objective of this Master's thesis was to determine VOC emissions from veneer drying in softwood plywood manufacturing. Emissions from the plywood industry have become an important factor because of tightened regulations worldwide. This thesis investigates the quality and quantity of the VOCs released in softwood veneer drying. One of the main objectives was to find suitable cleaning techniques for softwood VOC emissions. The introductory part presents veneer drying machines and the mechanical and chemical properties of wood; VOC control techniques and specified VOC limits are also introduced there. Plywood mills have previously shown little interest in VOC emissions; nowadays, however, mills worldwide must consider emission reduction. This thesis includes the measurement of emissions from a softwood veneer dryer, the analysis of the measured test results, and a review of those results. Different air conditions inside the dryer were considered when planning the measurements, and the results of the emission measurements were compared with the established regulations. The outcome of this thesis is a characterisation of softwood veneer dryer emissions under different air conditions. Emission control techniques for softwood veneer dryer emissions were also surveyed for further, more specific research.

Relevance:

100.00%

Publisher:

Abstract:

The use of intensity-modulated radiotherapy (IMRT) has increased extensively in modern radiotherapy (RT) treatments over the past two decades. Radiation dose distributions can be delivered with higher conformality with IMRT than with conventional 3D-conformal radiotherapy (3D-CRT). Higher conformality and target coverage increase the probability of tumour control and decrease normal tissue complications. The primary goal of this work is to improve and evaluate the accuracy, efficiency and delivery techniques of RT treatments using IMRT. This study evaluated the dosimetric limitations and possibilities of IMRT in small volumes (treatments of head-and-neck, prostate and lung cancer) and large volumes (primitive neuroectodermal tumours). The dose coverage of target volumes and the sparing of critical organs were increased with IMRT when compared to 3D-CRT. The developed split-field IMRT technique was found to be a safe and accurate method in craniospinal irradiations. By using IMRT for simultaneous integrated boosting of biologically defined target volumes in localized prostate cancer, high doses were achievable with only a small increase in treatment complexity. Biological plan optimization increased the probability of uncomplicated control on average by 28% when compared to standard IMRT delivery. Unfortunately, IMRT also carries some drawbacks. In IMRT the beam modulation is realized by splitting a large radiation field into small apertures. The smaller the beam apertures, the larger the rebuild-up and rebuild-down effects at tissue interfaces. The limitations of using IMRT with small apertures in the treatment of small lung tumours were investigated with dosimetric film measurements. The results confirmed that the peripheral doses of small lung tumours decreased as the effective field size was decreased. The studied calculation algorithms were not able to model the dose deficiency of the tumours accurately. The use of small sliding-window apertures of 2 mm and 4 mm decreased the tumour peripheral dose by 6% compared to a 3D-CRT treatment plan. A direct aperture based optimization (DABO) technique was examined as a way to decrease treatment complexity. The DABO IMRT technique was able to achieve treatment plans equivalent to those of conventional fluence-based IMRT optimization in concave head-and-neck target volumes. With DABO the effective field sizes were increased and the number of MUs was reduced by a factor of two. The optimality of a treatment plan and the therapeutic ratio can be further enhanced by using dose painting based on regional radiosensitivities imaged with functional imaging methods.
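For reference, the "probability of uncomplicated control" mentioned above is commonly quantified as the composite endpoint P+ below; this is one standard textbook formulation, assuming statistical independence of tumour control probability (TCP) and normal-tissue complication probability (NTCP), and is not necessarily the exact definition used in the thesis.

```latex
% Probability of uncomplicated tumour control (one common formulation,
% assuming TCP and NTCP are statistically independent):
P_{+} = \mathrm{TCP}\,\bigl(1 - \mathrm{NTCP}\bigr)
```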

Relevance:

100.00%

Publisher:

Abstract:

Graphene is a material with extraordinary properties. Its mechanical and electrical properties are unparalleled, but difficulties in its production are hindering its breakthrough in applications. Graphene is a two-dimensional material made entirely of carbon atoms, and it is only a single atom thick. In this work, the properties of graphene and graphene-based materials are described, together with their common preparation techniques and related challenges. This thesis concentrates on the top-down techniques, in which natural graphite is used as a precursor for graphene production. Graphite consists of graphene sheets stacked tightly together. In the top-down techniques, various physical or chemical routes are used to overcome the forces keeping the graphene sheets together, and many of them are described in the thesis. The most common chemical method is the oxidation of graphite with strong oxidants, which creates water-soluble graphene oxide. The properties of graphene oxide differ significantly from those of pristine graphene and, therefore, graphene oxide is often reduced to form materials collectively known as reduced graphene oxide. In the experimental part, the main focus is on the chemical and electrochemical reduction of graphene oxide. A novel chemical route using vanadium is introduced and compared to other common chemical graphene oxide reduction methods. A strong emphasis is placed on the electrochemical reduction of graphene oxide in various solvents. Raman and infrared spectroscopy are both used for in situ spectroelectrochemistry to closely monitor the spectral changes during the reduction process. These in situ techniques allow precise control over the reduction process, and even small changes in the material can be detected. Graphene and few-layer graphene were also prepared using a physical force to separate these materials from graphite. Special adsorbate molecules in aqueous solutions, together with sonic treatment, produce stable dispersions of graphene and few-layer graphene sheets in water. This mechanical exfoliation method damages the graphene sheets considerably less than the chemical methods, although it suffers from a lower yield.

Relevance:

100.00%

Publisher:

Abstract:

This thesis considers optimization problems arising in printed circuit board assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, collect-and-place-type gantry machines are discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one PCB, one machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods, or new heuristic algorithms are developed and applied. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests are used to verify that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications where the developed and applied solution methods are described in full detail. For all the problems stated in this thesis, the proposed methods are efficient enough to be used in PCB assembly production in practice and are readily applicable in the PCB manufacturing industry.
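As a hedged illustration of the kind of subproblem heuristic discussed above, the sketch below orders placement points with a greedy nearest-neighbour rule; the Chebyshev cost model (for a gantry head moving simultaneously in x and y) and the interface are assumptions made for the example, not the algorithms developed in the thesis.

```python
# A greedy nearest-neighbour heuristic for sequencing component placements,
# a simple instance of the subproblem heuristics described above.
def placement_sequence(points, start=(0.0, 0.0)):
    """Greedily visit all placement points, always moving to the nearest one."""
    def dist(a, b):
        # Chebyshev distance: travel time when x and y axes move simultaneously
        return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

    remaining = list(points)
    order, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: dist(pos, p))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

print(placement_sequence([(5, 5), (1, 1), (4, 2), (0, 3)]))
```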

Relevance:

100.00%

Publisher:

Abstract:

Cooling crystallization is one of the most important purification and separation techniques in the chemical and pharmaceutical industry. The product of a cooling crystallization process is always a suspension that contains both the mother liquor and the product crystals, and therefore the first process step following crystallization is usually solid-liquid separation. The properties of the produced crystals, such as their size and shape, can be affected by modifying the conditions during the crystallization process. The filtration characteristics of solid-liquid suspensions, on the other hand, are strongly influenced by the particle properties as well as the properties of the liquid phase. It is thus obvious that the effect of changes made to the crystallization parameters can also be seen in the course of the filtration process. Although the relationship between crystallization and filtration is widely recognized, the number of publications where these unit operations have been considered in the same context is surprisingly small. This thesis explores the influence of different crystallization parameters in an unseeded batch cooling crystallization process on the external appearance of the product crystals and on the pressure filtration characteristics of the obtained product suspensions. Crystallization experiments are performed by crystallizing sulphathiazole (C9H9N3O2S2), a well-known antibiotic agent, from different mixtures of water and n-propanol in an unseeded batch crystallizer. The crystallization parameters studied are the composition of the solvent, the cooling rate in experiments carried out with a constant cooling rate throughout the whole batch, the cooling profile, and the mixing intensity during the batch. The obtained crystals are characterized using an automated image analyzer, and the crystals are separated from the solvent in constant-pressure batch filtration experiments. The separation characteristics of the suspensions are described by means of the average specific cake resistance and the average filter cake porosity, and the compressibilities of the cakes are also determined. The results show that fairly large differences can be observed between the size and shape of the crystals, and it is also shown experimentally that the changes in crystal size and shape have a direct impact on the pressure filtration characteristics of the crystal suspensions. The experimental results are used to create a procedure for estimating the filtration characteristics of solid-liquid suspensions from the particle size and shape data obtained by image analysis. Multilinear partial least squares regression (N-PLS) models are created between the filtration parameters and the particle size and shape data, and the results presented in this thesis show that relatively clear correlations can be detected with the obtained models.
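For context, the average specific cake resistance mentioned above is conventionally evaluated from the classical constant-pressure cake filtration equation; the relation below is the standard textbook form, quoted here rather than taken from the thesis.

```latex
% Constant-pressure cake filtration: plotting t/V against V gives the
% average specific cake resistance \alpha from the slope and the filter
% medium resistance R_m from the intercept. Here \mu is the filtrate
% viscosity, c the solids mass deposited per unit filtrate volume,
% A the filtration area and \Delta p the applied pressure difference.
\frac{t}{V} = \frac{\mu\,\alpha\,c}{2\,A^{2}\,\Delta p}\,V
            + \frac{\mu\,R_{m}}{A\,\Delta p}
```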

Relevance:

100.00%

Publisher:

Abstract:

Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied for improving the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed-bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography, with an assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows predicting the feasible range of operating parameters that lead to the desired product purities. It can be applied for calculating first estimates of optimal operating conditions, for analysing process robustness, and for the early-stage evaluation of different process alternatives. The design method is used to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable to high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects.
The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach works better the higher the column efficiency and the lower the purity constraints.
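For reference, the competitive Langmuir adsorption isotherm named above has the standard binary form below, with Henry coefficients a_i and interaction parameters b_i; this is the textbook model, quoted here rather than reproduced from the thesis.

```latex
% Competitive Langmuir isotherm for a binary mixture: the adsorbed-phase
% concentration q_i of component i as a function of the fluid-phase
% concentrations c_1 and c_2.
q_i = \frac{a_i\,c_i}{1 + b_1 c_1 + b_2 c_2}, \qquad i = 1, 2
```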

Relevance:

100.00%

Publisher:

Abstract:

Post-testicular sperm maturation occurs in the epididymis. The ion concentration and proteins secreted into the epididymal lumen, together with testicular factors, are believed to be responsible for the maturation of spermatozoa. Disruption of the maturation of spermatozoa in the epididymis provides a promising strategy for generating a male contraceptive. However, little is known about the proteins involved. For drug development, it is also essential to have tools to study the function of these proteins in vitro. One approach to screening novel targets is to study the secretory products of the epididymis or the G protein-coupled receptors (GPCRs) that are involved in the maturation process of the spermatozoa. The modified Ca2+ imaging technique used to monitor release from PC12 pheochromocytoma cells can also be applied to monitor secretory products involved in the maturational processes of spermatozoa. PC12 pheochromocytoma cells were chosen for the evaluation of this technique as they release catecholamines from their cell body, thus behaving like endocrine secretory cells. The results of the study demonstrate that depolarisation of nerve growth factor-differentiated PC12 cells releases factors which activate nearby, randomly distributed HEL erythroleukemia cells. Thus, during the release process, the ligands reach concentrations high enough to activate receptors even in cells some distance from the release site. This suggests that communication between randomly dispersed cells is possible even if the actual quantities of transmitter released are extremely small. The development of a novel method to analyse GPCR-dependent Ca2+ signalling in living slices of mouse caput epididymis provides an additional tool for screening for drug targets. With this technique it was possible to analyse functional GPCRs in the epithelial cells of the ductus epididymis. The results revealed that both P2X- and P2Y-type purinergic receptors are responsible for the rapid and transient Ca2+ signal detected in the epithelial cells of caput epididymides. Immunohistochemical and reverse transcriptase-polymerase chain reaction (RT-PCR) analyses showed the expression of at least P2X1, P2X2, P2X4 and P2X7, and P2Y1 and P2Y2 receptors in the epididymis. Searching for epididymis-specific promoters for transgene delivery into the epididymis is of key importance for the development of specific models for drug development. We used EGFP as the reporter gene to identify proper promoters for delivering transgenes into the epithelial cells of the mouse epididymis in vivo. Our results revealed that the 5.0 kb murine Glutathione peroxidase 5 (GPX5) promoter can be used to target transgene expression into the epididymis, while the 3.8 kb Cysteine-rich secretory protein-1 (CRISP-1) promoter can be used to target transgene expression into the testis. Although the visualisation of EGFP in living cells in culture usually poses few problems, the detection of EGFP in tissue sections can be more difficult because soluble EGFP molecules can be lost if the cell membrane is damaged by freezing, sectioning, or permeabilisation. Furthermore, the fluorescence of EGFP is dependent on its conformation. Therefore, fixation protocols that immobilise EGFP may also destroy its usefulness as a fluorescent reporter. We therefore developed novel tissue preparation and preservation techniques for EGFP.
In addition, fluorescence spectrophotometry with epididymal epithelial cells in suspension revealed the expression of functional purinergic, adrenergic, cholinergic and bradykinin receptors in these cell lines (mE-Cap27 and mE-Cap28). In conclusion, we developed new tools for studying the role of the epididymis in sperm maturation. We developed a new technique to analyse GPCR dependent Ca2+ signalling in living slices of mouse caput epididymis. In addition, we improved the method of detecting reporter gene expression. Furthermore, we characterised two epididymis-specific gene promoters, analysed the expression of GPCRs in epididymal epithelial cells and developed a novel technique for measurement of secretion from cells.

Relevance:

100.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary and key object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: the deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources.
Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, but such queries are essential for many web searches, especially in the area of e-commerce. In this way, the automation of querying and retrieving data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
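A minimal illustration of the task being automated here: submitting a query through a web database's search form and scraping the result pages. The URL, parameter names and CSS selector are hypothetical placeholders, and this sketch deliberately ignores the JavaScript-rich interfaces the I-Crawler is designed to handle; it is not the thesis's query language or crawler.

```python
# Hedged sketch: querying a web database through its search form and
# extracting result rows from the returned dynamic page. All names here
# (URL, form fields, ".result" selector) are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

def query_form(action_url, params):
    page = requests.get(action_url, params=params, timeout=30)
    soup = BeautifulSoup(page.text, "html.parser")
    # The selector depends entirely on the target site's result markup.
    return [row.get_text(strip=True) for row in soup.select(".result")]

hits = query_form("http://example.org/search", {"q": "deep web", "lang": "en"})
```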

Relevance:

100.00%

Publisher:

Abstract:

Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and the automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results; Part II consists of the five original research articles that are the main contribution of this thesis.
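A minimal sketch of the regularized pairwise least-squares objective that the abstract describes, written naively over all example pairs for a linear scoring model. The thesis's contribution is precisely the matrix-algebra shortcuts that avoid this quadratic pair enumeration, so the code below is illustrative only, not the RankRLS training algorithm.

```python
# Naive illustration of a regularized pairwise least-squares ranking
# objective: squared error between observed and predicted score
# differences over all example pairs, plus a quadratic weight penalty.
import numpy as np
from itertools import combinations

def pairwise_ls_objective(w, X, y, lam=1.0):
    s = X @ w  # predicted relevance scores of a linear model
    pair_loss = sum(((y[i] - y[j]) - (s[i] - s[j])) ** 2
                    for i, j in combinations(range(len(y)), 2))
    return pair_loss + lam * float(w @ w)
```

In practice one would hand this objective to a generic optimiser; RankRLS instead solves the corresponding problem in closed form via the computational shortcuts developed in the thesis.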

Relevance:

100.00%

Publisher:

Abstract:

Hen eggs and oats (Avena sativa) are important raw materials for the food industry. Today, instead of merely satisfying the feeling of hunger, consumers are asking for healthier, biologically active and environmentally friendly products. This growing awareness of consumers' increasing demands presents a great challenge to the food industry to develop more sustainable products and to utilise modern and effective techniques. The modification of yolk fatty acid composition by means of feed supplements is well understood. Egg yolk phospholipids are polar lipids and are used in several applications including food, cosmetics, pharmaceuticals, and special nutrients. Egg yolk phospholipids are excellent emulsifiers, typically sold as mixtures of phospholipids, triacylglycerols, and cholesterol. However, highly purified and characterised phospholipids are needed in several sophisticated applications. Industrial fractionation of phospholipids is usually based on organic solvents; with these fractionation techniques, harmful residues of organic solvents may cause problems in further processing. The objective of the present study was to investigate methods to improve the functional properties of eggs, to develop techniques for isolating the fractions responsible for the specific functional properties of egg yolk lipids, and to apply the developed techniques to plant-based materials as well. Fractionation techniques based on supercritical fluids were utilised for the separation of the lipid fractions of eggs and oats. The chemical and functional characterisation of the fractions was performed, and the produced oat polar lipid fractions were tested as a protective barrier in encapsulation processes. Modifying the fatty acid composition of egg yolks with different types of oil supplements in the feed had no effect on their functional or sensory properties. Based on the results of the functional and sensory analyses, it is evident that eggs with modified fatty acid compositions are usable in several industrial applications, including liquid egg yolk products used in mayonnaise and salad dressings. Egg yolk powders were utilised in different kinds of fractionation processes. The precipitation method developed in this study resembles the supercritical anti-solvent method typically used in the pharmaceutical industry. With pilot-scale supercritical fluid processes, non-polar lipids and polar lipids were successfully separated from commercially produced egg yolk powder and oat flakes. The egg- and oat-based polar lipid fractions showed high purities, and the corresponding delipidated fractions produced using supercritical techniques offer interesting starting materials for the further production of bioactive compounds. The oat polar lipid fraction was especially rich in digalactosyldiacylglycerol, which was shown to have valuable functional properties in the encapsulation of probiotics.

Relevance:

100.00%

Publisher:

Abstract:

Utilization of biomass-based raw materials for the production of chemicals and materials is attracting increasing interest. Due to the complex nature of biomass, a major challenge in its refining is the development of efficient fractionation and purification processes. Preparative chromatography and membrane filtration are selective, energy-efficient separation techniques which offer great potential for biorefinery applications. Both of these techniques have been widely studied; on the other hand, only a few process concepts that combine the two methods have been presented in the literature. The aim of this thesis was to find the possible synergetic effects provided by combining chromatographic and membrane separations, with a particular interest in biorefinery separation processes. Such knowledge could be used in the development of new, more efficient separation processes for isolating valuable compounds from the complex feed solutions that are typical of the biorefinery environment. Separation techniques can be combined in various ways, from simple sequential coupling arrangements to fully integrated hybrid processes. In this work, different types of combined separation processes as well as conventional chromatographic separation processes were studied for separating small molecules such as sugars and acids from biomass hydrolysates and spent pulping liquors. The combination of chromatographic and membrane separation was found capable of recovering high-purity products from complex solutions. For example, the hydroxy acids of black liquor were successfully recovered using a novel multistep process based on ultrafiltration and size-exclusion chromatography. Unlike any other separation process previously suggested for this challenging separation task, the new process concept does not require an acidification pretreatment, and it could thus be more readily integrated into a pulp-mill biorefinery. In addition to the combined separation processes, steady-state recycling chromatography, which had earlier been studied only for small-scale separations of high-value compounds, was found to be a promising process alternative for biorefinery applications. In comparison to conventional batch chromatography, recycling chromatography provided higher product purity, increased the production rate and reduced chemical consumption in the separation of monosaccharides from biomass hydrolysates. In addition, a significant further improvement in process performance was obtained when a membrane filtration unit was integrated with recycling chromatography. In the light of the results of this work, separation processes based on combining membrane and chromatographic separations could be effectively applied to different biorefinery applications. The main challenge remains the development of inexpensive separation materials which are resistant to harsh process conditions and fouling.