33 results for Separation (Technology)
in Helda - Digital Repository of the University of Helsinki
Abstract:
Miniaturization of analytical instrumentation is attracting growing interest in response to the explosive demand for rapid yet sensitive analytical methods and low-cost, highly automated instruments for pharmaceutical and bioanalyses and environmental monitoring. Microfabrication technology, in particular, has enabled fabrication of low-cost microdevices with a high degree of integrated functions, such as sample preparation, chemical reaction, separation, and detection, on a single microchip. These miniaturized total chemical analysis systems (microTAS or lab-on-a-chip) can also be arrayed for parallel analyses in order to accelerate sample throughput. Other motivations include reduced sample consumption and waste production as well as increased speed of analysis. One of the most promising hyphenated techniques in analytical chemistry is the combination of a microfluidic separation chip and a mass spectrometer (MS). In this work, emerging polymer microfabrication techniques, ultraviolet lithography in particular, were exploited to develop a capillary electrophoresis (CE) separation chip which incorporates a monolithically integrated electrospray ionization (ESI) emitter for efficient coupling with MS. The epoxy photoresist SU-8 was adopted as the structural material and characterized with respect to its physicochemical properties relevant to chip-based CE and ESI/MS, namely surface charge, surface interactions, heat transfer, and solvent compatibility. As a result, SU-8 was found to be a favorable substitute for the more commonly used glass and silicon in microfluidic applications. In addition, infrared (IR) thermography was introduced as a direct, non-intrusive method to examine heat transfer and thermal gradients during microchip CE. The IR data were validated through numerical modeling.
The analytical performance of SU-8-based microchips was established for qualitative and quantitative CE-ESI/MS analysis of small drug compounds, peptides, and proteins. The CE separation efficiency was found to be similar to that of commercial glass microchips and conventional CE systems. Typical analysis times were only 30-90 s per sample, indicating feasibility for high-throughput analysis. Moreover, a mass detection limit at the low-attomole level, as low as about 10^5 molecules, was achieved utilizing MS detection. The SU-8 microchips developed in this work could also be mass-produced at low cost and with nearly identical performance from chip to chip. Before this work, attempts to combine CE separation with ESI in a chip-based system amenable to batch fabrication and capable of high, reproducible analytical performance had not been successful. Thus, the CE-ESI chip developed in this work is a substantial step toward lab-on-a-chip technology.
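As a sanity check on the detection limit quoted above: one attomole is 10^-18 mol, so a low-attomole amount corresponds to roughly 10^5 molecules via Avogadro's number. A minimal conversion sketch (not taken from the thesis itself):

```python
AVOGADRO = 6.02214076e23  # molecules per mole

def attomoles_to_molecules(amount_amol):
    """Convert an amount in attomoles (1e-18 mol) to a molecule count."""
    return amount_amol * 1e-18 * AVOGADRO

# A fraction of an attomole is already on the order of 1e5 molecules:
print(attomoles_to_molecules(0.2))  # ≈ 1.2e5 molecules
```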
Abstract:
Fluid bed granulation is a key pharmaceutical process which improves many of the powder properties required for tablet compression. The fluid bed granulation process comprises dry mixing, wetting and drying phases. Granules of high quality can be obtained by understanding and controlling the critical process parameters through timely measurements. Process analytical technology (PAT) encompasses the integrated analysis of physical process measurements and particle size data from a fluid bed granulator. Recent regulatory guidelines strongly encourage the pharmaceutical industry to apply scientific and risk management approaches to the development of a product and its manufacturing process. The aim of this study was to utilise PAT tools to increase process understanding of fluid bed granulation and drying. Inlet air humidity levels and granulation liquid feed affect powder moisture during fluid bed granulation, and moisture influences many process, granule and tablet qualities. The approach in this thesis was to identify sources of variation that are mainly related to moisture, to determine correlations and relationships, and to utilise the PAT and design space concepts for fluid bed granulation and drying. Monitoring material behaviour in a fluidised bed has traditionally relied on the observational ability and experience of an operator. There has been a lack of good criteria for characterising material behaviour during the spraying and drying phases, even though the entire performance of the process and the end-product quality depend on it. The granules were produced in an instrumented bench-scale Glatt WSG5 fluid bed granulator. The effect of inlet air humidity and granulation liquid feed on temperature measurements at different locations of the fluid bed granulator system was determined. This revealed dynamic changes in the measurements and enabled finding the most suitable sites for process control.
The moisture originating from the granulation liquid and inlet air affected the temperature of the mass and the pressure difference over the granule bed. Moreover, the effects of inlet air humidity and granulation liquid feed rate on granule size were evaluated, and compensatory techniques were used to optimize particle size. Various end-point indication techniques for drying were compared. The ∆T method, which is based on thermodynamic principles, eliminated the effects of humidity variations and gave the most precise estimate of the drying end-point. The influence of fluidisation behaviour on drying end-point detection was determined. The feasibility of the ∆T method, and thus the similarity of end-point moisture contents, was found to depend on the variation in fluidisation between manufacturing batches. A novel parameter that describes the behaviour of material in a fluid bed was developed. The flow rate of the process air and the turbine fan speed were used to calculate this parameter, and it was compared with the fluidisation behaviour and the particle size results. Design space process trajectories for smooth fluidisation were determined based on the fluidisation parameters. With this design space it is possible to avoid excessive fluidisation as well as improper fluidisation and bed collapse. Furthermore, various process phenomena and failure modes were observed with the in-line particle size analyser: both rapid increases and decreases in granule size could be monitored in a timely manner. The fluidisation parameter and the pressure difference over the filters were also found to reflect particle size once the granules had formed. The various physical parameters evaluated in this thesis give valuable information on fluid bed process performance and increase process understanding.
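The abstract does not spell out the ∆T computation, but detectors of this general type exploit the fact that the gap between inlet air temperature and product (mass) temperature shrinks as evaporative cooling ceases. The following is a hypothetical sketch of such an end-point detector; the threshold value and signal names are assumptions for illustration, not the thesis's actual method.

```python
def drying_endpoint_index(t_inlet, t_mass, threshold=2.0):
    """Return the first sample index at which the inlet-to-mass
    temperature difference (delta-T, same units as the inputs) has
    dropped to `threshold` or below, i.e. evaporative cooling has
    essentially stopped; None if the end-point is never reached."""
    for i, (ti, tm) in enumerate(zip(t_inlet, t_mass)):
        if ti - tm <= threshold:
            return i
    return None

# Toy trace: constant 60 °C inlet air, product warming as the bed dries.
print(drying_endpoint_index([60] * 5, [30, 40, 50, 57, 59]))  # → 4
```

Because the detector watches a temperature difference rather than an absolute temperature, variations in inlet air humidity shift both signals together and largely cancel out, which is the attraction of a ∆T-style criterion.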
Abstract:
This thesis discusses the use of sub- and supercritical fluids as the medium in extraction and chromatography. Super- and subcritical extraction was used to separate essential oils from the herbal plant Angelica archangelica. The effect of the extraction parameters was studied, and sensory analyses of the extracts were performed by an expert panel. The results of the sensory analyses were compared with the analytically determined contents of the extracts. Sub- and supercritical fluid chromatography (SFC) was used to separate and purify high-value pharmaceuticals. Chiral SFC was used to separate the enantiomers of racemic mixtures of pharmaceutical compounds. Very low (cryogenic) temperatures were applied to substantially enhance the separation efficiency of chiral SFC. The thermodynamic aspects affecting the resolving ability of chiral stationary phases are briefly reviewed. The production rate, which is a key factor in industrial chromatography, was optimized by empirical multivariate methods. A general linear model was used to optimize the separation of omega-3 fatty acid ethyl esters from esterified fish oil using reversed-phase SFC. Chiral separation of racemic mixtures of guaifenesin and ferulic acid dimer ethyl ester was optimized using the response surface method with three variables at a time. It was found that by optimizing four variables (temperature, load, flow rate and modifier content) the production rate of the chiral resolution of racemic guaifenesin by cryogenic SFC could be increased severalfold compared with published results for a similar application. A novel pressure-compensated design of an industrial high-pressure chromatographic column was introduced, using technology developed in building the deep-sea submersibles Mir 1 and 2. A demonstration SFC plant was built, and the immunosuppressant drug cyclosporine A was purified to meet the requirements of the US Pharmacopoeia.
A smaller semi-pilot-scale column of similar design was used for the cryogenic chiral separation of the aromatase inhibitor Finrozole for use in its phase 2 development.
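The abstract does not reproduce the optimization details. As an illustration of the kind of multivariate search involved, the sketch below evaluates a purely hypothetical quadratic response surface for production rate over the four variables named above (temperature, load, flow rate, modifier content) and picks the grid optimum; all coefficients and ranges are invented for the example.

```python
import itertools

def production_rate(temp_c, load_mg, flow_ml_min, modifier_pct):
    """Hypothetical quadratic response surface with a cryogenic optimum:
    production rate peaks near -40 °C, 10 mg load, 4 mL/min, 15 % modifier."""
    return (100.0
            - 0.02 * (temp_c + 40) ** 2
            - 0.5 * (load_mg - 10) ** 2
            - 2.0 * (flow_ml_min - 4) ** 2
            - 0.8 * (modifier_pct - 15) ** 2)

# Exhaustive grid search over the four variables.
grid = itertools.product(range(-80, 21, 10),  # temperature, °C
                         range(2, 21, 2),     # column load, mg
                         range(1, 9),         # flow rate, mL/min
                         range(5, 31, 5))     # modifier content, %
best = max(grid, key=lambda v: production_rate(*v))
print(best)  # → (-40, 10, 4, 15)
```

In practice a response surface is fitted to a designed set of experiments rather than evaluated exhaustively, but the principle, locating the joint optimum of several interacting variables on a fitted quadratic model, is the same.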
Abstract:
There is a need for better understanding of the processes and for new ideas to develop traditional pharmaceutical powder manufacturing procedures. Process analytical technology (PAT) has been developed to improve understanding of the processes and to establish methods to monitor and control them. The interest is in maintaining and even improving the whole manufacturing process and the final products in real time. Process understanding can be a foundation for innovation and continuous improvement in pharmaceutical development and manufacturing. New methods are needed to increase the quality and safety of the final products faster and more efficiently than ever before. Real-time process monitoring demands tools which enable fast and noninvasive measurements with sufficient accuracy. Traditional quality control methods have been laborious and time-consuming, and they are performed off-line, i.e. the analysis is removed from the process area. Vibrational spectroscopic methods respond to this challenge, and their utilisation has increased considerably during the past few years. In addition, other methods such as colour analysis can be utilised in noninvasive real-time process monitoring. In this study three pharmaceutical processes were investigated: drying, mixing and tabletting; in addition, tablet properties were evaluated. Real-time monitoring was performed with NIR and Raman spectroscopies, colour analysis and particle size analysis, and compression data during tabletting were evaluated using mathematical modelling. These methods were suitable for real-time monitoring of pharmaceutical unit operations, and they increase knowledge of the critical parameters of the processes and of the phenomena occurring during operations. They can improve our process understanding and therefore, finally, enhance the quality of the final products.
Abstract:
Strategies of scientific, question-driven inquiry are considered important cultural practices that should be taught in schools and universities. The present study investigates multiple efforts to implement a model of Progressive Inquiry and related Web-based tools in primary, secondary and university-level education, in order to develop guidelines for educators in promoting students' collaborative inquiry practices with technology. The research consists of four studies. In Study I, the aim was to investigate how a human tutor contributed to the university students' collaborative inquiry process through virtual forums, and how the influence of the tutoring activities is demonstrated in the students' inquiry discourse. Study II examined an effort to implement technology-enhanced progressive inquiry as a distance working project in a middle school context. Study III examined multiple teachers' methods of organizing progressive inquiry projects in primary and secondary classrooms through a generic analysis framework. In Study IV, a design-based research effort consisting of four consecutive university courses applying progressive inquiry pedagogy was retrospectively re-analyzed in order to develop the generic design framework. The results indicate that appropriate teacher support for students' collaborative inquiry efforts involves an interplay between spontaneity and structure. Careful consideration should be given to the content mastery, critical working strategies or essential knowledge practices that the inquiry approach is intended to promote. In particular, those elements of students' activities should be structured and directed which are central to the aim of Progressive Inquiry but which the students do not recognize or demonstrate spontaneously, and which are usually not taken into account in existing pedagogical methods or educational conventions.
Such elements are, e.g., productive co-construction activities; sustained engagement in improving the ideas and explanations produced; critical reflection on the adopted inquiry practices; and sophisticated use of modern technology for knowledge work. Concerning the scaling-up of inquiry pedagogy, it was concluded that an individual teacher can apply the principles of Progressive Inquiry in his or her own teaching in many innovative ways, even under various institutional constraints. The Pedagogical Infrastructure Framework developed here enabled the recognition and examination of some central features, and their interplay, in the designs of the inquiry units examined. The framework may help to recognize and critically evaluate the invisible learning-cultural conventions in various educational settings and can mediate discussions about how to overcome or change them.
Abstract:
The basic goal of a proteomic microchip is to achieve efficient and sensitive high-throughput protein analyses, automatically carrying out several measurements in parallel. A protein microchip would detect either a single protein or a large set of proteins for diagnostic purposes, basic proteome analysis or functional analysis. Such analyses include, e.g., interactomics, general protein expression studies, and the detection of structural alterations or secondary modifications. Visualization of the results may rely on simple immunoreactions, general or specific labelling, or mass spectrometry. For this purpose we have manufactured chip-based proteome analysis devices that utilize classical polymer gel electrophoresis technology to run one- and two-dimensional gel electrophoresis separations of proteins at a smaller scale. In total, we manufactured three functional prototypes: one performed a miniaturized one-dimensional gel electrophoresis (1-DE) separation, while the second and third performed two-dimensional gel electrophoresis (2-DE) separations. These microchips were successfully used to separate and characterize a set of predefined standard proteins as well as cell and tissue samples. The miniaturized 2-DE (ComPress-2DE) chip also presents a novel way of combining the first- and second-dimension separations, thus avoiding manual handling of the gels, eliminating cross-contamination, and making analyses faster and more repeatable. All the prototypes showed the advantages of miniaturization over commercial devices, such as fast analysis, low sample and reagent consumption, high sensitivity, high repeatability and inexpensive operation. All these instruments have the potential to be fully automated owing to their easy-to-use set-up.
Abstract:
The present study investigated potato starches and polyols used to prepare edible films. The amylose content and gelatinization properties of various potato starches extracted from different potato cultivars were determined. The amylose content of the potato starches varied between 11.9 and 20.1%. Onset temperatures of gelatinization of the potato starches in excess water, determined using differential scanning calorimetry (DSC), varied from 58 to 61°C independently of the amylose content. The crystallinity of selected native starches with low, medium and high amylose content was determined by X-ray diffraction. The relative crystallinity was found to be around 10-13% in selected native potato starches containing 13-17% water. The glass transition temperature, crystallization and melting behavior, and relaxations of the polyols erythritol, sorbitol and xylitol were determined using DSC, dielectric analysis (DEA) and dynamic mechanical analysis (DMA). The glass transition temperatures of xylitol and sorbitol decreased as a result of water plasticization. Anhydrous amorphous erythritol crystallized rapidly. Edible films were obtained from solutions containing gelatinized starch, plasticizer (polyol or binary polyol mixture) and water by casting and evaporating the water at 35°C. The study investigated the effects of plasticizer type and content on the physical and mechanical properties of edible films stored at various relative water vapor pressures (RVP). The crystallinity of edible films with low, medium and high amylose content was determined by X-ray diffraction, and the films were found to be practically amorphous. The water sorption and water vapor permeability (WVP) of the films were affected by the type and content of plasticizer. Water vapor permeability of the films increased with increasing plasticizer content and storage RVP.
Generally, Young's modulus and tensile strength decreased with increasing plasticizer and water content, with a concurrent increase in the elongation at break of the films. High contents of xylitol and sorbitol led to changes in the physical and mechanical properties of the films, probably due to phase separation and crystallization of xylitol and sorbitol; this was not observed when binary polyol mixtures were used as plasticizers. The mechanical properties and the water vapor permeability (WVP) of the films were found to be independent of the amylose content.
Abstract:
Milk microfiltration (0.05-0.2 µm) is a membrane separation technique which divides milk components into casein-enriched and native whey fractions. Hitherto, the effect of intensive microfiltration, including a diafiltration step, on both cheese and whey processing had not been studied. The microfiltration performance of skimmed milk was studied with polymeric and ceramic MF membranes. The effects of decreased lactose, whey protein and ash content of milk on cheese milk quality and ripening were studied. The effects of cheese milk modification by microfiltration on the milk coagulation properties, cheese recovery yield, cheese composition, ripening and sensory quality, as well as on the whey recovery yield and composition, were studied. The functional properties of whey protein concentrate from native whey were studied, and the detailed compositions of whey protein concentrate powders made from cheese wheys after cheese milk pretreatments, namely high-temperature heat treatment (HH), microfiltration (MF) and ultrafiltration (UF), were compared. The polymeric spiral-wound microfiltration membranes studied had 38.5% lower energy consumption, 30.1% higher retention of whey proteins in the milk retentate and 81.9% lower permeate flux values compared with the ceramic membranes. All the studied microfiltration membranes were able to separate the main whey proteins from skimmed milk. The optimal lactose content of Emmental cheese milk exceeded 3.2%, and reduction of the whey protein and ash content of cheese milk at high concentration factor (CF) values increased the rate of cheese ripening. Reduction of the whey protein content of cheese milk increased the concentration of caseinomacropeptide (CMP), as a proportion of total proteins, in the cheese whey. Reduction of the milk whey protein, lactose and ash content reduced the rennet clotting time of milk and increased the firmness of the coagulum. Cheese yield calculated from raw milk to cheese was lower with microfiltrated milks due to native whey production.
The amounts of α-lactalbumin (α-LA) and β-lactoglobulin (β-LG) were significantly higher in the reference whey, indicating that the HH, MF and UF milk pretreatments decrease the amounts of these valuable whey proteins in whey. Even low CF values in milk microfiltration (CF 1.4) reduced the nutritional value of the cheese whey. From the point of view of the utilization of milk components, it would be beneficial if the amount of native whey and the CMP content of cheese whey could be maximized. Whey protein concentrate powders made from native whey had excellent functional properties, and their detailed amino acid composition differed from that of cheese whey protein concentrate powders.
Abstract:
Mannans are abundant plant polysaccharides found in the endosperm of certain leguminous seeds (guar gum galactomannan, GG; locust bean gum galactomannan, LBG), in the tuber of the konjac plant (konjac glucomannan, KGM), and in softwoods (galactoglucomannan, GGM). This study focused on the effects of the chemical structure of mannans on their film-forming and emulsion-stabilizing properties. Special focus was on spruce GGM, which is an interesting new product from forest biorefineries. A plasticizer was needed for the formation of films from mannans other than KGM and the optimal proportion was 40% (w/w of polymers) glycerol or sorbitol. Galactomannans with lower galactose content (LBG, modified GG) produced films with higher elongation at break and tensile strength. The mechanical properties of GG-based films were improved by decreasing the degree of polymerization of the polysaccharide with moderate mannanase treatments. The improvement of mechanical properties of GGM-based films was sought by blending GGM with each of poly(vinyl alcohol) (PVOH), corn arabinoxylan (cAX), and KGM. Adding other polymers increased the elongation at break of GGM blend films. The tensile strength of films increased with increasing amounts of PVOH and KGM, but the effect of cAX was the opposite. Dynamic mechanical analysis showed two separate loss modulus peaks for blends of GGM and PVOH, but a single peak for all other films. Optical and scanning electron microscopy confirmed good miscibility of GGM with cAX and KGM. In contrast, films blended from GGM and PVOH showed phase separation. GGM and KGM were mixed with cellulose nanowhiskers (CNW) to form composite films. Addition of CNW to KGM-based films induced the formation of fiberlike structures with lengths of several millimeters. In GGM-based films, rodlike structures with lengths of tens of micrometers were formed. 
Interestingly, the notable differences in film structure did not appear to be related to the mechanical and thermal properties of the films. The permeability properties of GGM-based films were compared with those of films from the commercial mannans KGM, GG, and LBG. GGM-based films had the lowest water vapor permeability of the mannan films studied. The oxygen permeability of GGM films was of the same magnitude as that of a commercial polyethylene / ethylene vinyl alcohol / polyethylene laminate film. The aroma permeability of GGM films was low. All films were transparent in the visible region, but GGM films blocked light transmission in the ultraviolet region of the spectrum. The stabilizing effect of GGM on a model beverage emulsion system was studied and compared with that of GG, LBG, KGM, and cAX. In addition, GG was enzymatically modified in order to examine the effect of the degree of polymerization and the degree of substitution of galactomannans on emulsion stability. Use of GGM increased the turbidity of the emulsions both immediately after preparation and after storage for up to 14 days at room temperature. GGM emulsions had higher turbidity than emulsions containing the other mannans. Increasing the storage temperature to +45 °C led to rapid emulsion breakdown, but a decrease in storage temperature increased emulsion stability after 14 days. A low degree of polymerization and a high degree of substitution of the modified galactomannans were associated with a decrease in emulsion turbidity.
Abstract:
Due to recent developments in CCD technology, aerial photography is now slowly changing from film to digital cameras. This new aspect of remote sensing both allows and requires new automated analysis methods. Basic research on the reflectance properties of natural targets is needed so that computerized processes can be fully utilized. For this reason an instrument was developed at the Finnish Geodetic Institute for measuring the multiangular reflectance of small remote sensing targets, e.g. forest understorey or asphalt. The Finnish Geodetic Institute Field Goniospectrometer (FiGIFiGo) is a portable device that is operated by one or two persons. It can be reassembled at a new location in 15 minutes, after which a target's multiangular reflectance can be measured in 10-30 minutes (with one illumination angle). FiGIFiGo has an effective spectral range of approximately 400-2000 nm. The measurements can be made either outdoors in sunlight or in the laboratory with a 1000 W QTH light source. In this thesis FiGIFiGo is introduced and the theoretical basis of such reflectance measurements is discussed. A new method is introduced for extracting subcomponent proportions from the reflectance of a mixture sample, e.g. for retrieving the proportion of lingonberry reflectance in an observation of a lingonberry-lichen sample. This method was tested by conducting a series of measurements on the reflectance properties of artificial samples. The component separation method yielded sound results and brought up interesting aspects of the targets' reflectances. The method and the results still need to be verified in further studies, but the preliminary results imply that this method could be a valuable tool in the analysis of such mixture samples.
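The abstract does not give the algorithm, but subcomponent extraction of this kind is commonly formulated as a linear mixing model: the observed reflectance is a weighted sum of the pure components' reflectances, and the weight is fitted by least squares over all spectral bands. A minimal two-component sketch under that assumption (not the thesis's actual implementation):

```python
def mixture_fraction(r_mix, r_a, r_b):
    """Least-squares estimate of the fraction f of component A in a
    two-component linear mixture, r_mix ≈ f * r_a + (1 - f) * r_b,
    fitted jointly over all spectral bands (list inputs, one
    reflectance value per band)."""
    num = sum((m - b) * (a - b) for m, a, b in zip(r_mix, r_a, r_b))
    den = sum((a - b) ** 2 for a, b in zip(r_a, r_b))
    return num / den

# Two bands; the mixture is 25 % component A, 75 % component B.
print(mixture_fraction([0.35, 0.45], [0.8, 0.6], [0.2, 0.4]))  # ≈ 0.25
```

With more than two components the same idea generalizes to a constrained multivariate least-squares fit over the endmember spectra.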
Abstract:
In this work, separation methods were developed for the analysis of the anthropogenic transuranium elements plutonium, americium, curium and neptunium in environmental samples contaminated by global nuclear weapons testing and the Chernobyl accident. The analytical methods utilized in this study are based on extraction chromatography. Highly varying atmospheric plutonium isotope concentrations and activity ratios were found both at Kurchatov (Kazakhstan), near the former Semipalatinsk test site, and at Sodankylä (Finland). The origin of the plutonium is almost impossible to identify at Kurchatov, since hundreds of nuclear tests were performed at the Semipalatinsk test site. In Sodankylä, the plutonium in the surface air originated from nuclear weapons testing conducted mostly by the USSR and the USA before the sampling year 1963. The variation in americium, curium and neptunium concentrations was also great in peat samples collected in southern and central Finland in 1986, immediately after the Chernobyl accident. The main source of transuranium contamination in the peats was global nuclear test fallout, although there were wide regional differences in the fraction of Chernobyl-derived activity (of the total activity) for americium, curium and neptunium.
Abstract:
Wireless network access is gaining increased heterogeneity in terms of the types of IP-capable access technologies. This access network heterogeneity is an outcome of the incremental and evolutionary approach to building new infrastructure. The recent success of multi-radio terminals drives both the building of new infrastructure and the implicit deployment of heterogeneous access networks. Typically there is no economic reason to replace the existing infrastructure when building a new one, and the gradual migration phase usually takes several years. IP-based mobility across different access networks may involve both horizontal and vertical handovers. Depending on the networking environment, the mobile terminal may be attached to the network through multiple access technologies and may consequently send and receive packets through multiple networks simultaneously. This dissertation addresses the introduction of the IP Mobility paradigm into existing mobile operator network infrastructure that was not originally designed for multi-access and IP Mobility. We propose a model for the future wireless networking and roaming architecture that does not require revolutionary technology changes and can be deployed without unnecessary complexity. The model proposes a clear separation of operator roles: (i) access operator, (ii) service operator, and (iii) inter-connection and roaming provider. This separation allows each type of operator to have its own development path and business models without artificial bindings to the others. We also propose minimum requirements for the new model. We present the state of the art of IP Mobility, as well as the results of standardization efforts in IP-based wireless architectures. Finally, we present experimental results on IP-level mobility in various wireless operator deployments.
Abstract:
Transposons are mobile elements of genetic material that are able to move in the genomes of their host organisms using a special form of recombination called transposition. Bacteriophage Mu was the first transposon for which a cell-free in vitro transposition reaction was developed. Subsequently, the reaction has been refined, and the minimal Mu in vitro reaction is useful for generating comprehensive libraries of mutant DNA molecules that can be used in a variety of applications. To date, the functional genetics applications of Mu in vitro technology have been applied either to plasmids or to genomic regions and entire viral genomes cloned into specific vectors. This study expands the use of Mu in vitro transposition in functional genetics and genomics by describing novel methods applicable to the targeted transgenesis of the mouse and to the whole-genome analysis of bacteriophages. The methods described here are rapid, efficient, and easily applicable to a wide variety of organisms, demonstrating the potential of Mu transposition technology in the functional analysis of genes and genomes. First, an easy-to-use, rapid strategy to generate constructs for the targeted mutagenesis of mouse genes was developed. To test the strategy, a gene encoding a neuronal K+/Cl- cotransporter was mutagenised. After a highly efficient transpositional mutagenesis, the mutagenised gene fragments were cloned into a vector backbone and transferred into bacterial cells. These constructs were screened by PCR using an effective 3D matrix system. In addition to traditional knock-out constructs, the method yields hypomorphic alleles that lead to reduced expression of the target gene in transgenic mice and have since been used in a follow-up study. Moreover, a scheme was devised to rapidly produce conditional alleles from the constructs generated.
Next, an efficient strategy for the whole-genome analysis of bacteriophages was developed, based on the transpositional mutagenesis of uncloned, infective virus genomes and their subsequent transfer into susceptible host cells. Mutant viruses able to produce viable progeny were collected, and their transposon integration sites were determined in order to map genomic regions nonessential to the viral life cycle. This method, applied here to three very different bacteriophages, PRD1, ΦYeO3-12, and PM2, does not require the target genome to be cloned and is directly applicable to all DNA and RNA viruses that have infective genomes. The method yielded valuable novel information on the three bacteriophages studied, and the whole-genome data can be complemented with concomitant studies on individual genes. Moreover, the end-modified transposons constructed for this study can be used to manipulate genomes devoid of suitable restriction sites.
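The abstract only names the 3D matrix PCR screen. One common way such screens work is to array clones in an x-y-z grid and pool them along each axis, so that n³ clones need only 3n PCR reactions; a single positive clone then sits at the intersection of the three positive pools. A hypothetical sketch of that deconvolution logic (the grid layout and function names are assumptions, not the thesis's protocol):

```python
import itertools

def pool_hits(positive_clones):
    """Given grid coordinates (x, y, z) of PCR-positive clones, return
    the set of positive pool indices along each of the three axes."""
    hits = [set(), set(), set()]
    for clone in positive_clones:
        for axis, idx in enumerate(clone):
            hits[axis].add(idx)
    return hits

def candidate_wells(hits):
    """Intersect the positive pools.  With one positive clone this
    pinpoints its well; with several, it lists candidate wells that
    must each be confirmed by an individual PCR."""
    return sorted(itertools.product(*hits))

# One positive clone at (2, 5, 7) lights up pools x=2, y=5, z=7.
print(candidate_wells(pool_hits([(2, 5, 7)])))  # → [(2, 5, 7)]
```

The ambiguity with multiple positives (two clones can produce up to eight candidate wells) is the usual trade-off of pooled screening designs.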
Abstract:
Industrial ecology is an important field of sustainability science that can be applied to the study of environmental problems in a policy-relevant manner. Industrial ecology uses an ecosystem analogy: it aims at closing the loops of materials and substances while reducing resource consumption and environmental emissions. Emissions from human activities are related to human interference in material cycles. Carbon (C), nitrogen (N) and phosphorus (P) are essential elements for all living organisms, but in excess they have negative environmental impacts, such as climate change (CO2, CH4, N2O), acidification (NOx) and eutrophication (N, P). Several indirect macro-level drivers affect changes in emissions. Population and affluence (GDP/capita) often act as upward drivers of emissions. Technology, as emissions per service used, and consumption, as the economic intensity of use, may act as drivers resulting in a reduction of emissions. In addition, the development of country-specific emissions is affected by international trade. The aim of this study was to analyse changes in emissions as affected by macro-level drivers in different European case studies. ImPACT decomposition analysis (the IPAT identity) was applied as the method in papers I-III. The macro-level perspective was applied to evaluate CO2 emission reduction targets (paper II) and the sharing of greenhouse gas emission reduction targets (paper IV) in the European Union (EU27) up to the year 2020. Data for the study were mainly gathered from official statistics. In all cases, the results were discussed from an environmental policy perspective. The development of nitrogen oxide (NOx) emissions in the Finnish energy sector was analysed over a long time period, 1950-2003 (paper I). Finnish NOx emissions began to decrease in the 1980s as progress in technology, in terms of NOx/energy, curbed the impact of growth in affluence and population.
Carbon dioxide (CO2) emissions related to energy use during 1993–2004 (paper II) were analysed by country and region within the European Union. Considering energy-based CO2 emissions in the European Union, dematerialisation and decarbonisation did occur, but not sufficiently to offset population growth and the rapidly increasing affluence during 1993–2004. The development of the nitrogen and phosphorus load from aquaculture in relation to salmonid consumption in Finland during 1980–2007 was examined, including international trade in the analysis (paper III). A regional environmental issue, eutrophication of the Baltic Sea, and a marginal yet locally important source of nutrients were used as a case. Nutrient emissions from Finnish aquaculture decreased from the 1990s onwards: although population, affluence and salmonid consumption steadily increased, aquaculture technology improved and the relative share of imported salmonids increased. According to the sustainability challenge in industrial ecology, the environmental impact of the growing population size and affluence should be compensated for by improvements in technology (emissions/service used) and by dematerialisation. In the studied cases, the emission intensity of energy production could be lowered for NOx by cleaning the exhaust gases. Reorganization of the structure of energy production as well as technological innovations will be essential in lowering the emissions of both CO2 and NOx. Regarding the intensity of energy use, making the combustion of fuels more efficient and reducing energy use are essential. In reducing nutrient emissions from Finnish aquaculture to the Baltic Sea (paper III) through technology, limits set by, among others, the biological and physical properties of the cultured fish will eventually be faced. Regarding consumption, salmonids are preferred to many other protein sources. Regarding trade, increasing the proportion of imports will outsource the impacts.
Besides improving technology and dematerialisation, other viewpoints may also be needed. Reducing the total amount of nutrients cycling in energy systems and eventually contributing to NOx emissions needs to be emphasized. Considering aquaculture emissions, nutrient cycles can be partly closed by using local fish as feed to replace imported feed. In particular, the reduction of CO2 emissions in the future is a very challenging task when considering the necessary rates of dematerialisation and decarbonisation (paper II). Climate change mitigation may have to focus on greenhouse gases other than CO2 and on the potential role of biomass as a carbon sink, among others. The global population is growing and scaling up the environmental impact. Population issues and growing affluence must be considered when discussing emission reductions. Climate policy has only very recently had an influence on emissions, and strong actions are now called for in climate change mitigation. Environmental policies in general must cover all the regions related to production and impacts in order to avoid outsourcing of emissions and leakage effects. The macro-level drivers affecting changes in emissions can be identified with the ImPACT framework. Statistics for generally known macro-indicators are currently relatively well available for different countries, and the method is transparent. In the papers included in this study, a similar method was successfully applied in different types of case studies. Using transparent macro-level figures and a simple top-down approach is also appropriate in evaluating and setting international emission reduction targets, as demonstrated in papers II and IV. The projected rates of population and affluence growth are especially worth consideration in setting targets. However, sensitivities in the calculations must be carefully acknowledged. In the basic form of the ImPACT model, the economic intensity of consumption and the emission intensity of use are included.
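The point about the necessary rates of dematerialisation and decarbonisation can be made concrete with a back-of-the-envelope calculation based on the IPAT identity: given projected population and affluence growth, the combined emission intensity (the C·T term) must shrink fast enough to deliver the target cut. The growth and target figures below are hypothetical, loosely in the spirit of the EU 2020 discussion, and are not the study's numbers:

```python
def required_intensity_change(pop_growth, gdp_growth, target_cut, years):
    """Annual rate of change in combined emission intensity (the C*T
    term of the ImPACT identity) needed so that total emissions fall
    by `target_cut` over `years`, despite population and affluence
    growth. Negative result = required annual decline."""
    remaining = 1.0 - target_cut                       # emissions left at the end
    driver_growth = ((1 + pop_growth) * (1 + gdp_growth)) ** years
    return (remaining / driver_growth) ** (1 / years) - 1

# Hypothetical figures: 0.3 %/a population growth, 2 %/a growth in
# GDP per capita, and a 20 % emissions cut over a 12-year horizon.
rate = required_intensity_change(0.003, 0.02, 0.20, 12)
# Roughly a 4 %/a decline in emission intensity is needed, i.e. the
# intensity improvement must outpace the ~2.3 %/a combined growth of
# the upward drivers by the margin of the cut itself.
```

This is why modest-sounding reduction targets translate into demanding annual decarbonisation rates once population and affluence growth are factored in.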
In seeking to examine not only consumption but also international trade in more detail, imports were included in paper III. This example demonstrates well how the outsourcing of production influences domestic emissions. Country-specific production-based emissions have often been used in similar decomposition analyses. Nevertheless, trade-related issues must not be ignored.