18 results for Level of processing
em Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The effect of precipitation on the water table level in a raised bog
Abstract:
The problem of modelling the depth of anaesthesia is studied in this Master's thesis. The approach applies nonlinear dynamics theory to the analysis of EEG signals and then classifies the obtained values. The main stages of this study are the following: data preprocessing; calculation of optimal embedding parameters for phase space reconstruction; obtaining reconstructed phase portraits of each EEG signal; formation of a feature set characterising the obtained phase portraits; and classification of four different anaesthesia levels based on the previously estimated features. Classification was performed with linear and quadratic discriminant analysis, the k nearest neighbours method, and online clustering. In addition, this work provides an overview of existing approaches to anaesthesia depth monitoring, a description of the basic concepts of nonlinear dynamics theory used in this Master's thesis, and a comparative analysis of several different classification methods.
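The phase space reconstruction step can be sketched with a time-delay embedding. This is a minimal illustration of the general technique only; the embedding dimension and delay below are arbitrary, not the optimal values the thesis estimates from the EEG data.

```python
# Minimal sketch of time-delay embedding for phase space reconstruction
# of a 1-D signal. The parameters m (dimension) and tau (delay) are
# illustrative; in practice they are estimated from the data.

def delay_embed(signal, m=3, tau=5):
    """Return the list of m-dimensional delay vectors (x(t), x(t+tau), ...)."""
    n = len(signal) - (m - 1) * tau
    return [tuple(signal[i + j * tau] for j in range(m)) for i in range(n)]

if __name__ == "__main__":
    import math
    x = [math.sin(0.1 * t) for t in range(200)]
    portrait = delay_embed(x, m=3, tau=5)
    print(len(portrait), len(portrait[0]))  # 190 3
```

Each delay vector is one point of the reconstructed phase portrait, from which geometric features can then be extracted for classification.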
Abstract:
The main objective of this work is to survey the Russian food industry from the perspective of a foreign investor. The study assesses business opportunities and the competitive situation in the Russian food industry and helps foreign companies implement their business strategies in Russia. In addition to comparing the market situation in Russia with that of other transition economies, the Russian regions are compared with each other. The effects of possible WTO membership are also assessed. The legacy of communism still affects the Russian food industry and agriculture. Agricultural productivity is far from the Western level and farms lack financing. Milk and meat processors in particular suffer from a shortage of raw materials. The Russian crisis of 1998 strengthened local industrial production but caused problems for foreign investors and for companies exporting their products to Russia. The benefits that possible membership of the World Trade Organization (WTO) would bring are more significant for Russia than for its trading partners. The Russian regions are not equally developed and consumer purchasing power varies considerably. The most obvious and attractive option for the expansion of successful food companies is found in the regions where purchasing power is highest. So far, international food companies have been more interested in the countries of Eastern and Central Europe. Disposable incomes are higher in Eastern and Central Europe than in Russia, so producers there can also sell more expensive products. Labour costs in Russia will remain favourable for a few more decades and the size of the market is significant. International food companies will therefore have sufficient interest in investing in Russia in the future as well.
Abstract:
Producing operational data for end users for analytical examination causes problems for many companies. This Master's thesis aims to solve this problem at Teleste Oyj. The work is divided into three main chapters. Chapter 2 clarifies the concept of On-Line Analytical Processing (OLAP). Chapter 3 introduces some OLAP product vendors and their architectures, typical application areas, and issues to be considered when deploying OLAP. Chapter 4 presents the actual solution. The technical architecture plays a significant role in the structure of the solution; here, the Microsoft data warehousing framework has been applied. As Chapter 4 progresses, transaction processing data is transformed into information and further into end-user knowledge. The end users are equipped with an efficient, real-time analysis tool in a multidimensional environment. Although inventory turnover is used as an application example, the work does not attempt to find an optimal level for Teleste's inventories. Nevertheless, some improvement proposals are mentioned.
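The multidimensional analysis idea behind OLAP can be illustrated with a tiny roll-up in plain Python: transaction rows are aggregated along chosen dimensions, the way a cube answers "slice and dice" queries. The dimension names and sample rows are invented for the sketch and are not from the Teleste case.

```python
# Illustrative OLAP-style roll-up: aggregate a measure (sales) along any
# subset of dimensions. Data and dimensions are hypothetical.
from collections import defaultdict

rows = [
    # (product, region, month, sales)
    ("modem", "FI", "2024-01", 120.0),
    ("modem", "SE", "2024-01", 80.0),
    ("amplifier", "FI", "2024-02", 200.0),
    ("modem", "FI", "2024-02", 150.0),
]

def rollup(rows, dims):
    """Sum sales over the dimension indices in dims (0=product, 1=region, 2=month)."""
    cube = defaultdict(float)
    for row in rows:
        key = tuple(row[d] for d in dims)
        cube[key] += row[3]
    return dict(cube)

by_product = rollup(rows, [0])
by_region_month = rollup(rows, [1, 2])
print(by_product)  # {('modem',): 350.0, ('amplifier',): 200.0}
```

A real OLAP server precomputes and indexes such aggregates so that end users get the same answers interactively.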
Abstract:
Knowledge of the behaviour of cellulose, hemicelluloses, and lignin during wood and pulp processing is essential for understanding and controlling the processes. Determination of the monosaccharide composition gives information about the structural polysaccharide composition of wood material and helps when determining the quality of fibrous products. In addition, monitoring of the acidic degradation products gives information on the extent of degradation of lignin and polysaccharides. This work describes two capillary electrophoretic methods developed for the analysis of monosaccharides and for the determination of aliphatic carboxylic acids in alkaline oxidation solutions of lignin and wood. Capillary electrophoresis (CE), in its many variants, is an alternative separation technique to chromatographic methods. In capillary zone electrophoresis (CZE) the fused silica capillary is filled with an electrolyte solution. An applied voltage generates a field across the capillary. The movement of the ions under the electric field is based on the charge and hydrodynamic radius of the ions. Carbohydrates contain hydroxyl groups that are ionised only under strongly alkaline conditions. After ionisation, the structures are suitable for electrophoretic analysis and identification through either indirect UV detection or electrochemical detection. The current work presents a new capillary zone electrophoretic method relying on in-capillary reaction and direct UV detection at a wavelength of 270 nm. The method has been used for the simultaneous separation of neutral carbohydrates, including mono- and disaccharides and sugar alcohols. The in-capillary reaction produces negatively charged and UV-absorbing compounds. The optimised method was applied to real samples. The methodology is fast, since no sample preparation other than dilution is required. A new method for aliphatic carboxylic acids in highly alkaline process liquids was also developed.
The goal was to develop a method for the simultaneous analysis of the dicarboxylic acids, hydroxy acids and volatile acids that are oxidation and degradation products of lignin and wood polysaccharides. The CZE method was applied to three process cases. First, the fate of lignin under alkaline oxidation conditions was monitored by determining the level of carboxylic acids in process solutions. In the second application, the degradation of spruce wood by alkaline and by catalysed alkaline oxidation was compared by determining carboxylic acids in the process solutions. In addition, the effectiveness of membrane filtration and preparative liquid chromatography in the enrichment of hydroxy acids from black liquor was evaluated by analysing the effluents with capillary electrophoresis.
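The statement that ion movement in CZE depends on charge and hydrodynamic radius can be made concrete with the common Stokes-law approximation of electrophoretic mobility, mu = q / (6 * pi * eta * r). The ion parameters below are illustrative assumptions, not values from the thesis.

```python
# Back-of-the-envelope electrophoretic mobility under the Stokes-law
# approximation: mobility scales with charge over hydrodynamic radius.
import math

E_CHARGE = 1.602e-19   # elementary charge, C
ETA_WATER = 0.89e-3    # viscosity of water at 25 C, Pa*s

def mobility(z, radius_m, eta=ETA_WATER):
    """Electrophoretic mobility in m^2/(V*s) for charge number z."""
    return (z * E_CHARGE) / (6 * math.pi * eta * radius_m)

def migration_velocity(z, radius_m, field_v_per_m):
    """Drift velocity v = mu * E."""
    return mobility(z, radius_m) * field_v_per_m

# A hypothetical singly charged ion of 0.2 nm radius in the field of a
# 30 kV voltage over a 0.5 m capillary:
v = migration_velocity(1, 0.2e-9, 30e3 / 0.5)
```

The relation shows why doubly charged analytes of similar size migrate roughly twice as fast, which is the separation principle the CZE methods exploit.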
Abstract:
Middle ear infections (acute otitis media, AOM) are among the most common infectious diseases in childhood, their incidence being greatest at the age of 6–12 months. Approximately 10–30% of children undergo repetitive periods of AOM, referred to as recurrent acute otitis media (RAOM). Middle ear fluid during an AOM episode causes, on average, 20–30 dB of hearing loss lasting from a few days to as much as a couple of months. It is well known that even a mild permanent hearing loss has an effect on language development, but so far there is no consensus regarding the consequences of RAOM for childhood language acquisition. The results of studies on middle ear infections and language development have been partly discrepant, and the exact effects of RAOM on the developing central auditory nervous system are as yet unknown. This thesis aims to examine central auditory processing and speech production among 2-year-old children with RAOM. Event-related potentials (ERPs) extracted from electroencephalography can be used to objectively investigate the functioning of the central auditory nervous system. This thesis is the first to utilize auditory ERPs to study sound encoding and preattentive auditory discrimination of speech stimuli, and the neural mechanisms of involuntary auditory attention, in children with RAOM. Furthermore, the level of phonological development was studied by investigating the number and quality of consonants produced by these children. The acquisition of consonant phonemes, which are harder to hear than vowels, is a good indicator of the ability to form accurate memory representations of the ambient language and has not been studied previously in Finnish-speaking children with RAOM. The results showed that cortical sound encoding was intact but the preattentive auditory discrimination of multiple speech sound features was atypical in the children with RAOM.
Furthermore, their neural mechanisms of auditory attention differed from those of their peers, indicating that children with RAOM are atypically sensitive to novel but meaningless sounds. The children with RAOM also produced fewer consonants than their controls. Notably, they had a delay in the acquisition of word-medial consonants and of the Finnish phoneme /s/, which is acoustically challenging to perceive compared with the other Finnish phonemes. The findings indicate immaturity of central auditory processing in the children with RAOM, and this might also emerge in speech production. This thesis also showed that the effects of RAOM on central auditory processing are long-lasting, because the children had healthy ears at the time of the study. An effective neural network for speech sound processing is a basic requisite of language acquisition, and RAOM in early childhood should be considered a risk factor for language development.
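The ERP technique mentioned above rests on averaging many stimulus-locked EEG epochs, so that background activity cancels and the time-locked response remains. A minimal sketch with synthetic data (not the thesis's recordings):

```python
# Averaging stimulus-locked epochs to expose an event-related potential.
# The evoked waveform and noise level are synthetic, for illustration only.
import math
import random

random.seed(0)

def erp_average(epochs):
    """Average a list of equal-length epochs sample by sample."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]

# 100 epochs: a fixed evoked waveform buried in random noise.
evoked = [math.sin(2 * math.pi * t / 50) for t in range(50)]
epochs = [[s + random.gauss(0, 2.0) for s in evoked] for _ in range(100)]
erp = erp_average(epochs)
```

With N epochs the uncorrelated noise shrinks roughly as 1/sqrt(N), which is why hundreds of stimulus repetitions are typically presented to each child.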
Abstract:
Productivity and profitability are important concepts and measures describing the performance and success of a firm. We know that an increase in productivity decreases the costs per unit produced and leads to better profitability. This common knowledge is not, however, enough in the modern business environment. Productivity improvement is one means among others for increasing the profitability of operations. There are many means to increase productivity. The use of these means presupposes operative decisions, and these decisions presuppose information about the effects of these means. Productivity improvement actions are in general made at floor level, with machines, cells, activities and human beings. Profitability is most meaningful at the level of the whole firm. It has been very difficult or even impossible to analyze closely enough the economic aspects of the changes at floor level with traditional costing systems. New ideas in accounting have only recently brought in elements which make it possible to consider these phenomena where they actually happen. The aim of this study is to support the selection of objects for productivity improvement, and to develop a method to analyze the effects of a productivity change in an activity on the profitability of a firm. A framework for systemizing the economic management of productivity improvement is developed in this study. This framework is a systematic two-stage way to analyze the effects of productivity improvement actions in an activity on the profitability of a firm. At the first stage of the framework, a simple selection method is presented, based on the worth, possibility and necessity of the improvement actions in each activity. This method is called Urgency Analysis. At the second stage it is analyzed how much a certain change of productivity in an activity affects the profitability of a firm.
A theoretical calculation model with which it is possible to analyze the effects of a productivity improvement in monetary values is presented. On the basis of this theoretical model, a tool is made for analysis at the firm level. The usefulness of this framework was empirically tested with data from a profit center of one medium-sized Finnish firm which operates in the metal industry. It is shown that the framework provides valuable information about the economic effects of productivity improvement for supporting the management in their decision making.
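The Urgency Analysis stage described above can be sketched as a simple ranking of activities by the worth, possibility and necessity of improvement. The multiplicative scoring and the sample activities are assumptions for illustration only, not the thesis's actual scales.

```python
# Hedged sketch of an Urgency Analysis style ranking: each activity gets
# three 0-1 criterion values, combined here (as an assumption) by a plain
# product, and activities are ranked by the combined score.

def urgency(worth, possibility, necessity):
    """Combine the three criteria into a single 0-1 urgency score."""
    return worth * possibility * necessity

activities = {
    "machining cell": (0.9, 0.6, 0.8),
    "assembly line": (0.5, 0.9, 0.4),
    "packing": (0.3, 0.8, 0.2),
}

ranked = sorted(activities, key=lambda a: urgency(*activities[a]), reverse=True)
print(ranked[0])  # machining cell
```

A multiplicative combination makes any near-zero criterion veto the activity, which matches the intuition that an improvement must be worthwhile, feasible and needed at the same time.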
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high level programming languages with the hardware oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high speed networks often share the same tight constraints on e.g. size, power consumption and price with embedded systems, but they also have very demanding real time and quality of service requirements that are difficult to satisfy with general purpose processors. Dedicated hardware blocks of an application specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hw/sw codesign and simulation, and an extendable library of automatically configured reusable hardware blocks.
Other topics that are covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and compilation of a SystemC model into synthesizable VHDL with Celoxica Agility SystemC Compiler. A simulation model for a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
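The transport triggered principle underlying TACO can be sketched in a few lines: the only instruction is a data move between functional unit ports, and writing a unit's trigger port fires its operation. The functional unit and the toy program below are hypothetical illustrations, not the actual TACO block library or instruction format.

```python
# Toy transport-triggered-architecture sketch: programs are lists of moves
# (source, destination port); writing a trigger port executes the unit's
# operation. Hypothetical unit set, not the TACO library.

class Adder:
    def __init__(self):
        self.operand = 0   # plain operand port
        self.result = 0    # result port
    def trigger(self, value):
        # Writing the trigger port executes the operation.
        self.result = self.operand + value

def run(moves):
    """Execute (src, dst) port moves; src is a literal int or a (unit, port) pair."""
    for src, dst in moves:
        value = src if isinstance(src, int) else getattr(*src)
        unit, port = dst
        if port == "trigger":
            unit.trigger(value)
        else:
            setattr(unit, port, value)

add = Adder()
program = [
    (2, (add, "operand")),   # move literal 2 to the operand port
    (3, (add, "trigger")),   # move literal 3 to the trigger port -> fires add
]
run(program)
print(add.result)  # 5
```

Because the program only describes data transports, adding functional units in parallel raises throughput without changing the instruction format, which is the modularity property the abstract mentions.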
Abstract:
The productivity, quality and cost efficiency of welding work are critical for the metal industry today. Welding processes must become more effective, which can be achieved by mechanization and automation. Such systems are always expensive and have to pay back the investment. It is therefore important to optimize the needed intelligence, and thereby the needed automation level, so that a company gets the best profit. This intelligence and automation level was earlier classified in several different ways which are not useful for optimizing the process of automation or mechanization of welding. In this study the intelligence of a welding system is defined in a new way, as what enables the welding system to produce a sufficiently good weld. A new way is developed to classify and select the internal intelligence level of a welding system needed to produce the weld efficiently. This classification covers the possible need for human work and its effect on the weld and its quality, but does not exclude any welding processes or methods. A totally new way is also developed to calculate the best optimization of the needed intelligence level in welding. The target of this optimization is the best possible productivity and quality together with an economically optimized solution for several different cases. This new optimizing method is based on the product type, economic productivity, the batch size of products, quality, and the criteria of usage. Intelligence classification and optimization have never earlier been based on the product to be made. Now it is possible to find the best type of welding system needed to weld different types of products. This calculation process is a universal way of optimizing the needed automation or mechanization level when improving the productivity of welding. This study helps the industry to improve the productivity, quality and cost efficiency of welding workshops.
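The batch-size dependence at the heart of such an optimization can be illustrated with a simple amortized-cost comparison between automation levels: investment is spread over the units produced, so the cheapest option changes with batch size. All figures and option names are invented for the sketch, not the thesis's calculation model.

```python
# Illustrative break-even comparison of welding automation levels.
# Each option is (investment, variable cost per weld); values are made up.

def cost_per_unit(investment, variable_cost, units):
    """Amortized investment plus per-unit (labour + consumable) cost."""
    return investment / units + variable_cost

def best_option(units, options):
    """Pick the option with the lowest cost per unit for this batch size."""
    return min(options, key=lambda n: cost_per_unit(options[n][0], options[n][1], units))

options = {
    "manual": (0, 40.0),          # no investment, high labour cost per weld
    "mechanized": (50_000, 15.0),
    "automated": (400_000, 5.0),
}

for batch in (100, 10_000, 50_000):
    print(batch, best_option(batch, options))
```

The crossover points shift further when quality requirements and usage criteria are priced in, which is what the thesis's optimization method adds on top of this basic trade-off.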
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high performance, area and energy efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide the synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level.
Moreover, in order to increase memory parallelism and bring compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented to use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve the memory utilization and reduce both memory and network latencies. Three Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate to achieve better performance and package density as compared to traditional 2D ICs. In addition, combining the benefits of 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve the performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are also introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
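The congestion-aware output selection described above can be sketched for a 2D mesh: among the output ports that keep the route minimal toward the destination, pick the neighbour reporting the lowest congestion. The congestion map is hypothetical; a real router reads neighbour status signals each cycle.

```python
# Sketch of congestion-aware minimal adaptive routing in a 2D mesh NoC.
# Nodes are (x, y) coordinates; congestion values are illustrative.

def minimal_ports(cur, dst):
    """Unit-step output directions that keep the route minimal."""
    (cx, cy), (dx, dy) = cur, dst
    ports = []
    if dx != cx:
        ports.append((1, 0) if dx > cx else (-1, 0))
    if dy != cy:
        ports.append((0, 1) if dy > cy else (0, -1))
    return ports

def route_step(cur, dst, congestion):
    """Choose the least congested minimal-path neighbour."""
    candidates = [(cur[0] + px, cur[1] + py) for px, py in minimal_ports(cur, dst)]
    return min(candidates, key=lambda n: congestion.get(n, 0))

congestion = {(1, 0): 7, (0, 1): 2}   # e.g. neighbour buffer occupancy
print(route_step((0, 0), (2, 2), congestion))  # (0, 1)
```

Restricting the choice to minimal-path ports keeps routes short; note that a deployable algorithm additionally needs a deadlock-avoidance rule (e.g. turn-model restrictions), which this sketch omits.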
Abstract:
Electrical machine drives are the largest consumers of electrical energy worldwide. The largest proportion of drives is found in industrial applications. There are, however, many other applications that are also based on the use of electrical machines, because these machines have a relatively high efficiency and a low noise level, and do not produce local pollution. Electrical machines can be classified into several categories. One of the most commonly used types (especially in industry) is the induction motor, also known as the asynchronous machine. Induction motors have a mature production process and a robust rotor construction. However, in a world pursuing higher energy efficiency with reasonable investments, not every application benefits from using this type of motor drive. The main drawback of induction motors is the fact that they need slip-caused, and thus loss-generating, current in the rotor, and additional stator current for magnetic field production along with the torque-producing current. This can reduce the electric motor drive efficiency, especially in low-speed, low-power applications. Often, when high torque density is required together with low losses, it is desirable to apply permanent magnet technology, because in this case there is no need to use current to produce the basic excitation of the machine. This promotes the effectiveness of copper use in the stator, and further, there is no rotor current in these machines. Again, if permanent magnets with a high remanent flux density are used, the air gap flux density can be higher than in conventional induction motors. These advantages have raised the popularity of permanent magnet synchronous machines (PMSMs) in some challenging applications, such as hybrid electric vehicles (HEVs), wind turbines, and home appliances. Usually, a correctly designed PMSM has a higher efficiency and consequently lower losses than its induction machine counterparts.
Therefore, the use of these electrical machines reduces the energy consumption of the whole system to some extent, which can provide good motivation to apply permanent magnet technology to electrical machines. However, the cost of high performance rare earth permanent magnets in these machines may not be affordable in many industrial applications, because the tight competition between manufacturers dictates the rules of low-cost and highly robust solutions, where asynchronous machines seem to be more feasible at the moment. The two main electromagnetic components of an electrical machine are the stator and the rotor. In the case of a conventional radial flux PMSM, the stator contains the magnetic circuit lamination and the stator winding, and the rotor consists of rotor steel (laminated or solid) and permanent magnets. The lamination itself does not significantly influence the total cost of the machine, even though it can considerably increase the construction complexity, as it requires a special assembly arrangement. However, thin metal sheet processing methods are very effective and economically feasible. Therefore, the cost of the machine is mainly affected by the stator winding and the permanent magnets. The work proposed in this doctoral dissertation comprises a description and analysis of two approaches to PMSM cost reduction: one on the rotor side and the other on the stator side. The first approach, on the rotor side, includes the use of low-cost and abundant ferrite magnets together with a tooth-coil winding topology and an outer rotor construction. The second approach, on the stator side, exploits the use of a modular stator structure instead of a monolithic one. PMSMs with the proposed structures were thoroughly analysed by finite element method (FEM) based tools. It was found that by implementing the described principles, some favourable characteristics of the machine (mainly concerning the machine size) will inevitably be compromised.
However, the main target of the proposed approaches is not to compete with conventional rare earth PMSMs, but to reduce the price at which they can be implemented in industrial applications, keeping their dimensions at the same level or lower than those of a typical electrical machine used in the industry at the moment. The measurement results of the prototypes show that the main performance characteristics of these machines are at an acceptable level. It is shown that with certain specific actions it is possible to achieve a desirable efficiency level of the machine with the proposed cost reduction methods.
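The slip loss mentioned above follows a simple relation worth making explicit: in an induction machine the rotor conduction loss equals slip times the air-gap power, so the rotor-side efficiency is bounded by (1 - s). A PMSM has no such rotor current and avoids this loss term entirely. The power figures below are illustrative.

```python
# Rotor slip loss in an induction machine: P_rotor = s * P_airgap,
# mechanical power P_mech = (1 - s) * P_airgap. Illustrative numbers.

def rotor_loss(airgap_power_w, slip):
    """Rotor conduction loss P_r = s * P_airgap."""
    return slip * airgap_power_w

def mech_power(airgap_power_w, slip):
    """Mechanical power P_mech = (1 - s) * P_airgap."""
    return (1 - slip) * airgap_power_w

p_ag = 10_000.0            # air-gap power, W
for s in (0.01, 0.05):
    print(s, rotor_loss(p_ag, s), mech_power(p_ag, s))  # 100/9900 and 500/9500 W
```

Because slip tends to be proportionally larger in low-speed, low-power machines, this is exactly the regime where the abstract notes induction drive efficiency suffers.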
Abstract:
Laser cutting implementation possibilities in a paper making machine were studied as the main objective of the work. Laser cutting technology was considered as a replacement for the conventional cutting methods used in paper making machines for longitudinal cutting, such as edge trimming at different stages of the paper making process and tambour roll slitting. Laser cutting of paper was first tested in the 1970s. Since then, laser cutting and processing has been applied to paper materials with varying levels of success in industry. Laser cutting can be employed for longitudinal cutting of the paper web in the machine direction. The most common conventional cutting methods in paper making machines are water jet cutting and rotating slitting blades. Cutting with a CO2 laser fulfils the basic requirements for cutting quality, applicability to the material and cutting speed in all locations where longitudinal cutting is needed. The literature review describes the advantages, disadvantages and challenges of laser technology applied to the cutting of paper material, with particular attention to the cutting of a moving paper web. Based on the studied laser cutting capabilities and the problems of the conventional cutting technologies, a preliminary selection of the most promising application area was carried out. Laser cutting (trimming) of the paper web edges in the wet end was estimated to be the most promising area for implementation. This assumption was made on the basis of the rate of web break occurrence. It was found that up to 64% of the total number of web breaks occurred in the wet end, particularly at the so-called open draws where the paper web is transferred unsupported by wire or felt. The distribution of web breaks in the machine cross direction revealed that defects of the paper web edge were the main cause of tearing initiation and consequent web breaks.
The assumption was made that laser cutting is capable of improving the tensile strength of the cut edge due to the high cutting quality and the sealing effect of the edge after laser cutting. Studies of laser ablation of cellulose supported this claim. The linear energy needed for cutting was calculated with regard to the paper web properties in the intended laser cutting location. The calculated linear cutting energy was verified with a series of laser cuts. The laser energy needed for cutting obtained in practice deviated from the calculated values. This could be explained by the difference in heat transfer via radiation in laser cutting and the different absorption characteristics of dry and moist paper material. Laser cut samples (both dry and moist, with a dry matter content of about 25-40%) were tested for strength properties. It was shown that the tensile strength and strain at break of laser cut samples are similar to the corresponding values of non-laser cut samples. The chosen method, however, did not address the tensile strength of the laser cut edge in particular. Thus, the assumption of improved strength properties with laser cutting was not fully proved. The effect of laser cutting on possible contamination of mill broke (recycling of the trimmed edge) was also studied. Laser cut samples (both dry and moist) were tested for the content of dirt particles. The tests revealed that accumulation of dust particles on the surface of moist samples can take place. This has to be taken into account to prevent contamination of the pulp suspension when trim waste is recycled. The material loss due to evaporation during laser cutting and the amount of solid residue after cutting were evaluated. Edge trimming with laser would result in 0.25 kg/h of solid residue and 2.5 kg/h of material lost due to evaporation. Schemes for laser cutting implementation and the needed laser equipment were discussed. Generally, a laser cutting system would require two laser sources (one for each cutting zone), a set of beam transfer and focusing optics, and cutting heads.
In order to increase the reliability of the system, it was suggested that each laser source should have double capacity. That would allow cutting to be performed with one laser source working at full capacity for both cutting zones. Laser technology is at the required level at the moment and does not require additional development. Moreover, the potential for speed increase is high due to the availability of high power laser sources, which can support the trend of increasing paper machine speeds. The laser cutting system would require a special roll to support cutting. A scheme for such a roll was proposed, as well as its integration into the paper making machine. Laser cutting can be done at the location of the central roll in the press section, before the so-called open draw where many web breaks occur, where it has the potential to improve the runnability of the paper making machine. The economic performance of laser cutting was evaluated by comparing a laser cutting system with water jet cutting working in the same conditions. It was revealed that laser cutting would still be about two times more expensive than water jet cutting. This is mainly due to the high investment cost of laser equipment and the poor energy efficiency of CO2 lasers. Another factor is that laser cutting causes material loss due to evaporation, whereas water jet cutting causes almost no material loss. Despite the difficulties of implementing laser cutting in a paper making machine, its implementation can be beneficial. Crucial to this is the possibility to improve the strength properties of the cut edge and consequently reduce the number of web breaks. The capacity of laser cutting to maintain cutting speeds exceeding the current speeds of paper making machines is another argument for considering laser cutting technology in the design of new high speed paper making machines.
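The linear cutting energy mentioned above is simply laser power divided by cutting speed, E_lin = P / v (J/m). The power and web speed values below are illustrative, not the thesis's measured figures.

```python
# Linear cutting energy: energy delivered per metre of cut.
# E_lin = P / v. Illustrative values only.

def linear_energy(power_w, speed_m_per_s):
    """Energy per unit cut length in J/m."""
    return power_w / speed_m_per_s

# e.g. a hypothetical 2 kW CO2 laser cutting a web moving at 25 m/s:
e_lin = linear_energy(2000.0, 25.0)
print(e_lin)  # 80.0 (J/m)
```

The relation makes the speed argument concrete: doubling web speed halves the energy delivered per metre, so faster machines need proportionally more laser power to keep cutting.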
Abstract:
The increasing use of energy, food, and materials by the growing world population is leading to a situation where alternative solutions based on renewable carbon resources are sought. The growing use of plastics depends on crude oil production, which is politically governed, and the oil refining required for polymer manufacturing is not sustainable in terms of carbon footprint. The amount of packaging is also increasing. Packaging utilises not only cardboard and paper but also plastics. The synthetic petroleum-derived plastics and inner coatings in food packaging can be substituted with polymeric material from renewable resources. The trees in Finnish forests constitute a huge resource, which ought to be utilised more effectively than it is today. One underutilised component of the forests is the wood-derived hemicelluloses, although spruce O-acetyl-galactoglucomannans (GGMs) have previously shown high potential for material applications and can be recovered on a large scale. Hemicelluloses are hydrophilic in their native state, which restrains their use in food packaging of non-dry items. To cope with this challenge, we intended to make GGMs more hydrophobic or amphiphilic by chemical grafting, with the focus on using them for barrier applications. Methods of esterification with anhydrides and cationic etherification with a trimethyl ammonium moiety were established. A method of controlled synthesis to obtain the desired properties by altering the temperature, the reaction time, the quantity of the reagent, and even the solvent for purification of the products was developed. Numerous analytical tools, such as NMR, FTIR, SEC-MALLS/RI, MALDI-TOF-MS, RP-HPLC and polyelectrolyte titration, were used to evaluate the products from different perspectives and to acquire parallel proofs of their chemical structure.
Modified GGMs with different degrees of substitution, and the correlating levels of hydrophobicity, were applied as coatings on cartonboard and on nanofibrillated cellulose-GGM films to provide barrier functionality. Water dispersibility in processing was maintained with GGM esters of low DS. The use of chemically functionalised GGMs was evaluated as barriers against water, oxygen and grease for food packaging purposes. The results clearly show that GGM derivatives have high potential to function as a barrier material in food packaging.
Abstract:
Global warming is one of the most alarming problems of this century. Initial scepticism concerning its validity is currently dwarfed by the intensification of extreme weather events, whilst the gradually rising level of anthropogenic CO2 is pointed out as its main driver. Most of the greenhouse gas (GHG) emissions come from large point sources (heat and power production and industrial processes), and the continued use of fossil fuels requires quick and effective measures to meet the world's energy demand whilst (at least) stabilizing atmospheric CO2 levels. The framework known as Carbon Capture and Storage (CCS), or Carbon Capture, Utilization and Storage (CCUS), comprises a portfolio of technologies applicable to large-scale GHG sources for preventing CO2 from entering the atmosphere. Amongst them, CO2 capture and mineralisation (CCM) presents the highest potential for CO2 sequestration, as the predicted carbon storage capacity (as mineral carbonates) far exceeds the estimated levels of the worldwide identified fossil fuel reserves. The work presented in this thesis aims at taking a step towards the deployment of an energy and cost effective process for simultaneous capture and storage of CO2 in the form of thermodynamically stable and environmentally friendly solid carbonates. R&D work on the process considered here began in 2007 at Åbo Akademi University in Finland. It involves the processing of magnesium silicate minerals with recyclable ammonium salts for extraction of magnesium at ambient pressure and 400-440 °C, followed by aqueous precipitation of magnesium in the form of hydroxide, Mg(OH)2, and finally Mg(OH)2 carbonation in a pressurised fluidized bed reactor at ~510 °C and ~20 bar CO2 partial pressure to produce high purity MgCO3.
Rock material taken from the Hitura nickel mine, Finland, and serpentinite collected from Bragança, Portugal, were tested for magnesium extraction with both ammonium sulphate and ammonium bisulphate (AS and ABS) to determine the optimal operating parameters, primarily: reaction time, reactor type and presence of moisture. Typical efficiencies range from 50 to 80% magnesium extraction at 350-450 °C. In general, ABS performs better than AS, showing comparable efficiencies at lower temperatures and reaction times. The best experimental results so far include 80% magnesium extraction with ABS at 450 °C in a laboratory scale rotary kiln and 70% Mg(OH)2 carbonation in the PFB at 500 °C and 20 bar CO2 pressure for 15 minutes. The extraction reaction with ammonium salts is not at all selective towards magnesium. Other elements, such as iron, nickel, chromium and copper, are also co-extracted. Their separation, recovery and valorisation are addressed as well and found to be of great importance. The exergetic performance of the process was assessed using Aspen Plus® software and pinch analysis. The choice of fluxing agent and its recovery method have a decisive sway on the performance of the process: AS is recovered by crystallisation, and in general the whole process then requires more exergy (2.48-5.09 GJ/t CO2 sequestered) than with ABS (2.48-4.47 GJ/t CO2 sequestered) when ABS is recovered by thermal decomposition. However, the corrosive nature of molten ABS and the operational problems inherent to thermal regeneration of ABS prohibit this route. Regeneration of ABS through the addition of H2SO4 to AS (followed by crystallisation) results in an overall negative exergy balance (mainly at the expense of low grade heat) but floods the system with sulphates. Although the ÅA route is still energy intensive, its performance is comparable to that of conventional CO2 capture methods using alkanolamine solvents.
An energy-neutral process depends on the availability and quality of nearby waste heat, and economic viability might be achieved with: magnesium extraction and carbonation levels ≥ 90%, the processing of CO2-containing flue gases (eliminating the expensive capture step), and production of marketable products.
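For a sense of scale, the specific exergy demands quoted above (2.48-5.09 GJ/t CO2 for the AS route versus 2.48-4.47 GJ/t CO2 for the ABS route) can be converted into continuous power for an assumed capture rate. The 100 t CO2/h plant size is an assumption for illustration, not from the thesis.

```python
# Convert a specific exergy demand (GJ per tonne of CO2 sequestered) into
# continuous power (MW) for a given capture rate. Plant size is assumed.

GJ = 1e9  # joules per gigajoule

def exergy_power_mw(gj_per_tonne, tonnes_per_hour):
    """Continuous exergy demand in MW for a given specific demand and rate."""
    return gj_per_tonne * GJ * tonnes_per_hour / 3600 / 1e6

for route, demand in (("AS, upper bound", 5.09), ("ABS, upper bound", 4.47)):
    print(route, round(exergy_power_mw(demand, 100), 1), "MW")
```

At the upper ends of the quoted ranges this works out to well over 100 MW of continuous exergy demand for a 100 t CO2/h plant, which makes concrete why the availability of nearby waste heat is the deciding factor for an energy-neutral process.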