50 results for Synthetic aperture techniques
Abstract:
The nearly 200-year-old scientific discipline of synthetic organic chemistry has contributed strongly to the welfare of modern societies. One of the flagships of synthetic organic chemistry is the development and production of new pharmaceuticals, and especially of the active ingredients they contain. It is therefore important to develop new synthetic methods that can be applied to the preparation of pharmaceutically relevant target structures. In this context, the ultimate goal is not merely a successful synthesis of the target molecule; it is increasingly important to develop synthetic routes that meet the criteria of sustainable development. One of the most central tools available to an organic chemist in this respect is catalysis, or more specifically the possibility of applying various catalytic reactions in the preparation of complex target structures. The corresponding industrial processes are characterized by high efficiency and minimized waste production, which naturally benefits the chemical industry while considerably reducing negative environmental effects. In this doctoral thesis, new synthetic routes for the production of pharmaceutically relevant fine chemicals have been developed by combining relatively simple transformations into new reaction sequences. All reaction sequences discussed in this thesis began with a metal-mediated allylation of selected aldehydes or aldimines. The products obtained, containing a carbon-carbon double bond adjacent to a hydroxyl or amino group, were then further modified by applying well-known catalytic reactions. All synthesized molecules presented in this thesis are characterized as fine chemicals with high potential for pharmaceutical applications. In addition, a variety of catalytic reactions were successfully applied in the synthesis of these molecules, which in turn reinforces the importance of the catalytic tools in the organic chemist's toolbox.
Abstract:
Monitoring the occurrence of toxic components in natural waters is essential for human well-being. Since the level of pollutants in natural ecosystems should be kept as low as possible, there is an ongoing search for chemical analysis methods with ever lower detection limits. Today, environmental analyses are performed with expensive and sophisticated instrumentation that requires extensive maintenance. Ion-selective electrodes have several attractive properties, such as portability and low energy consumption, and they are also relatively cost-effective. Using ion-selective electrodes for environmental analysis is possible if their sensitivity range can be extended by lowering their detection limits. To lower the detection limit of Pb(II)-selective electrodes, different types of ion-selective membranes based on polyacrylate copolymers, PVC and PbS/Ag2S were investigated. Solid-state electrodes with PbS/Ag2S membranes are generally simpler and more robust than conventional electrodes for trace analysis of ionic pollutants. In this work, the detection limit of the solid-state electrodes was lowered with a newly developed galvanostatic polarization method, and the electrodes could then be successfully used for quantitative determination of lead(II) levels in environmental samples collected in the Finnish archipelago near former industrial sites. The analytical results obtained with ion-selective electrodes were confirmed by other analytical methods. Lowering the detection limit by means of the newly developed polarization method enables the determination of low and ultra-low lead levels that could not be reached with classical potentiometry. The real advantage of these lead-selective electrodes is the possibility of performing measurements in untreated environmental samples despite the presence of solid particles, which is not possible with other analytical methods. I expect the newly developed polarization method to set a trend in trace analysis with ion-selective electrodes.
Abstract:
Formal software development processes and well-defined development methodologies are nowadays seen as the definitive way to produce high-quality software within time limits and budgets. The variety of such high-level methodologies is huge, ranging from rigorous process frameworks like CMMI and RUP to more lightweight agile methodologies. The need to manage this variety, and the fact that practically every software development organization has its own unique set of development processes and methods, have created the profession of software process engineer. Different kinds of informal and formal software process modeling languages are essential tools for process engineers. These are used to define processes in a way that allows easy management of processes, for example process dissemination, process tailoring and process enactment. Process modeling languages are usually used as tools for process engineering, where the main focus is on the processes themselves. This dissertation has a different emphasis: it analyses modern software development process modeling from the software developers’ point of view. The goal of the dissertation is to investigate whether software process modeling and software process models aid software developers in their day-to-day work, and what the main mechanisms for this are. The focus of the work is on the Software Process Engineering Metamodel (SPEM) framework, which is currently one of the most influential process modeling notations in software engineering. The research theme is elaborated through six scientific articles which represent the dissertation research done on process modeling during an approximately five-year period. The research follows the classical engineering research discipline: the current situation is analyzed, a potentially better solution is developed, and finally its implications are analyzed. The research applies a variety of research techniques, ranging from literature surveys to qualitative studies done among software practitioners. The key finding of the dissertation is that software process modeling notations and techniques are usually developed in process engineering terms. As a consequence, the connection between the process models and actual development work is loose. In addition, modeling standards like SPEM are partially incomplete when it comes to pragmatic process modeling needs, such as lightweight modeling and combining pre-defined process components. This leads to a situation where the full potential of process modeling techniques for aiding daily development activities cannot be achieved. Despite these difficulties, the dissertation shows that it is possible to use modeling standards like SPEM to aid software developers in their work. The dissertation presents a lightweight modeling technique which software development teams can use to quickly analyze their work practices in a more objective manner. The dissertation also shows how process modeling can be used to compare different software development situations more easily and to analyze their differences in a systematic way. Models also help to share this knowledge with others. A qualitative study done among Finnish software practitioners verifies the conclusions of the other studies in the dissertation. Although processes and development methodologies are seen as an essential part of software development, process modeling techniques are rarely used during daily development work. However, the potential of these techniques intrigues the practitioners. In conclusion, the dissertation shows that process modeling techniques, most commonly used as tools for process engineers, can also be used as tools for organizing daily software development work. This work presents theoretical solutions for bringing process modeling closer to ground-level software development activities. These solutions are shown to be feasible through several case studies in which the modeling techniques are used, e.g., to find differences in the work methods of the members of a software team and to share process knowledge with a wider audience.
Abstract:
Multiprocessing is a promising solution to meet the requirements of near-future applications. To get the full benefit from parallel processing, a many-core system needs an efficient on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput and reduced power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Network-on-Chip approaches may be a solution to the excessive power consumption of today’s and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the beneficial feature of the negligible inter-layer distance in 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented which enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform on top of this architecture is also developed in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented. Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots with performance similar to the latest NoC architectures. The thesis concludes that carefully co-designed elements from different network levels enable considerable power savings for many-core systems.
Abstract:
The dewatering of iron ore concentrates requires large capacity in addition to producing a cake with low moisture content. Such large processes are commonly energy-intensive, and means to lower the specific energy consumption are needed. Ceramic capillary action disc filters incorporate a novel filter medium enabling the harnessing of capillary action, which results in decreased energy consumption compared to traditional filtration technologies. As another benefit, the filter medium is mechanically and chemically more durable than, for example, filter cloths and can thus withstand harsh operating conditions and possible regeneration better than other types of filter media. In iron ore dewatering, the regeneration of the filter medium is done through a combination of several techniques: (1) backwashing, (2) ultrasonic cleaning, and (3) acid regeneration. Although it is commonly acknowledged that the filter medium is affected by slurry particles and extraneous compounds, published research, especially in the field of dewatering of mineral concentrates, is scarce. Whereas the regenerative effects of backwashing and ultrasound are more or less mechanical, regeneration with acids is based on chemistry, namely dissolution. The dissolution of iron oxide particles has been studied extensively over several decades, but those studies may not be directly applicable to the regeneration of a filter medium which has undergone interactions with the slurry components. The aim of this thesis was to investigate whether free particle dissolution indeed correlates with the regeneration of the filter medium. For this purpose, both free particle dissolution and the dissolution of surface-adhered particles were studied. The focus was on the acidic dissolution of iron oxide particles and on the study of the ceramic filter medium used in the dewatering of iron ore concentrates. The free particle dissolution experiments show that the solubility of synthetic fine-grained iron oxide particles in oxalic acid could be explained through linear models accounting for the effects of temperature and acid concentration, whereas the dissolution of a natural magnetite is not so easily explained by such models. In addition, the kinetic experiments performed both support and contradict the work of previous authors: the kinetic model found suitable here supports previous research suggesting solid-state reduction as the reaction mechanism of hematite dissolution, but the formation of a stable iron oxalate is not supported by the results of this research. Several other mechanisms have also been suggested for iron oxide dissolution in oxalic acid, indicating that the details of oxalate-promoted reductive dissolution are not yet agreed upon; in this respect, this research offers added value to the community. The results of the regeneration experiments with the ceramic filter media show that oxalic acid is highly effective in removing iron oxide particles from the surface of the filter medium. The dissolution of those particles did not, however, exhibit the expected behaviour, i.e. complete dissolution. The results of this thesis show that although the regeneration of the ceramic filter medium with acids incorporates the dissolution of slurry particles from the surface of the filter medium, the regeneration cannot be assessed purely on the basis of free particle dissolution. A steady state, dependent on temperature and acid concentration, was observed in the dissolution of particles from the surface even though the solubility limit of free iron oxide particles had not been reached. Both the regeneration capacity and the regeneration efficiency, with regard to the removal of iron oxide particles, were found to be temperature-dependent but were not affected by the acid concentration. This observation further suggests that the removal of the surface-adhered particles does not follow the dissolution of free particles, which does exhibit a dependency on the acid concentration. In addition, changes in the permeability and in the pore structure of the filter medium were still observed after the bulk concentration of dissolved iron had reached a steady state. Consequently, the regeneration of the filter medium continued after the dissolution of particles from the surface had ceased. This observation suggests that internal changes take place at the final stages of regeneration. The regeneration process could, in theory, be divided into two, possibly overlapping, stages: (1) dissolution of surface-adhered particles, and (2) dissolution of extraneous compounds from within the pore structure. In addition to the fundamental knowledge generated during this thesis, tools to assess the effects of process parameters on the regeneration of the ceramic filter medium are needed. It has become clear that the tools used to estimate the dissolution of free particles cannot be used to estimate the regeneration of a filter medium, unless only a robust characterisation of the order of regeneration efficiency is needed.
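As a worked illustration of the linear solubility models mentioned above, the sketch below fits s = b0 + b1*T + b2*C by ordinary least squares. It is a minimal sketch only: the temperature, acid concentration and solubility values are invented for illustration, not data from the thesis.

```python
# Minimal sketch: fitting a linear solubility model of the kind described,
# s = b0 + b1*T + b2*C, to hypothetical (temperature, acid concentration,
# solubility) observations. All data values are invented.
import numpy as np

T = np.array([25, 25, 35, 35, 50, 50], dtype=float)  # temperature [deg C]
C = np.array([0.1, 0.3, 0.1, 0.3, 0.1, 0.3])         # oxalic acid [mol/L]
s = np.array([0.8, 1.9, 1.3, 2.9, 2.2, 4.6])         # dissolved iron [g/L]

# Design matrix with an intercept column; solve ordinary least squares.
X = np.column_stack([np.ones_like(T), T, C])
coef, *_ = np.linalg.lstsq(X, s, rcond=None)
b0, b1, b2 = coef
print(f"s = {b0:.2f} + {b1:.3f}*T + {b2:.2f}*C")
```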
Abstract:
Interest in water treatment by electrochemical methods has grown in recent years. Electrochemical oxidation has been applied particularly successfully to degrade different organic pollutants and to disinfect drinking water. This study summarizes the effectiveness of the electrochemical oxidation technique in inactivating different primary biofilm-forming paper mill bacteria, as well as sulphide and organic material in pulp and paper mill wastewater, in laboratory-scale batch experiments. Three different electrodes, boron-doped diamond (BDD), mixed metal oxide (MMO) and PbO2, were employed as anodes. The impact of parameters such as the current density and the initial pH or chloride concentration of synthetic paper machine water on the inactivation efficiency was studied. The electrochemical behaviour of the electrodes was investigated by cyclic voltammetry with MMO, BDD and PbO2 electrodes in synthetic paper mill water, as well as with MMO and stainless steel electrodes in the presence of biocides. Suggestions regarding the formation of different oxidants and the oxidation mechanisms during the treatment are also presented. Aerobic paper mill bacteria species (Deinococcus geothermalis, Pseudoxanthomonas taiwanensis and Meiothermus silvanus) were inactivated effectively (>2 log) at MMO electrodes at a current density of 50 mA/cm2 within three minutes. Increasing the current density and the initial chloride concentration of the paper mill water increased the inactivation rate of Deinococcus geothermalis. The inactivation order of the bacteria species was Meiothermus silvanus > Pseudoxanthomonas taiwanensis > Deinococcus geothermalis. It was observed that inactivation was mainly due to chlorine/hypochlorite generated electrochemically from the chloride present in the water, and that residual disinfection by chlorine/hypochlorite also occurred. In the treatment of real paper mill effluent, sulphide oxidation was effective at all the initial concentrations tested (almost 100% reduction at a current density of 42.9 mA/cm2), and anaerobic bacteria inactivation was also observed (almost 90% reduction at a chloride concentration of 164 mg/L and a current density of 42.9 mA/cm2 in five minutes). Organic material removal was not as effective compared with the other tested techniques, probably due to the relatively short treatment times. Cyclic voltammograms in synthetic paper mill water with a stainless steel electrode showed that H2O2 could be degraded to radicals during the cathodic runs. This emphasises the strong potential of combining electrochemical treatment with this biocide for bacteria inactivation in paper mill environments.
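For readers unfamiliar with the log-unit notation used above (>2 log inactivation), the short sketch below shows the conversion between log reduction and percentage reduction; the bacterial counts are invented for illustration.

```python
# Minimal sketch of the log-reduction measure: a >2 log inactivation means
# the viable count fell by more than a factor of 100. Counts are invented.
import math

n0 = 1.0e6   # viable bacteria before treatment [CFU/mL], hypothetical
n = 5.0e3    # viable bacteria after treatment, hypothetical

log_reduction = math.log10(n0 / n)       # about 2.3 log units
percent_reduction = (1 - n / n0) * 100   # about 99.5 %
print(f"{log_reduction:.1f} log = {percent_reduction:.1f}% reduction")
```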
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This thesis presents a framework for the segmentation of clustered overlapping convex objects. The proposed approach is based on a three-step framework in which the tasks of seed point extraction, contour evidence extraction, and contour estimation are addressed. The state-of-the-art techniques for each step were studied and evaluated using synthetic and real microscopic image data. Based on the evaluation results, a method combining the best performers in each step was presented. In the proposed method, the Fast Radial Symmetry transform, an edge-to-marker association algorithm, and ellipse fitting are employed for seed point extraction, contour evidence extraction, and contour estimation, respectively. Using synthetic and real image data, the proposed method was evaluated and compared with two competing methods; the results showed a promising improvement over the competing methods, with high segmentation and size distribution estimation accuracy.
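A minimal sketch of such a three-step pipeline is given below, using OpenCV on a synthetic image of two overlapping disks. Note the stand-ins: distance-transform maxima replace the Fast Radial Symmetry detector and plain contour extraction replaces edge-to-marker association; only the ellipse-fitting step matches the method as named, and here it only demonstrates the fitting mechanics on the merged blob.

```python
# Minimal sketch of the three-step framework on a synthetic binary image of
# two overlapping convex objects. Simplified stand-ins are used for steps 1
# and 2; only step 3 (ellipse fitting) matches the named method.
import cv2
import numpy as np

# Synthetic binary image: two overlapping disks.
img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (80, 100), 40, 255, -1)
cv2.circle(img, (130, 100), 40, 255, -1)

# Step 1 (stand-in): seed points as high regions of the distance transform.
dist = cv2.distanceTransform(img, cv2.DIST_L2, 5)
seeds = (dist > 0.8 * dist.max()).astype(np.uint8)
n_seeds, _ = cv2.connectedComponents(seeds)
print(f"{n_seeds - 1} seed regions found")

# Step 2 (stand-in): contour evidence as the outer contour pixels.
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Step 3: contour estimation by ellipse fitting (needs >= 5 points).
for c in contours:
    if len(c) >= 5:
        (cx, cy), (major, minor), angle = cv2.fitEllipse(c)
        print(f"ellipse at ({cx:.0f}, {cy:.0f}), axes {major:.0f}x{minor:.0f}")
```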
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is growing rapidly, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially that obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make the original incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on eight publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are examples of the outcome of multiple biological experiments, such as gene microarray experiments. Such networks are typically very large and highly connected, so there is a need for fast algorithms that produce visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within a standard force-directed graph layout algorithm.
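As an illustration of the k-NN baseline mentioned above, the sketch below runs k-NN imputation with scikit-learn's KNNImputer on a tiny invented expression matrix; the thesis' biologically guided imputation and BPCA are not reproduced here.

```python
# Minimal sketch of k-NN missing value imputation on an invented matrix.
import numpy as np
from sklearn.impute import KNNImputer

# Rows = genes, columns = arrays; np.nan marks missing expression values.
X = np.array([[1.0, 2.1, np.nan],
              [0.9, 2.0, 3.1],
              [1.1, np.nan, 3.0],
              [5.0, 4.9, 7.2]])

imputer = KNNImputer(n_neighbors=2)  # average over the 2 most similar rows
X_complete = imputer.fit_transform(X)
print(X_complete)
```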
Abstract:
Switching power supplies are usually implemented with a control circuitry that uses a constant clock frequency to turn the power semiconductor switches on and off. A drawback of this customary operating principle is that the switching frequency and its harmonics are present in both the conducted and the radiated EMI spectrum of the power converter. Various variable-frequency techniques have been introduced during the last decade to overcome this EMC problem. The main objective of this study was to compare the EMI and steady-state performance of a switch-mode power supply under different spread-spectrum/variable-frequency methods. Another goal was to identify suitable tools for variable-frequency EMI analysis. This thesis can be divided into three main parts: firstly, some aspects of spectral estimation and measurement are presented; secondly, selected spread-spectrum generation techniques are presented with simulations and background information; finally, simulations and prototype measurements of the EMC and steady-state performance are reported. A combination of the autocorrelation function, the Welch spectrum estimate, and the spectrogram was used as a substitute for ordinary Fourier methods in the EMC analysis. It was also shown that the switching function can be used in preliminary EMC analysis of an SMPS, and that the spectrum and autocorrelation sequence of a switching function correlate with the final EMI spectrum. This work is based on numerous simulations and measurements made with a prototype boost DC/DC converter. Four different variable-frequency modulation techniques in six different configurations were analyzed, and their EMI performance was compared to constant-frequency operation. Output voltage and input current waveforms were also analyzed in the time domain to see the effect of spread-spectrum operation on these quantities. According to the results presented in this work, spread-spectrum modulation can be utilized in power converters for EMI mitigation. The results of the steady-state voltage measurements show that variable-frequency operation of the SMPS affects the voltage ripple, but the ripple measured from the prototype is still acceptable for some applications. Both current and voltage ripple can be controlled with proper main circuit and controller design.
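A minimal sketch of this kind of analysis is given below: SciPy's Welch estimate compares the spectrum of an ideal constant-frequency switching function with a sinusoidally frequency-modulated one. The waveform parameters are invented, and the square waves are idealized switching functions, not converter measurements.

```python
# Minimal sketch: Welch spectra of a constant-frequency and a sinusoidally
# frequency-modulated (spread-spectrum) switching function. All parameters
# are invented for illustration.
import numpy as np
from scipy import signal

fs = 10e6                        # sampling rate [Hz]
t = np.arange(0, 0.02, 1 / fs)   # 20 ms of signal
f_sw = 100e3                     # nominal switching frequency [Hz]
dev, fm = 10e3, 1e3              # frequency deviation and modulation rate [Hz]

const = signal.square(2 * np.pi * f_sw * t)
# Phase of an FM signal: integral of the instantaneous frequency.
spread = signal.square(2 * np.pi * f_sw * t
                       - (dev / fm) * np.cos(2 * np.pi * fm * t))

f, p_const = signal.welch(const, fs, nperseg=8192)
_, p_spread = signal.welch(spread, fs, nperseg=8192)

# The fundamental peak of the spread-spectrum waveform is lower and wider.
i = np.argmin(np.abs(f - f_sw))
print(f"PSD at {f_sw/1e3:.0f} kHz: constant {p_const[i]:.2e}, "
      f"spread {p_spread[i]:.2e}")
```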
Abstract:
Graphene is a material with extraordinary properties. Its mechanical and electrical properties are unparalleled, but the difficulties in its production are hindering its breakthrough in applications. Graphene is a two-dimensional material made entirely of carbon atoms, and it is only a single atom thick. In this work, the properties of graphene and graphene-based materials are described, together with their common preparation techniques and related challenges. This Thesis concentrates on the top-down techniques, in which natural graphite is used as a precursor for graphene production. Graphite consists of graphene sheets stacked tightly together. In the top-down techniques, various physical or chemical routes are used to overcome the forces keeping the graphene sheets together, and many of them are described in the Thesis. The most common chemical method is the oxidation of graphite with strong oxidants, which creates water-soluble graphene oxide. The properties of graphene oxide differ significantly from those of pristine graphene and, therefore, graphene oxide is often reduced to form materials collectively known as reduced graphene oxide. In the experimental part, the main focus is on the chemical and electrochemical reduction of graphene oxide. A novel chemical route using vanadium is introduced and compared to other common chemical reduction methods for graphene oxide. A strong emphasis is placed on the electrochemical reduction of graphene oxide in various solvents. Raman and infrared spectroscopy are both used for in situ spectroelectrochemistry to closely monitor the spectral changes during the reduction process. These in situ techniques allow precise control over the reduction process, and even small changes in the material can be detected. Graphene and few-layer graphene were also prepared using physical force to separate these materials from graphite. Special adsorbate molecules in aqueous solutions, together with sonic treatment, produce stable dispersions of graphene and few-layer graphene sheets in water. This mechanical exfoliation method damages the graphene sheets considerably less than the chemical methods, although it suffers from a lower yield.
Abstract:
In this thesis, stepwise titration with hydrochloric acid was used to determine the chemical reactivities and dissolution rates of ground limestones and dolostones of varying geological backgrounds (sedimentary, metamorphic or magmatic). Two different ways of conducting the calculations were used: 1) a first-order mathematical model was used to calculate extrapolated initial reactivities (and dissolution rates) at pH 4, and 2) a second-order mathematical model was used to acquire integrated mean specific chemical reaction constants (and dissolution rates) at pH 5. The calculations of the reactivities and dissolution rates were based on the rate of change of pH and on the particle size distributions of the sample powders obtained by laser diffraction. The initial dissolution rates at pH 4 were repeatedly higher than previously reported literature values, whereas the dissolution rates at pH 5 were consistent with earlier observations. Reactivities and dissolution rates varied substantially for dolostones, whereas for limestones and calcareous rocks the variation can be primarily explained by relatively large sample standard deviations. In decreasing order of initial reactivity at pH 4, the dolostone samples rank as follows:

1) metamorphic dolostones with a calcite/dolomite ratio higher than about 6%
2) sedimentary dolostones without calcite
3) metamorphic dolostones with a calcite/dolomite ratio lower than about 6%

The reactivity and dissolution rate measurements were accompanied by a wide range of experimental techniques to characterise the samples, to reveal how the different rocks changed during the dissolution process, and to find out which factors influenced their chemical reactivities. Emphasis was placed on the chemical and morphological changes taking place at the surfaces of the particles, studied via X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM). Supporting chemical information was obtained with X-Ray Fluorescence (XRF) measurements of the samples, and with Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) and Inductively Coupled Plasma-Optical Emission Spectrometry (ICP-OES) measurements of the solutions used in the reactivity experiments. Information on mineral (modal) compositions and their occurrence was provided by X-Ray Diffraction (XRD), Energy Dispersive X-ray analysis (EDX) and the study of thin sections with a petrographic microscope. BET (Brunauer, Emmett, Teller) surface areas were determined from nitrogen physisorption data. The factors increasing the chemical reactivity of dolostones and calcareous rocks were found to be sedimentary origin, higher calcite concentration and lower quartz concentration. It is also assumed that finer grain size and larger BET surface areas increase the reactivity, although no definite correlation was found in this thesis. Atomic concentrations did not correlate with the reactivities. Sedimentary dolostones, unlike metamorphic ones, were found to have porous surface structures after dissolution. In addition, conventional (XPS) and synchrotron-based (HRXPS) X-ray Photoelectron Spectroscopy were used to study bonding environments on calcite and dolomite surfaces. Both samples are insulators, which is why charge neutralisation measures such as an electron flood gun and a conductive mask were used. Surface core level shifts of 0.7 ± 0.1 eV for the Ca 2p spectrum of calcite and 0.75 ± 0.05 eV for the Mg 2p and Ca 3s spectra of dolomite were obtained. Some satellite features of the Ca 2p, C 1s and O 1s spectra are suggested to be bulk plasmons. The origin of carbide bonds is suggested to be beam-assisted interaction with hydrocarbons found on the surface. The results presented in this thesis are of particular importance for choosing raw materials for wet Flue Gas Desulphurisation (FGD) and for the construction industry. Wet FGD benefits from high reactivity, whereas the construction industry can take advantage of the slow reactivity of the carbonate rocks often used in the facades of fine buildings. Information on chemical bonding environments may help to create more accurate models for the water-rock interactions of carbonates.
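As a worked example of the first-order treatment mentioned above, the sketch below fits X(t) = 1 - exp(-k t) to conversion data from a single titration step, so the extrapolated initial rate is dX/dt at t = 0, i.e. k. The data points are invented, not measurements from the thesis.

```python
# Minimal sketch: fit a first-order dissolution model X(t) = 1 - exp(-k*t)
# and read off the extrapolated initial rate k. Data are invented.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 30, 60, 120, 240, 480], dtype=float)  # time [s]
X = np.array([0.0, 0.18, 0.33, 0.55, 0.79, 0.95])      # conversion, invented

def model(t, k):
    return 1 - np.exp(-k * t)

popt, _ = curve_fit(model, t, X, p0=[1e-3])
k = popt[0]
print(f"rate constant k = {k:.2e} 1/s (= initial rate dX/dt at t = 0)")
```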
Abstract:
This thesis considers optimization problems arising in printed circuit board assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, collect-and-place-type gantry machines are discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one-PCB, one-machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods, or new heuristic algorithms are developed and applied. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests confirm that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications in which the developed and applied solution methods are described in full detail. For all the problems stated in this thesis, the proposed methods are efficient enough for practical use in PCB assembly production and are readily applicable in the PCB manufacturing industry.
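To make the flavour of these subproblems concrete, the sketch below addresses one of them, sequencing the placement locations of a single board, with a simple nearest-neighbour greedy heuristic. The coordinates are invented, and the heuristic is a generic illustration rather than one of the algorithms developed in the publications.

```python
# Minimal sketch: nearest-neighbour greedy sequencing of placement locations
# on a single PCB. Coordinates are invented for illustration.
import math

locations = [(10, 5), (2, 8), (7, 1), (4, 4), (9, 9)]  # (x, y) on the board

def greedy_sequence(points, start=(0, 0)):
    """Repeatedly visit the nearest not-yet-placed location."""
    remaining, order, pos = list(points), [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

print(greedy_sequence(locations))
```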
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached from these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. in targeting strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. More cost-effective complementary techniques thus have to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping, and for the characterization of soil properties relevant to a.s. soil environmental risk management, using all available data: soil and water samples, as well as data layers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km2), located in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were utilized as evidential data layers. The methods also required point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale. Slightly better results were achieved using a Radial Basis Function (RBF) based ANN than with the Radial Basis Functional Link Net (RBFLN) method, more accurately narrowing down the most probable areas for a.s. soil occurrence and more properly delineating the least probable areas. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive land for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset for more precisely targeting strategic areas for subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of the a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield a more precise model of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale; mapping at this scale would be extremely time-consuming through manual assessment. The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all covering c. 21,300 km2) which had been mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (sulfate content and the sulfate/chloride ratio) were compared to the extent of the most probable a.s. soil areas in the surveyed catchments. The high sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by the independent water chemistry data, suggesting that the a.s. soil probability maps created with the different methods are reliable and comparable.
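A minimal sketch of RBF-based classification for this kind of probability mapping is given below, built from an RBF kernel feature layer and a logistic output with scikit-learn. It is a generic stand-in for the RBF/RBFLN networks assessed in the thesis; the evidential-layer values, labels and gamma are all invented.

```python
# Minimal sketch of an RBF-network-style probability map: RBF features
# centred on training profiles feed a logistic output layer. All data
# values are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

# Each row: hypothetical [aerogeophysics, slope, Quaternary-class code]
# values at a known soil profile.
X_train = np.array([[0.8, 1.2, 1], [0.7, 0.9, 1], [0.1, 5.0, 3],
                    [0.2, 4.1, 2], [0.9, 0.5, 1], [0.15, 6.2, 3]])
y_train = np.array([1, 1, 0, 0, 1, 0])   # 1 = a.s. soil occurrence

centres = X_train                         # one RBF unit per training point
Phi = rbf_kernel(X_train, centres, gamma=0.5)
clf = LogisticRegression().fit(Phi, y_train)

# Probability of a.s. soil occurrence for a new grid cell:
x_new = np.array([[0.6, 1.5, 1]])
p = clf.predict_proba(rbf_kernel(x_new, centres, gamma=0.5))[0, 1]
print(f"P(a.s. soil) = {p:.2f}")
```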