33 results for Computational Geometry and Object Modelling
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications to achieve proper Ambient Assisted Living, and key challenges still remain to be dealt with to realize robust methods. One of the major limitations of Ambient Intelligence systems today is the lack of semantic models of those activities in the environment, so that the system can recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, it is easy to recognize that some aspects of this problem have a high degree of uncertainty, and therefore the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users that perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities) and takes its input data directly from a depth sensor (Kinect). The main contribution of this thesis tackles the second component of the hybrid system, which lies on top of the previous one at a higher level of abstraction, acquires its input data from the first module's output, and executes ontological inference to provide users, activities and their influence on the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context awareness and the ability to discriminate among activities in different environments, the semantic framework allows for modelling common-sense knowledge in the form of a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The advantages of the framework have been evaluated with a challenging new public dataset, CAD-120, achieving an accuracy of 90.1% and 91.1% for low- and high-level activities, respectively. This entails an improvement over both entirely data-driven approaches and merely ontology-based approaches. As an added value, so that the system is sufficiently simple and flexible to be managed by non-expert users, and thus facilitates the transfer of research to industry, a development framework composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces was developed in order to provide the framework with more usability in the final application.
As a result, human behaviour recognition can help assist people with special needs, for example in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
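To give a flavour of the fuzzy linguistic labels such a rule-based system can use, the following is a minimal sketch; the labels, membership-function shapes, thresholds and the "cooking" rule are illustrative assumptions for this listing, not the actual fuzzy ontology developed in the thesis.

```python
# Minimal sketch of fuzzy linguistic labels for activity recognition.
# The labels, membership functions and rule are illustrative assumptions,
# not the actual fuzzy ontology developed in the thesis.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function on [a, d] with plateau [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Linguistic labels for "time spent near the stove" (seconds)
short_time = lambda t: trapezoid(t, -1, 0, 20, 60)
long_time  = lambda t: trapezoid(t, 20, 60, 600, 900)

# A rule close to natural language:
# IF user stays near the stove for a LONG time AND holds a pan THEN "cooking"
def cooking_degree(time_near_stove, holds_pan):
    return min(long_time(time_near_stove), 1.0 if holds_pan else 0.0)

print(cooking_degree(120, holds_pan=True))   # high membership (1.0)
print(cooking_degree(10, holds_pan=True))    # low membership (0.0)
```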
Abstract:
The objective of this work was to study the application of innovation management and an innovation system in the operating environment of the process industry. Using literature sources, the requirements that the business environment places on innovation management and various innovation systems were examined. An essential part of innovation management is taking into account the needs of stakeholders and the resources they provide. The methods and tools of product development also play a significant role when the efficiency of the activity is evaluated. The innovation system must be adapted to the company's operations, taking its special characteristics into account, so that managing the activity as a process brings added value to the company and its stakeholders. Creating an innovation system for a company is always an individual process, and there is no universally applicable method that could be adopted as such. A company whose business focuses on the manufacture of fibre-based packaging materials must, in its operations, meet the expectations that material suppliers, its own production processes, customers and even end users place on new products. Innovation management is characterized by the great uncertainty of the results of the activity and the scale of the material and intellectual resources it requires. Managing innovation activity as a process, making use of a system model, aims at a systematic approach that fulfils the set criteria in the area of product development and new business innovations. The developed model must serve a complex business environment that, on the one hand, is based on efficient mass production and, on the other hand, seeks to differentiate itself by serving its customers and taking into account the requirements they place on products.
Abstract:
Small centrifugal compressors are more and more widely used in many industrial systems because of their higher efficiency and better off-design performance compared to piston and scroll compressors, as well as their higher work coefficient per stage than in axial compressors. Higher efficiency is always the aim of the compressor designer. In the present work, the influence of four parts of a small centrifugal compressor that compresses a heavy-molecular-weight real gas has been investigated in order to achieve higher efficiency. Two parts concern the impeller: the tip clearance and the circumferential position of the splitter blade. The other two parts concern the diffuser: the pinch shape and the vane shape. Computational fluid dynamics is applied in this study. The Reynolds-averaged Navier-Stokes flow solver Finflo is used. The quasi-steady approach is utilized. Chien's k-ε turbulence model is used to model the turbulence. A new practical real gas model is presented in this study. The real gas model is easily generated, its accuracy is controllable, and it is fairly fast. The numerical results and measurements show good agreement. The influence of tip clearance on the performance of a small compressor is obvious. The pressure ratio and efficiency decrease as the size of the tip clearance is increased, while the total enthalpy rise remains almost constant. The decrease in pressure ratio and efficiency is larger at higher mass flow rates and smaller at lower mass flow rates. The flow angles at the inlet and outlet of the impeller increase as the size of the tip clearance is increased. The results of the detailed flow field show that leaking flow is the main reason for the performance drop. The secondary flow region becomes larger as the size of the tip clearance is increased and the area of the main flow is compressed; the flow uniformity is thus decreased. A detailed study shows that the leaking flow rate is higher near the exit of the impeller than near its inlet. Based on this phenomenon, a new partially shrouded impeller is used. The impeller is shrouded near its exit. The results show that the flow field near the exit of the impeller is greatly changed by the partially shrouded impeller, and better performance is achieved than with the unshrouded impeller. The loading distribution on the impeller blade and the flow fields in the impeller are changed by moving the splitter of the impeller in the circumferential direction. Moving the splitter slightly towards the suction side of the long blade can improve the performance of the compressor. The total enthalpy rise is reduced if only the leading edge of the splitter is moved to the suction side of the long blade. The performance of the compressor is decreased if the blade is bent away from the radial direction at the leading edge of the splitter. The total pressure rise and the enthalpy rise of the compressor are increased if a pinch is used at the diffuser inlet. Among the five different pinch shape configurations, at the design and lower mass flow rates the efficiency of a straight-line pinch is the highest, while at higher mass flow rates the efficiency of a concave pinch is the highest. The sharp corner of the pinch is the main reason for the decrease of efficiency and should be avoided. The variation of the flow angles entering the diffuser in the spanwise direction is decreased if a pinch is applied. A three-dimensional low-solidity twisted vaned diffuser is designed to match the flow angles entering the diffuser.
The numerical results show that the pressure recovery in the twisted diffuser is higher than in a conventional low-solidity vaned diffuser, which also leads to a higher efficiency of the twisted diffuser. Investigation of the detailed flow fields shows that the separation at lower mass flow rates occurs later in the twisted diffuser than in the conventional low-solidity vaned diffuser, which suggests a possibly wider flow range for the twisted diffuser.
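Because the abstract repeatedly relates pressure ratio, efficiency and total enthalpy rise, a brief worked illustration of total-to-total isentropic efficiency may help. This sketch assumes an ideal gas with made-up numbers (the thesis itself uses a real gas model), so it only illustrates the trend, not the thesis's calculation.

```python
# Illustrative sketch (not the thesis's real-gas model): for an ideal gas,
# total-to-total isentropic efficiency links pressure ratio and enthalpy rise:
#   eta = dh_isentropic / dh_actual
cp, gamma = 1006.0, 1.4        # assumed gas properties, J/(kg K), [-]
T01 = 293.0                    # inlet total temperature, K
PR = 3.0                       # total pressure ratio
dh_actual = 130e3              # actual total enthalpy rise, J/kg (made up)

dh_isentropic = cp * T01 * (PR ** ((gamma - 1.0) / gamma) - 1.0)
eta = dh_isentropic / dh_actual
print(f"isentropic efficiency ~ {eta:.3f}")
# If tip-clearance leakage reduces PR while dh_actual stays almost constant,
# eta drops -- the trend reported in the abstract.
```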
Abstract:
Rosin is a natural product from pine forests and it is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids, and especially Ca- and Ca/Mg-resinates find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear solution viscosity increase during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the non-linear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to make partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line resinate process monitoring. In the kinetic studies, two main reaction steps were observed during the syntheses. First, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C. Rosin oil is formed during the decarboxylation reaction step, causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation reaction step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses over a wide resinate concentration region, a wide range of viscosity values and different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Due to the lower reaction temperature than in the traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
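The abstract does not give the functional form of the two-parameter semi-empirical viscosity model, so the following is a purely hypothetical sketch of the idea: a viscosity-versus-concentration relation whose slope changes abruptly at an assumed critical concentration c_crit, with two parameters estimated from data.

```python
# Hypothetical sketch only: the thesis's actual two-parameter semi-empirical
# model is not given in the abstract. Here we fit an assumed form
#   log(viscosity) = a + b * max(0, c - c_crit)
# to synthetic data, to illustrate estimating two parameters (a, b).
import numpy as np
from scipy.optimize import curve_fit

C_CRIT = 0.55          # assumed critical resinate concentration (mass fraction)

def log_visc(c, a, b):
    return a + b * np.maximum(0.0, c - C_CRIT)

# synthetic "measurements": slow rise below c_crit, steep rise above it
np.random.seed(0)
c = np.linspace(0.3, 0.8, 12)
log_eta = log_visc(c, 1.2, 18.0) + np.random.normal(0.0, 0.05, c.size)

(a_fit, b_fit), _ = curve_fit(log_visc, c, log_eta, p0=(1.0, 10.0))
print(f"a = {a_fit:.2f}, b = {b_fit:.2f}")
```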
Abstract:
Filtration is a widely used unit operation in chemical engineering. The huge variation in the properties of materials to be filtered makes the study of filtration a challenging task. One of the objectives of this thesis was to show that conventional filtration theories are difficult to use when the system to be modelled contains all of the stages and features that are present in a complete solid/liquid separation process. Furthermore, most of the filtration theories require experimental work to be performed in order to obtain critical parameters required by the theoretical models. Creating a good overall understanding of how the variables affect the final product in filtration is somewhat impossible on a purely theoretical basis. The complexity of solid/liquid separation processes requires experimental work, and when tests are needed, it is advisable to use experimental design techniques so that the goals can be achieved. The statistical design of experiments provides the necessary tools for recognising the effects of variables. It also helps to perform experimental work more economically. Design of experiments is a prerequisite for creating empirical models that can describe how the measured response is related to changes in the values of the variables. A software package was developed that provides a filtration practitioner with experimental designs and calculates the parameters for linear regression models, along with a graphical representation of the responses. The developed software consists of two software modules: LTDoE and LTRead. The LTDoE module is used to create experimental designs for different filter types. The filter types considered in the software are the automatic vertical pressure filter, double-sided vertical pressure filter, horizontal membrane filter press, vacuum belt filter and ceramic capillary action disc filter. It is also possible to create experimental designs for cases where the variables are totally user-defined, say for a customized filtration cycle or a different piece of equipment. The LTRead module is used to read the experimental data gathered from the experiments, to analyse the data and to create models for each of the measured responses. Introducing the structure of the software in more detail and showing some of the practical applications is the main part of this thesis. This approach to the study of cake filtration processes, as presented in this thesis, has been shown to have good practical value when making filtration tests.
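As a minimal sketch of the kind of workflow such a toolbox supports, the following generates a two-level full factorial design and fits a first-order linear regression model. The factor names and response values are made up for illustration and are not taken from LTDoE or LTRead.

```python
# Minimal sketch of a design-of-experiments workflow: a 2-level full factorial
# design and a first-order linear regression model fitted by least squares.
# Factor names and response values are made up for illustration.
import itertools
import numpy as np

factors = ["pressure", "slurry_concentration", "filtration_time"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))  # 2^3 runs

# made-up measured response for each run, e.g. cake moisture (%)
y = np.array([27.1, 25.4, 26.0, 24.2, 23.8, 22.5, 22.9, 21.1])

# first-order model y = b0 + b1*x1 + b2*x2 + b3*x3
X = np.hstack([np.ones((design.shape[0], 1)), design])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

print("intercept:", round(coeffs[0], 2))
for name, b in zip(factors, coeffs[1:]):
    print(f"effect of {name}: {round(b, 2)}")
```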
Abstract:
The bioavailability of metals and their potential for environmental pollution depend not simply on total concentrations, but are to a great extent determined by their chemical form. Consequently, knowledge of aqueous metal species is essential in investigating potential metal toxicity and mobility. The overall aim of this thesis is, thus, to determine the species of major and trace elements and the size distribution among the different forms (e.g. ions, molecules and mineral particles) in selected metal-enriched Boreal river and estuarine systems by utilising filtration techniques and geochemical modelling. On the basis of the spatial physicochemical patterns found, the fractionation and complexation processes of elements (mainly related to input of humic matter and pH change) were examined. Dissolved (<1 kDa), colloidal (1 kDa-0.45 μm) and particulate (>0.45 μm) size fractions of sulfate, organic carbon (OC) and 44 metals/metalloids were investigated in the extremely acidic Vörå River system and its estuary in W Finland, and in four river systems in SW Finland (Sirppujoki, Laajoki, Mynäjoki and Paimionjoki), largely affected by soil erosion and acid sulfate (AS) soils. In addition, geochemical modelling was used to predict the formation of free ions and complexes in these investigated waters. One of the most important findings of this study is that the very large amounts of metals known to be released from AS soils (including Al, Ca, Cd, Co, Cu, Mg, Mn, Na, Ni, Si, U and the lanthanoids) occur and can prevail mainly in toxic forms throughout acidic river systems, as free ions and/or sulfate complexes. This has serious effects on the biota, and especially dissolved Al is expected to have acute effects on fish and other organisms, but other potentially toxic dissolved elements (e.g. Cd, Cu, Mn and Ni) can also have fatal effects on the biota in these environments. In upstream areas that are generally relatively forested (higher pH and contents of OC), fewer bioavailable elements (including Al, Cu, Ni and U) may be found due to complexation with the more abundantly occurring colloidal OC. In the rivers in SW Finland, total metal concentrations were relatively high, but most of the elements occurred largely in a colloidal or particulate form, and even elements expected to be very soluble (Ca, K, Mg, Na and Sr) occurred to a large extent in colloidal form. According to geochemical modelling, these patterns may only to a limited extent be explained by in-stream metal complexation/adsorption. Instead, there were strong indications that the high metal concentrations and dominant solid fractions were largely caused by erosion of metal-bearing phyllosilicates. A strong influence of AS soils, known to exist in the catchment, could be clearly distinguished in the Sirppujoki River, as it had very high concentrations of a metal sequence typical of AS soils in a dissolved form (Ba, Br, Ca, Cd, Co, K, Mg, Mn, Na, Ni, Rb and Sr). In the Paimionjoki River, metal concentrations (including Ba, Cs, Fe, Hf, Pb, Rb, Si, Th, Ti, Tl and V; not typical of AS soils in the area) were high, but it was found that the main cause of this was erosion of metal-bearing phyllosilicates, and thus these metals occurred dominantly in less toxic colloidal and particulate fractions. In the two nearby rivers (Laajoki and Mynäjoki) there was an influence of AS soils, but it was largely masked by eroded phyllosilicates.
Consequently, rivers draining clay plains sensitive to erosion, like those in SW Finland, generally have high background metal concentrations due to erosion. Thus, relying only on semi-dissolved (<0.45 μm) concentrations obtained in routine monitoring, or on geochemical modelling based on such data, can lead to a great overestimation of the water toxicity in this environment. The potentially toxic elements that are of concern in AS soil areas will ultimately be precipitated in the recipient estuary or sea, where the acidic metal-rich river water will gradually be diluted/neutralised with brackish seawater. Along such a rising pH gradient, Al, Cu and U will precipitate first, together with organic matter, closest to the river mouth. Manganese is relatively persistent in solution and, thus, precipitates further down the estuary as Mn oxides, together with elements such as Ba, Cd, Co, Cu and Ni. Iron oxides, on the contrary, are not important scavengers of metals in the estuary; they are predicted to be associated only with As and PO4.
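A minimal sketch of how the dissolved, colloidal and particulate fractions follow from the sequential filtration cut-offs used here (<1 kDa and 0.45 μm); the concentration values below are invented for illustration.

```python
# Minimal sketch of computing size fractions from sequential filtration data.
# Cut-offs follow the abstract (<1 kDa = dissolved, 1 kDa-0.45 um = colloidal,
# >0.45 um = particulate); the concentrations below are invented examples.
def size_fractions(total, passing_045um, passing_1kDa):
    """All concentrations in the same unit, e.g. ug/L."""
    dissolved = passing_1kDa
    colloidal = passing_045um - passing_1kDa
    particulate = total - passing_045um
    return dissolved, colloidal, particulate

# e.g. Al in an acidic river sample (made-up numbers)
d, c, p = size_fractions(total=5200.0, passing_045um=4800.0, passing_1kDa=3900.0)
print(f"dissolved: {d} ug/L, colloidal: {c} ug/L, particulate: {p} ug/L")
```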
Abstract:
The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods applied for calculating measures of solution stability and solving multicriteria discrete optimization problems. Despite numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or a set of optimal solutions) of the considered problems. Within the framework of the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of the problem solution can also be described in terms of robustness analysis. In this work the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players where the initial coefficients (costs) of linear payoff functions are subject to perturbations. Investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for the calculation of the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable for calculating the stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems. The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. In order to illustrate the introduced ideas, a decision-making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis create good and relevant grounds for developing more complicated and integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
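As a sketch of the parameterized achievement scalarizing function idea, the following uses the standard Wierzbicki-type formulation; the weights, reference point, augmentation coefficient and candidate objective vectors are illustrative assumptions, not the exact function or problem instances used in the thesis.

```python
# Sketch of a standard (Wierzbicki-type) achievement scalarizing function.
# The exact parameterization used in the thesis may differ; the weights,
# reference point and augmentation coefficient below are illustrative.
def achievement(z, z_ref, w, rho=1e-4):
    """Scalarize objective vector z (minimization) w.r.t. reference point z_ref."""
    terms = [wi * (zi - ri) for zi, ri, wi in zip(z, z_ref, w)]
    return max(terms) + rho * sum(terms)

# Interactive idea: changing the weights w steers which candidate solution
# minimizes the scalarized value.
candidates = [(3.0, 7.0, 5.0), (4.0, 5.0, 6.0), (6.0, 4.0, 4.5)]  # objective vectors
z_ref = (2.0, 4.0, 4.0)                                           # aspiration levels

for w in [(1.0, 0.2, 0.2), (0.2, 1.0, 0.2)]:
    best = min(candidates, key=lambda z: achievement(z, z_ref, w))
    print(w, "->", best)
```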
Abstract:
Keyhole welding, in which the laser beam forms a vapour cavity inside the steel, is one of the two types of laser welding processes, and it is currently used in only a few industrial applications. Modern high-power solid-state lasers are becoming more widely used, but not all fundamentals and phenomena of the process are well known, and understanding them helps to improve the quality of final products. This study concentrates on the process fundamentals and the behaviour of the keyhole welding process by means of real-time high-speed x-ray videography. One of the problem areas in laser welding has been the mixing of the filler wire into the weld; the phenomena are explained and one possible solution for this problem is also presented in this study. The argument of this thesis is that the keyhole laser welding process has three keyhole modes that behave differently: trap, cylinder and kaleidoscope. Two of these have sub-modes, in which the keyhole behaves similarly but the molten pool changes behaviour and the geometry of the resulting weld is different. X-ray videography was used to visualize the actual keyhole side-view profile during the welding process. Several methods were applied to analyse and compile the high-speed x-ray video data in order to achieve a clearer image of the keyhole side view. Averaging was used to measure the keyhole side-view outline, which was used to reconstruct a 3D model of the actual keyhole. This 3D model was taken as the basis for calculating the vapour volume inside the keyhole for each laser parameter combination and joint geometry. Four different joint geometries were tested: partial penetration bead-on-plate and I-butt joint, and full penetration bead-on-plate and I-butt joint. The comparison was performed with selected pairs and also across all combinations.
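A minimal sketch of how a keyhole vapour volume could be estimated from an averaged side-view outline, under the simplifying assumption that the cavity is approximately axisymmetric; the depth/radius profile is made up, and the thesis itself reconstructs a full 3D model rather than using this shortcut.

```python
# Sketch: estimate cavity volume from an averaged side-view outline r(z),
# treating the keyhole as approximately axisymmetric, V = pi * integral r(z)^2 dz.
# Illustrative simplification only; the depth/radius values are made up.
import numpy as np

depth_mm = np.linspace(0.0, 4.0, 41)                  # depth below surface (mm)
radius_mm = 0.35 * np.exp(-depth_mm / 3.0)            # made-up half-width profile (mm)

volume_mm3 = np.pi * np.trapz(radius_mm**2, depth_mm)
print(f"estimated keyhole vapour volume ~ {volume_mm3:.3f} mm^3")
```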
Abstract:
This thesis addresses the coolability of porous debris beds in the context of severe accident management of nuclear power reactors. In a hypothetical severe accident at a Nordic-type boiling water reactor, the lower drywell of the containment is flooded, for the purpose of cooling the core melt discharged from the reactor pressure vessel in a water pool. The melt is fragmented and solidified in the pool, ultimately forming a porous debris bed that generates decay heat. The properties of the bed determine the limiting value for the heat flux that can be removed from the debris to the surrounding water without the risk of re-melting. The coolability of porous debris beds has been investigated experimentally by measuring the dryout power in electrically heated test beds that have different geometries. The geometries represent the debris bed shapes that may form in an accident scenario. The focus is especially on heap-like, realistic geometries which facilitate the multi-dimensional infiltration (flooding) of coolant into the bed. Spherical and irregular particles have been used to simulate the debris. The experiments have been modeled using 2D and 3D simulation codes applicable to fluid flow and heat transfer in porous media. Based on the experimental and simulation results, an interpretation of the dryout behavior in complex debris bed geometries is presented, and the validity of the codes and models for dryout predictions is evaluated. According to the experimental and simulation results, the coolability of the debris bed depends on both the flooding mode and the height of the bed. In the experiments, it was found that multi-dimensional flooding increases the dryout heat flux and coolability in a heap-shaped debris bed by 47–58% compared to the dryout heat flux of a classical, top-flooded bed of the same height. However, heap-like beds are higher than flat, top-flooded beds, which results in the formation of larger steam flux at the top of the bed. This counteracts the effect of the multi-dimensional flooding. Based on the measured dryout heat fluxes, the maximum height of a heap-like bed can only be about 1.5 times the height of a top-flooded, cylindrical bed in order to preserve the direct benefit from the multi-dimensional flooding. In addition, studies were conducted to evaluate the hydrodynamically representative effective particle diameter, which is applied in simulation models to describe debris beds that consist of irregular particles with considerable size variation. The results suggest that the effective diameter is small, closest to the mean diameter based on the number or length of particles.
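To make the "mean diameter based on the number or length of particles" concrete, here is a small sketch comparing the usual moment-based mean diameters of a particle size distribution; the size classes and counts are invented and do not come from the thesis's debris simulant data.

```python
# Sketch of moment-based mean diameters of a particle size distribution,
# illustrating "mean diameter based on the number or length of particles"
# versus e.g. the Sauter (area-based) mean. The counts below are invented.
import numpy as np

d = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # particle diameters, mm
n = np.array([400, 250, 150, 80, 20])          # number of particles in each class

def mean_diameter(p, q):
    """Generalized mean diameter d_pq = (sum n d^p / sum n d^q)^(1/(p-q))."""
    return (np.sum(n * d**p) / np.sum(n * d**q)) ** (1.0 / (p - q))

print("number-based mean d10:", round(mean_diameter(1, 0), 2), "mm")
print("length-based mean d21:", round(mean_diameter(2, 1), 2), "mm")
print("Sauter mean       d32:", round(mean_diameter(3, 2), 2), "mm")
```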
Abstract:
Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology, which is to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike "traditional" biology, focuses on high-level concepts such as: network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is "foreign" to "traditional" biology, marks its drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the performed research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We notice, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques as well as model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, choice of the proper modelling framework and level of abstraction, or the choice of the proper scope of the model run through this thesis.
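As a generic illustration of the kind of mathematical modelling referred to here, the following integrates a toy mass-action ODE model numerically. The species, reaction and rate constants are invented; this is not the thesis's heat shock response or intermediate filament assembly model.

```python
# Generic illustration of ODE-based mass-action modelling of the kind used in
# systems biology. The two-species toy model and rate constants are invented;
# it is not the thesis's heat shock response or filament self-assembly model.
from scipy.integrate import solve_ivp

k_on, k_off = 0.8, 0.1   # invented rate constants

def rhs(t, y):
    a, c = y                      # free protein A, complex C
    v = k_on * a**2 - k_off * c   # A + A <-> C (dimerization)
    return [-2.0 * v, v]

sol = solve_ivp(rhs, (0.0, 20.0), y0=[1.0, 0.0])
print("A(t=20) =", round(sol.y[0, -1], 3), " C(t=20) =", round(sol.y[1, -1], 3))
```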
Abstract:
Meandering rivers have been perceived to evolve rather similarly around the world, independently of the location or size of the river. Despite the many consistent processes and characteristics, they have also been noted to show complex and unique sets of fluvio-morphological processes in which local factors play an important role. These complex interactions of flow and morphology notably affect the development of the river. Comprehensive and fundamental field, flume and theoretically based studies of fluvio-morphological processes in meandering rivers have been carried out, especially during the latter part of the 20th century. However, as these studies have been carried out with traditional field measurement techniques, their spatial and temporal resolution is not competitive with the level achievable today. The hypothesis of this study is that exploiting the increased spatial and temporal resolution of the data, achieved by combining conventional field measurements with a range of modern technologies, will provide new insights into the spatial patterns of the flow-sediment interaction in meandering streams, which have been perceived to show notable variation in space and time. This thesis shows how modern technologies can be combined to derive very high spatial and temporal resolution data on fluvio-morphological processes over meander bends. The flow structure over the bends is recorded in situ using an acoustic Doppler current profiler (ADCP), and the spatial and temporal resolution of the flow data is enhanced using 2D and 3D CFD over various meander bends. The CFD is also exploited to simulate sediment transport. Multi-temporal terrestrial laser scanning (TLS), mobile laser scanning (MLS) and echo sounding data are used to measure the flow-based changes and formations over meander bends and to build the computational models. The spatial patterns of erosion and deposition over meander bends are analysed relative to the measured and modelled flow field and sediment transport. The results are compared with the classic theories of the processes in meander bends. Mainly, the results of this study follow the existing theories and the results of previous studies well. However, some new insights regarding the spatial and temporal patterns of the flow-sediment interaction in a natural sand-bed meander bend are provided. The results of this study show the advantages of the rapid and detailed measurement techniques and the achieved spatial and temporal resolution provided by CFD, unachievable with field measurements. The thesis also discusses the limitations that remain in the measurement and modelling methods and in the understanding of the fluvial geomorphology of meander bends. Further, the hydro- and morphodynamic models' sensitivity to user-defined parameters is tested, and the modelling results are assessed against detailed field measurements. The study is implemented in the meandering sub-Arctic Pulmanki River in Finland. The river is unregulated and sand-bedded, and major morphological changes occur annually on the meander point bars, which are inundated only during the snow-melt-induced spring floods. The outcome of this study applies to sand-bed meandering rivers in regions where normally one significant flood event occurs annually, such as Arctic areas with snow-melt-induced spring floods, and where the point bars of the meander bends are inundated only during the flood events.
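A minimal sketch of the DEM-of-difference idea commonly used to quantify erosion and deposition from multi-temporal surveys; the two tiny elevation grids below are invented arrays standing in for the TLS/MLS-derived point-bar models, not the thesis's data.

```python
# Minimal sketch of a DEM-of-difference calculation for erosion/deposition
# mapping from multi-temporal surveys. The two elevation grids below are
# tiny invented arrays standing in for TLS/MLS-derived point-bar DEMs.
import numpy as np

dem_before = np.array([[10.2, 10.4, 10.5],
                       [10.1, 10.3, 10.6],
                       [10.0, 10.2, 10.4]])   # elevations (m) before the flood
dem_after  = np.array([[10.1, 10.5, 10.5],
                       [10.0, 10.4, 10.7],
                       [ 9.8, 10.2, 10.5]])   # elevations (m) after the flood

dod = dem_after - dem_before                  # DEM of difference
cell_area = 1.0                               # m^2 per grid cell (assumed)

erosion_volume    = -dod[dod < 0].sum() * cell_area
deposition_volume =  dod[dod > 0].sum() * cell_area
print(f"erosion: {erosion_volume:.2f} m^3, deposition: {deposition_volume:.2f} m^3")
```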
Abstract:
In the paper machine, it is not desirable for the boundary layer flows on the fabric and roll surfaces to travel into the closing nips and create overpressure. In this thesis, the aerodynamic behavior of the grooved roll and smooth rolls is compared in order to understand the nip flow phenomena, which are the main reason why vacuum and grooved roll constructions are designed. A common method to remove the boundary layer flow from the closing nip is to use a vacuum roll construction. The downside of using vacuum rolls is the high operational cost due to pressure losses in the vacuum roll shell. The deep-grooved roll has the same goal: to create a pressure difference over the paper web and keep the paper attached to the roll or fabric surface in the drying pocket of the paper machine. A literature review revealed that the aerodynamic functionality of the grooved roll is not very well known. In this thesis, the aerodynamic functionality of the grooved roll in interaction with a permeable or impermeable wall is studied by varying the groove properties. Computational fluid dynamics simulations are utilized as the research tool. The simulations have been performed with commercial fluid dynamics software, ANSYS Fluent. Simulation results obtained with 3- and 2-dimensional fluid dynamics models are compared to laboratory-scale measurements. The measurements have been made with a grooved roll simulator designed for this research. The variables in the comparison are the paper or fabric wrap angle, the surface velocities, the groove geometry and the wall permeability. Present-day computational and modeling resources limit grooved roll fluid dynamics simulations at the paper machine scale. Based on the analysis of the aerodynamic functionality of the grooved roll, a grooved roll simulation tool is proposed. The smooth roll simulations show that the closing nip pressure does not depend on the length of boundary layer development. An increase in surface velocity affects the pressure distribution in the closing and opening nips. The 3D grooved roll model reveals the aerodynamic functionality of the grooved roll. With the optimal groove size it is possible to avoid closing nip overpressure and keep the web attached to the fabric surface over the wrap angle. The groove flow friction and minor losses play different roles when the wrap angle is changed. The proposed 2D grooved roll simulation tool is able to replicate the grooved roll's aerodynamic behavior with reasonable accuracy. With a small wrap angle, the pressure distribution is predicted correctly with the chosen approach for calculating the groove friction losses. With a large wrap angle, the groove friction loss produces too large pressure gradients, and the way of calculating the air flow friction losses in the groove has to be reconsidered. The aerodynamic functionality of the grooved roll is based on minor and viscous losses in the closing and opening nips as well as in the grooves. The proposed 2D grooved roll model is a simplification made in order to reduce computational and modeling effort. The simulation tool makes it possible to simulate complex paper machine constructions at the paper machine scale. In order to use the grooved roll as a replacement for the vacuum roll, the grooved roll properties have to be considered on the basis of the web handling application.
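As a sketch of how groove friction and minor losses could be estimated in a simplified 2D tool, the following applies standard Darcy-Weisbach and minor-loss relations; the groove dimensions, air velocity and loss coefficients are illustrative assumptions, not the thesis's calibrated values or its actual loss model.

```python
# Sketch of estimating the pressure loss of air flow in a roll groove with a
# standard Darcy-Weisbach friction term plus minor losses. The groove size,
# flow velocity and loss coefficients are illustrative assumptions only.
rho, mu = 1.2, 1.8e-5          # air density (kg/m^3) and viscosity (Pa s)
width, depth = 3e-3, 10e-3     # groove cross-section (m)
length = 0.5                   # groove length along the wrap angle (m)
v = 8.0                        # mean air velocity in the groove (m/s)
K_minor = 1.5                  # summed minor-loss coefficient (entry + exit)

area = width * depth
perimeter = 2.0 * (width + depth)
d_h = 4.0 * area / perimeter                 # hydraulic diameter
re = rho * v * d_h / mu                      # Reynolds number

# laminar or simple turbulent (Blasius) friction factor
f = 64.0 / re if re < 2300.0 else 0.3164 * re ** -0.25

dp = (f * length / d_h + K_minor) * 0.5 * rho * v**2
print(f"Re = {re:.0f}, friction factor = {f:.4f}, pressure loss ~ {dp:.0f} Pa")
```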
Abstract:
Crystallization is a purification method used to obtain a crystalline product of a certain crystal size. It is one of the oldest industrial unit processes and is commonly used in modern industry due to its good purification capability from rather impure solutions with reasonably low energy consumption. However, the process is extremely challenging to model and control because it involves inhomogeneous mixing and many simultaneous phenomena, such as nucleation, crystal growth and agglomeration. All these phenomena are dependent on supersaturation, i.e. the difference between the actual liquid phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. Understanding of hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, like nucleation and chemical reactions, are typically highly dependent on micro- and mesomixing, but macromixing, which equalizes the concentrations of all the species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD), and a compartmental multiblock model is compared. The importance of proper verification of the CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on reactive crystallization process control is studied. Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect induction time and cluster formation, nucleation, growth kinetics, and agglomeration. Consequently, the properties of crystalline end products, e.g. crystal size and crystal habit, can be influenced by management of the mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on MSMPR theory showed that the model is feasible in a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization with inhomogeneous mixing. The present work also shows that the selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in the reactive crystallization of L-glutamic acid.
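As a minimal sketch of the ideal-mixing (MSMPR) picture mentioned here, the steady-state crystal size distribution follows n(L) = n0 exp(-L / (G τ)), and supersaturation is the difference between the actual concentration and the solubility, as defined in the abstract; the nucleation density, growth rate, residence time and concentrations below are illustrative values only.

```python
# Minimal sketch of the steady-state MSMPR crystal size distribution
#   n(L) = n0 * exp(-L / (G * tau)),
# together with the supersaturation definition used in the abstract
# (actual concentration minus solubility). All numbers are illustrative.
import numpy as np

c, c_sat = 0.315, 0.300            # liquid-phase concentration and solubility (kg/kg)
supersaturation = c - c_sat        # driving force for nucleation and growth

n0 = 1.0e12                        # nuclei population density (#/(m^3 m)), assumed
G = 2.0e-8                         # crystal growth rate (m/s), assumed
tau = 1800.0                       # mean residence time (s), assumed

L = np.linspace(0.0, 400e-6, 5)    # crystal sizes (m)
n = n0 * np.exp(-L / (G * tau))    # population density at each size

print("supersaturation:", round(supersaturation, 3))
for Li, ni in zip(L, n):
    print(f"L = {Li*1e6:6.1f} um  n = {ni:.3e}")
```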