28 results for Digital Surface Model (DSM)
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Most applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., that each point represents height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), which is derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), leading to a situation in which only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data in forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether or not local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by factors such as variations in tree species and the season of data acquisition. These algorithms are adaptive (with respect to point cloud characteristics) and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparison with existing DTM extraction algorithms showed that the DTM extraction algorithms proposed in this thesis performed better with respect to the accuracy of estimating tree heights from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot retain small features of the terrain (e.g., bumps, small hills and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications where the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis, in turn, is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods, which are based on height percentiles in the airborne laser scanner data. However, being based on the moving-voxel idea, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity.
This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
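To make the normalization step concrete, the following is a minimal sketch (not the thesis algorithm) that interpolates a DTM from classified ground returns and subtracts it from the point elevations; the array layout and the function name are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def normalize_point_cloud(points, ground_mask):
    """Convert point elevations to heights above ground.

    points      : (N, 3) array of x, y, z coordinates (assumed layout)
    ground_mask : boolean array marking points classified as ground returns
    """
    ground = points[ground_mask]
    # Interpolate a simple DTM surface from the ground returns; the thesis
    # algorithms are considerably more robust in low-ground-point situations.
    dtm_z = griddata(ground[:, :2], ground[:, 2], points[:, :2], method="linear")
    heights = points[:, 2] - dtm_z  # height above ground instead of elevation
    return np.column_stack([points[:, :2], heights])
```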
Abstract:
The control of coating layer properties is becoming increasingly important as a result of an emerging demand for novel coated paper-based products and the increasing popularity of new coating application methods. The governing mechanisms of microstructure formation dynamics during consolidation and drying are, nevertheless, still poorly understood. Some of the difficulties encountered by experimental methods can be overcome by the utilisation of numerical modelling and simulation-based studies of the consolidation process. The objective of this study was to improve the fundamental understanding of pigment coating consolidation and the structure formation mechanisms taking place on the microscopic level. Furthermore, the aim was to relate the impact of process and suspension properties to the microstructure of the coating layer. A mathematical model based on a modified Stokesian dynamics particle simulation technique was developed and applied in several studies of consolidation-related phenomena. The model includes particle-particle and particle-boundary hydrodynamics, colloidal interactions, Born repulsion, and a steric repulsion model. Brownian motion and a free surface model were incorporated to enable the specific investigation of consolidation and drying. Filter cake stability was simulated for various particle systems subjected to a range of base substrate absorption rates and system temperatures. The stability of the filter cake was primarily affected by the absorption rate and the size of the particles. Temperature was also shown to have an influence. The consolidation of polydisperse systems, with varying wet coating thicknesses, was studied using imposed pilot trial and model-based drying conditions. The results show that drying methods have a clear influence on the development of the microstructure, on small particle distributions in the coating layer and also on the mobility of particles during consolidation. It is concluded that colloidal properties can significantly impact coating layer shrinkage as well as the internal solids concentration profile. Visualisations of particle system development in time and comparisons of systems at different conditions are useful in illustrating coating layer structure formation mechanisms. The results aid in understanding the underlying mechanisms of pigment coating layer consolidation. Guidance is given regarding the relationship between coating process conditions and internal coating slurry properties and their effects on the microstructure of the coating.
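As a rough illustration of this kind of particle simulation, the sketch below performs a single overdamped Brownian dynamics step with free-draining Stokes mobility; it is only a simplified stand-in for the modified Stokesian dynamics model of the thesis (no inter-particle hydrodynamic coupling), and all parameter names are illustrative.

```python
import numpy as np

def brownian_step(positions, forces, radius, dt, temperature=298.0, viscosity=1.0e-3):
    """One overdamped update for a set of colloidal pigment particles.

    positions : (N, 3) particle coordinates [m]
    forces    : (N, 3) net colloidal forces on each particle [N]
    radius    : particle radius [m]
    """
    k_B = 1.380649e-23                        # Boltzmann constant [J/K]
    gamma = 6.0 * np.pi * viscosity * radius  # Stokes drag coefficient
    diffusivity = k_B * temperature / gamma   # Stokes-Einstein diffusivity
    drift = forces / gamma * dt               # displacement from deterministic forces
    noise = np.sqrt(2.0 * diffusivity * dt) * np.random.standard_normal(positions.shape)
    return positions + drift + noise          # Brownian motion included
```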
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached by these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. to target strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. Additional, more cost-effective techniques thus have to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping, and the characterization of soil properties relevant for a.s. soil environmental risk management, using all available data: soil and water samples, as well as data layers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km2) located in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were utilized as evidential data layers. The methods also required the use of point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale. Slightly better results were achieved using a Radial Basis Function (RBF) -based ANN than with the Radial Basis Functional Link Net (RBFLN) method, narrowing down the most probable areas for a.s. soil occurrence more accurately and defining the least probable areas more properly. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive land for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset to more precisely target strategic areas for subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of the a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield a more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale. Mapping at this scale would be extremely time-consuming through manual assessment.
The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all, covering c. 21,300 km2) which were mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (sulfate content and sulfate/chloride ratio) were compared to the extent of the most probable areas for a.s. soils in the surveyed catchments. High sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by independent data on water chemistry, suggesting that the a.s. soil probability maps created with different methods are reliable and comparable.
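As an illustration of the RBF-based ANN classification used for the probability mapping, the sketch below evaluates a trained Gaussian RBF network on a feature vector built from the evidential layers; the parameter names and the sigmoid output stage are assumptions rather than the exact formulation used in the thesis.

```python
import numpy as np

def rbf_soil_probability(features, centers, widths, weights, bias):
    """Evaluate a Gaussian RBF network for acid sulfate (a.s.) soil probability.

    features : 1D feature vector from the evidential layers (geology, aerogeophysics, slope)
    centers  : (M, D) RBF centres fitted on the training soil profiles
    widths   : (M,) Gaussian widths; weights, bias : output-layer parameters
    """
    d2 = np.sum((centers - features) ** 2, axis=1)  # squared distance to each centre
    phi = np.exp(-d2 / (2.0 * widths ** 2))         # Gaussian basis activations
    score = phi @ weights + bias                    # linear output layer
    return 1.0 / (1.0 + np.exp(-score))             # map to a probability in (0, 1)
```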
Abstract:
Additive manufacturing (shortened to AM), or more commonly 3D printing, encompasses a wide variety of modern manufacturing technologies. AM is based on direct printing of a digital 3D model into a final product, which is fabricated by adding material layer by layer; this is where the term additive manufacturing has its origin. It is not only material that is added, but also value, properties and so on. AM enables the production of different and even better products compared to conventional manufacturing technologies. An estimate of the potential of additive manufacturing can be obtained by considering the potential of laser cutting, which is one of the most widely used modern manufacturing technologies. This technique has been used for over 40 years; the whole market around this technology is currently c. four billion euros, and yearly growth is around 10%. One factor behind this success is that laser cutting enables radical improvements to products made of flat sheet. AM and 3D printing will do the same for three-dimensional parts. Laser devices currently used in 3D printing represent globally only around 1% of all laser devices used in any fabrication technology, so even with a cautious estimate, growth of at least 100% can be expected in the next few years. The role of education is very important when this kind of modern technology is implemented industrially. When both the generation entering working life and the generation that has already been in working life for a while understand the new technology, its potential and its limitations, product design can also be rethought. The potential of product design is the driving force for the wide use of additive manufacturing and 3D printing. The utilization of additive manufacturing and 3D printing is also an opportunity for Finland and Finnish industry; this technology can save the Finnish manufacturing industry. The technique has strong potential, as Finland traditionally has strong industrial know-how and good ICT knowledge.
Abstract:
In order that the radius, and thus the non-uniform structure of the teeth and other electrical and magnetic parts of the machine, may be taken into consideration, the calculation of an axial flux permanent magnet machine is conventionally done by means of 3D FEM methods. This calculation procedure, however, requires a lot of time and computer resources. This study shows that analytical methods can also be applied to perform the calculation successfully. The procedure of the analytical calculation can be summarized in the following steps: first the magnet is divided into slices, then the calculation is carried out for each section individually, and finally the sections are combined to obtain the final results. It is obvious that using this method can save a lot of design and calculation time. The calculation program is designed to model the magnetic and electrical circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that it takes into account possible magnetic saturation of the iron parts. The result of the calculation is the torque of the motor, including its vibrations. The motor geometry, the materials and either the torque or the pole angle are defined, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits on the size and number of the pole pairs nor on many other factors. The calculation steps and the number of different sections of the magnet are selectable, but the calculation time depends strongly on these choices. The results are compared to measurements of real prototypes. The permanent magnet creates part of the flux in the magnetic circuit. The form and amplitude of the flux density in the air-gap depend on the geometry and material of the magnetic circuit, on the length of the air-gap and on the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot opening area. The calculation is simple and fast if the shape of the magnet is square and has no skew in relation to the stator slots. With a more complicated magnet shape the calculation has to be done in several sections. It is clear that as the number of sections increases, the result becomes more accurate. In a radial flux motor all sections of the magnets create force at the same radius. In the case of an axial flux motor, each radial section creates force at a different radius, and the torque is the sum of these contributions. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air-gap, magnet and the slot, is modelled with a reluctance net, which considers the saturation of the iron. This means that several iterations, in which the permeability is updated, have to be done in order to get the final results. The motor torque is calculated using the instantaneous flux linkage and stator currents. The flux linkage is the part of the flux that is created by the permanent magnets and the stator currents and passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Due to the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, even though in some cases semi-magnetic slot wedges are used. At the slot opening faces the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces on the rotor. This phenomenon is called cogging.
The flux in the slot opening area on the different sides of the opening and in the different slot openings is not equal, and so these forces do not compensate each other. In the calculation it is assumed that the flux entering the left side of the opening is the component left of the geometrical centre of the slot. This torque component, together with the torque component calculated using the Lorentz force, makes up the total torque of the motor. It is easy to see that when all the magnet edges, where the derivative component of the magnet flux density is at its highest, enter the slot openings at the same time, the result will be a considerable cogging torque. To reduce the cogging torque the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch, and thus the high derivative component will also be spread to occur evenly along the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be made in many different ways. All the magnets may have a different deflection from the symmetrical centre point, or they can, for example, be shifted in pairs. There are some factors that limit the deflection. The first is that the magnets cannot overlap; the magnet shape and the relative width compared to the pole define the deflection in this case. The other factor is that shifting the poles limits the maximum torque of the motor. If the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the other increases, thus reducing the air-gap magnetization. The asymmetric model needs some assumptions and simplifications in order to limit the size of the model and the calculation time. The reluctance net is made for a symmetric distribution. If the magnets are distributed asymmetrically, the flux in the different pole pairs will not be exactly the same. Therefore, the assumption that the flux flows from the edges of the model to the next pole pairs, i.e. in the calculation model from one edge to the other, is not correct. If this fact were to be considered in multi-pole-pair machines, all the poles, in other words the whole machine, would have to be modelled in the reluctance net. The error resulting from this incorrect assumption is, nevertheless, insignificant.
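The radial-slicing idea can be illustrated with a short sketch that sums the torque contributions of the magnet sections, each acting at its own radius; the simple force model used here (air-gap flux density times linear current density over the annular area) and all variable names are illustrative simplifications of the reluctance-network calculation described above.

```python
import numpy as np

def axial_flux_torque(radii, dr, B_gap, K_linear):
    """Sum torque contributions of radial magnet sections in an axial flux machine.

    radii    : (S,) mid-radius of each radial section [m]
    dr       : radial width of each section [m]
    B_gap    : (S,) air-gap flux density at each section [T]
    K_linear : (S,) linear current density at each section [A/m]
    """
    area = 2.0 * np.pi * radii * dr   # annular area of each radial section
    force = B_gap * K_linear * area   # tangential force produced by each section
    return np.sum(force * radii)      # torque is the sum of force times radius
```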
Abstract:
This Master's thesis studies techniques for embedding a watermark in a spectral image, and methods for recognizing and detecting watermarks in spectral images. The spectral dimensionality of the original images was reduced using the PCA (Principal Component Analysis) algorithm. The embedding of the watermark in the spectral image was performed in the transform space. According to the proposed model, a component of the transform space was replaced with a linear combination of the watermark and another transform-space component. The parameter set used in the embedding was studied. The quality of the watermarked images was measured and analyzed. Recommendations for watermark embedding are presented. Several methods were used for watermark recognition, and the recognition results were analyzed. The robustness of the watermarks against different attacks was verified. A set of detection experiments was carried out in the thesis, taking into account the parameters used in watermark embedding. The ICA (Independent Component Analysis) method is considered one possible alternative for watermark detection.
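A minimal sketch of the embedding rule described above, in which one PCA component is replaced by a linear combination of the watermark and another component, could look as follows; the mixing parameter alpha and the array layout are assumptions.

```python
import numpy as np

def embed_watermark(pca_components, replace_idx, mix_idx, watermark, alpha=0.1):
    """Embed a watermark in the PCA transform space of a spectral image.

    pca_components : (num_components, num_pixels) PCA scores of the image
    replace_idx    : index of the component that is replaced
    mix_idx        : index of the component mixed with the watermark
    alpha          : embedding strength (illustrative parameter)
    """
    out = pca_components.copy()
    # Replace the chosen component with a linear combination of the watermark
    # and another transform-space component, as in the proposed model.
    out[replace_idx] = alpha * watermark + (1.0 - alpha) * pca_components[mix_idx]
    return out
```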
Abstract:
As the development of integrated circuit technology continues to follow Moore’s law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the powerful expression of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on e.g. size, power consumption and price with embedded systems, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc.(Tech) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation and an extendable library of automatically configured reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
Abstract:
Several bioaffinity assays are based on the detection of an analyte which is bound to a solid substrate via a biochemical interaction. These so-called solid phase assays are based on the adhesion of the primary binding partner to a solid surface, which then binds the analyte to be detected. In this thesis work a novel solid phase based assay technology, known as spot technology, was developed. The spot technology is based on a combination of high-capacity solid phases, concentrated in a spot format, utilising modified streptavidin molecules and recombinant antibody fragments. The reduction of the solid phase binding surface to the size of a spot enabled denser binding of the target molecules, providing improved signal intensities and signal-to-background ratios when applied in different solid phase immunoassays. Streptavidin-biotin interactions are commonly utilised in numerous different bioaffinity assays, and the binding of biotin by streptavidin is among the strongest non-covalent interactions reported between two biomolecules. In this study native core streptavidin was chemically modified to provide polymerised streptavidin molecules with altered adsorption properties. These streptavidin conjugates, when coated onto a polystyrene surface, provided enhanced biotin binding capacity and surface stability when compared to a reference coating constructed with native streptavidin. Furthermore, the combination of chemically modified streptavidin, site-specifically biotinylated antibody fragments and the spot coating technology provided a highly dense solid phase coating with improved binding properties. The performance of the spot assay technology was further demonstrated in different immunoassay configurations. Human thyroid stimulating hormone (TSH) and human cardiac troponin I (cTnI) were used as model analytes to show the applicability of the highly sensitive spot-based solid-phase immunoassay for the detection of very low levels of analytes. It was demonstrated that the spot technology provided an assay concept with enhanced sensitivity and short turn-around times, characteristics that are highly suitable for point-of-care applications.
Abstract:
The goal of this thesis is to implement software for creating 3D models from point clouds. Point clouds are acquired with stereo cameras, monocular systems or laser scanners. The created 3D models are triangular models or NURBS (Non-Uniform Rational B-Splines) models. Triangular models are constructed from selected areas of the point clouds, and the resulting triangular models are translated into a set of quads. The quads are further translated into an estimated grid structure and used for NURBS surface approximation. Finally, we have a set of NURBS surfaces which represent the whole model. The problem was not straightforward to solve. The selected triangular surface reconstruction algorithm did not deal well with noise in point clouds. To handle this problem, a clustering method is introduced for simplifying the model and removing noise. As we had better results with the smaller point clouds produced by clustering, we used the points in clusters to better estimate the grids for NURBS models. The overall results were good when the point cloud did not have much noise. Point clouds with a small amount of error gave good results, as the triangular model was solid. NURBS surface reconstruction performed well on solid models.
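The clustering step used for simplification and noise removal could look roughly like the sketch below, which keeps only the cluster centroids as the reduced point cloud used for grid estimation; the choice of k-means and the cluster count are assumptions, not necessarily the method used in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

def simplify_point_cloud(points, n_clusters=2000):
    """Cluster a noisy point cloud and keep the cluster centroids.

    points     : (N, 3) array of x, y, z coordinates
    n_clusters : size of the simplified cloud (illustrative choice)
    """
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(points)
    return km.cluster_centers_  # smaller, less noisy cloud for NURBS grid estimation
```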
Abstract:
Peer-to-Peer (P2P) technology has revolutionized file exchange activities besides enhancing the distribution of processing power. As such, this technology, which is nowadays freely available to all internet users, also poses a threat, as it enables the illegal distribution of copyrighted digital work. P2P technology continuously evolves at a greater pace than copyright legislation, leading to compatibility gaps between the applicability of copyright law and illicit file sharing and downloading. Such issues give consumers strong incentives to practise piracy using P2P systems with a low perceived risk of prosecution, leading to substantial losses for copyright owners. This study focuses on developing insights for content owners on consumer behaviour towards piracy in Finland, where quantitative analyses are carried out using a data set based on a survey conducted by the Helsinki Institute for IT. The research approach investigates the significance of three fundamental areas in evaluating consumer behaviour: environment-related factors, innovation-related factors and consumer-related factors. Each of these integrates concepts derived from previous theoretical models such as the technology acceptance model, the theory of reasoned action, the theory of planned behaviour, the issue-risk-judgement model and the Hunt & Vitell model.
Abstract:
This thesis concentrates on developing a practical local approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during the numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture, namely the critical void growth criterion, the constant critical void volume fraction criterion and Thomason's plastic limit load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. With the new failure criterion, the critical void volume fraction is not a material constant, and the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built up in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. By using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted using the present methodology.
This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development as well as in the structural integrity assessment of practical problems where non-homogeneous materials are involved. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
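For reference, the Gurson-Tvergaard yield condition on which the constitutive model is based is commonly written as follows (the notation here is the usual textbook form and may differ slightly from that of the thesis):

```latex
\Phi = \left(\frac{\sigma_{\mathrm{eq}}}{\sigma_{y}}\right)^{2}
       + 2\,q_{1}\,f \cosh\!\left(\frac{3\,q_{2}\,\sigma_{m}}{2\,\sigma_{y}}\right)
       - \left(1 + q_{3}\,f^{2}\right) = 0, \qquad q_{3} = q_{1}^{2},
```

where sigma_eq is the macroscopic von Mises stress, sigma_m the mean (hydrostatic) stress, sigma_y the matrix yield stress, f the void volume fraction and q1, q2, q3 Tvergaard's fitting parameters.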
Abstract:
The development of a load-bearing osseous implant with the desired mechanical and surface properties, in order to promote incorporation with bone and to eliminate the risk of bone resorption and implant failure, is a very challenging task. Bone formation and resorption processes depend on the mechanical environment. Certain stress/strain conditions are required to promote new bone growth and to prevent bone mass loss. Conventional metallic implants with high stiffness carry most of the load, and the surrounding bone becomes virtually unloaded and inactive. Fibre-reinforced composites offer an interesting alternative to metallic implants, because their mechanical properties can be tailored to be equal to those of bone by the careful selection of matrix polymer, type of fibres, fibre volume fraction, orientation and length. Successful load transfer at the bone-implant interface requires proper fixation between the bone and the implant. One promising method to promote fixation is to prepare implants with a porous surface. Bone ingrowth into the porous surface structure stabilises the system and improves the clinical success of the implant. The experimental part of this work focused on polymethyl methacrylate (PMMA) -based composites with a dense load-bearing core and a porous surface. Three-dimensionally randomly orientated chopped glass fibres were used to reinforce the composite. A method to fabricate these composites was developed using a solvent treatment technique, and some characterisations concerning the functionality of the surface structure were made in vitro and in vivo. Scanning electron microscope observations revealed that the pore size and interconnective porous architecture of the surface layer of the fibre-reinforced composite (FRC) could be optimal for bone ingrowth. Microhardness measurements showed that the solvent treatment did not have an effect on the mechanical properties of the load-bearing core. A push-out test, using dental stone as a bone model material, revealed that the short glass fibre-reinforced porous surface layer is strong enough to carry load. Unreacted monomers can cause chemical necrosis of the tissue, but the levels of leachable residual monomers were considerably lower than those found in chemically cured fibre-reinforced dentures and in modified acrylic bone cements. Animal experiments proved that a surface-porous FRC implant can enhance fixation between bone and the FRC. New bone ingrowth into the pores was detected, and strong interlocking between the bone and the implant was achieved.
Abstract:
The objective of this thesis was to study the removal of gases from paper mill circulation waters experimentally and to provide data for CFD modeling. Flow and bubble size measurements were carried out in a laboratory-scale open gas separation channel. The Particle Image Velocimetry (PIV) technique was used to measure the gas and liquid flow fields, while bubble size measurements were conducted using a digital imaging technique with back-light illumination. Samples of paper machine waters as well as a model solution were used for the experiments. The PIV results show that the gas bubbles near the feed position tend to escape from the circulation channel at a faster rate than bubbles further away from the feed position. This was due to an increased rate of bubble coalescence resulting from the relatively larger bubbles near the feed position. Moreover, a close similarity between the measured slip velocities of the paper mill waters and literature values was observed. It was found that, due to the dilution of the paper mill waters, the observed average bubble size was considerably larger than the average bubble sizes in real industrial pulp suspensions and circulation waters. Among the studied solutions, the model solution had the highest average drag coefficient value due to its relatively high viscosity. The results were compared to a 2D steady state CFD simulation model. A standard Euler-Euler k-ε turbulence model was used in the simulations. The channel free surface was modeled as a degassing boundary. Of the drag models used in the simulations, the Grace drag model gave velocity fields closest to the experimental values. In general, the results obtained from the experiments and the CFD simulations are in good qualitative agreement.
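As background for the drag coefficient comparison, an average drag coefficient can be obtained from a measured terminal slip velocity through the standard buoyancy-drag force balance on a bubble (the textbook relation, not necessarily the exact procedure used in the thesis):

```latex
C_D = \frac{4\, g\, d_b \,(\rho_l - \rho_g)}{3\, \rho_l\, u_{\mathrm{slip}}^{2}},
```

where d_b is the bubble diameter, rho_l and rho_g are the liquid and gas densities, and u_slip is the measured slip velocity.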
Abstract:
This study was conducted in order to learn how companies’ revenue models will be transformed due to the digitalisation of their products and processes. Because there is still only a limited number of studies focusing solely on revenue models, and particularly on revenue model change caused by changes in the business environment, the topic was initially approached through the business model concept, which organises the different value-creating operations and resources at a company in order to create profitable revenue streams. This was used as the basis for constructing the theoretical framework of this study, used to collect and analyse the information. The empirical section is based on a qualitative study approach and a multiple-case analysis of companies operating in the learning materials publishing industry. Their operations are compared with companies operating in other industries which have undergone a comparable transformation, in order to recognise either similarities or contrasts between the cases. The sources of evidence are a literature review to find the essential dimensions researched earlier, and interviews of 29 managers and executives at 17 organisations representing six industries. Based on the earlier literature and the empirical findings of this study, the change of the revenue model is linked with the change of the other dimensions of the business model. When one dimension is altered, the others should be adjusted accordingly. At the case companies the transformation is observed as the utilisation of several revenue models simultaneously and as the revenue creation processes becoming more complex.
Abstract:
Novel biomaterials are needed to meet the demand for tailored bone substitutes required by an ever-expanding array of surgical procedures and techniques. Wood, a natural fiber composite, modified with heat treatment to alter its composition, may provide a novel approach to the further development of hierarchically structured biomaterials. The suitability of wood as a model biomaterial as well as the effects of heat treatment on the osteoconductivity of wood were studied by placing untreated and heat-treated (at 220 °C, 200 °C and 140 °C for 2 h) birch implants (size 4 × 7 mm) into drill cavities in the distal femur of rabbits. The follow-up periods were 4, 8 and 20 weeks in all in vivo experiments. The flexural properties of wood as well as dimensional changes and hydroxyapatite formation on the surface of wood (untreated, 140 °C and 200 °C heat-treated wood) were tested using 3-point bending and compression tests and immersion in simulated body fluid. The effect of premeasurement grinding and the effect of heat treatment on the surface roughness and contour of wood were tested with contact stylus and non-contact profilometry. The effects of heat treatment of wood on its interactions with biological fluids were assessed using two different test media and real human blood in liquid penetration tests. The results of the in vivo experiments showed implanted wood to be well tolerated, with no implants rejected due to foreign body reactions. Heat treatment had significant effects on the biocompatibility of wood, allowing host bone to grow into tight contact with the implant, with occasional bone ingrowth into the channels of the wood implant. The results of the liquid immersion experiments showed hydroxyapatite formation only in the most extensively heat-treated wood specimens, which supported the results of the in vivo experiments. Parallel conclusions could be drawn from the results of the liquid penetration test, where human blood had the most favorable interaction with the most extensively heat-treated wood of the compared materials (untreated, 140 °C and 200 °C heat-treated wood). The increased biocompatibility was inferred to result mainly from changes in the chemical composition of wood induced by the heat treatment, namely the altered arrangement and concentrations of functional chemical groups. However, the influence of microscopic changes in the cell walls, surface roughness and contour cannot be totally excluded. The heat treatment was hypothesized to produce a functional change in the liquid distribution within wood, which could have biological relevance. It was concluded that the highly evolved hierarchical anatomy of wood could yield information for the future development of bulk bone substitutes according to the ideology of bioinspiration. Furthermore, the results of the biomechanical tests established that heat treatment alters various biologically relevant mechanical properties of wood, thus expanding the possibilities of wood as a model material, which could include e.g. scaffold applications, bulk bone applications and serving as a tool both for mechanical testing and for the further development of synthetic fiber-reinforced composites.