1000 results for väitöskirja


Relevance:

10.00%

Publisher:

Abstract:

Technological progress has made a huge amount of data available at increasing spatial and spectral resolutions, and the compression of hyperspectral data is therefore an area of active research. In some fields the original quality of a hyperspectral image cannot be compromised, and in these cases lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction-based and transform-based methods are studied. Two kinds of prediction-based methods are considered. In the first, the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster. In the second, the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of the latter linear prediction method is also presented. Two transform-based methods are presented as well. Vector Quantization (VQ) is used together with a new coding of the residual image, and a new back end is developed for a compression method utilizing Principal Component Analysis (PCA) and the Integer Wavelet Transform (IWT). The performance of the compression methods is compared to that of other compression methods, and the results show that the proposed linear prediction methods outperform the previous ones. In addition, a novel fast exact nearest-neighbor search method is developed and used to speed up the Linde-Buzo-Gray (LBG) clustering method.
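The prediction-based idea can be sketched in miniature: a least-squares linear predictor estimates one band from the previous one, and only the integer residuals need to be entropy-coded; decoding reverses the prediction exactly, so the scheme is lossless. This is a hedged toy version (one "cluster", a first-order predictor, synthetic data), not the thesis method itself, which uses per-cluster optimized higher-order predictors.

```python
def fit_coeff(prev, cur):
    """Least-squares slope a minimising sum((cur - a*prev)**2)."""
    num = sum(p * c for p, c in zip(prev, cur))
    den = sum(p * p for p in prev)
    return num / den if den else 0.0

def encode(band_prev, band_cur):
    # Predict each sample of the current band from the previous band,
    # keep the predictor coefficient and the integer residuals.
    a = fit_coeff(band_prev, band_cur)
    residuals = [c - round(a * p) for p, c in zip(band_prev, band_cur)]
    return a, residuals

def decode(band_prev, a, residuals):
    # Exact inverse of encode: rounding is repeated identically.
    return [round(a * p) + r for p, r in zip(band_prev, residuals)]

band0 = [100, 120, 140, 160]
band1 = [201, 239, 282, 318]          # roughly 2x band0, plus small deviations
a, res = encode(band0, band1)
restored = decode(band0, a, res)
```

The residuals are small integers clustered around zero, which is what makes them cheap to entropy-code compared with the raw band values.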


Software engineering is criticized as not being engineering, or a well-developed science, at all. Software engineers do not seem to know exactly how long their projects will last, what they will cost, or whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation, and it is of limited use to collect metrics only afterwards: the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes such as cost and schedule, and on product attributes such as size and quality. Effort estimation can be used for several purposes; in this thesis only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is a new estimation model. It makes use of the basic concepts of Function Point Analysis but avoids the problems and pitfalls found in that method, and it is relatively easy to use and learn. Effort estimation accuracy improved significantly after this model was taken into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement, for which the author has developed a three-level solution. All currently used size metrics are static in nature, but the proposed metric is dynamic: it exploits the increased understanding of the nature of the work as specification and design work proceed, and thus 'grows up' along with the software project. Development of the effort estimation model is not possible without gathering and analyzing history data.

However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used in a proper way, that estimates are stored, reported and analyzed properly, and that they are used for project management activities. A higher-level mechanism, called a measurement framework, is also introduced briefly. The purpose of the framework is to define and maintain a measurement or estimation process; without a proper framework, the estimation capability of an organization declines, and it requires effort even to maintain an achieved level of estimation accuracy. Estimation results over several successive releases are analyzed, and it is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity; an example is shown to shed more light on the calibration and the model itself, together with remarks about the sensitivity of the model. Finally, an example of usage is shown.
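Calibrating an estimation model from history data can be illustrated with a generic parametric model, effort = a * size**b, fitted by log-log least squares. This is a hedged stand-in, not the hierarchical model developed in the thesis; all figures are invented.

```python
import math

def calibrate(sizes, efforts):
    """Fit effort = a * size**b by least squares in log-log space."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

def estimate(a, b, size):
    return a * size ** b

# hypothetical project history: size (e.g. function points) vs. effort (person-days)
history_sizes = [100, 200, 400, 800]
history_efforts = [50, 110, 260, 600]
a, b = calibrate(history_sizes, history_efforts)
prediction = estimate(a, b, 400)
```

An exponent b above 1 reproduces the diseconomy of scale commonly seen in effort data: doubling the size more than doubles the effort.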


Fuzzy set theory and fuzzy logic are studied from a mathematical point of view. The main goal is to investigate common mathematical structures in various fuzzy logical inference systems and to establish a general mathematical basis for fuzzy logic when it is considered as multi-valued logic. The study is composed of six distinct publications. The first paper deals with Mattila's LPC+Ch Calculus. This fuzzy inference system is an attempt to introduce linguistic objects to mathematical logic without defining these objects mathematically. LPC+Ch Calculus is analyzed from an algebraic point of view, and it is demonstrated that a suitable factorization of the set of well-formed formulae (in fact, the Lindenbaum algebra) leads to a structure called an ET-algebra, introduced at the beginning of the paper. On its basis, all the theorems presented by Mattila, and many others, can be proved in a simple way, as demonstrated in Lemmas 1 and 2 and Propositions 1-3. The conclusion critically discusses some other issues of LPC+Ch Calculus, especially the fact that no formal semantics is given for it. In the second paper, Sanchez's characterization of the solvability of the relational equation RoX=T, where R, X, T are fuzzy relations, X is the unknown one, and o is the minimum-induced composition, is extended to compositions induced by more general products in a general value lattice. Moreover, the procedure also applies to systems of equations. In the third publication common features in various fuzzy logical systems are investigated. It turns out that adjoint couples and residuated lattices are very often present, though not always explicitly expressed. Some minor new results are also proved. The fourth study concerns the paper in which Novak introduced first-order fuzzy logic and proved, among other things, the semantico-syntactical completeness of this logic. He also demonstrated that the algebra of his logic is a generalized residuated lattice. It is proved here that the examination of Novak's logic can be reduced to the examination of locally finite MV-algebras. In the fifth paper a multi-valued sentential logic with truth values in an injective MV-algebra is introduced and the axiomatizability of this logic is proved. The paper develops some ideas of Goguen and generalizes the results of Pavelka on the unit interval. Our proof of completeness is purely algebraic. A corollary of the Completeness Theorem is that fuzzy logic on the unit interval is semantically complete if, and only if, the algebra of truth values is a complete MV-algebra. The Compactness Theorem holds in our well-defined fuzzy sentential logic, while the Deduction Theorem and the Finiteness Theorem do not. Because of its generality and good behaviour, MV-valued logic can be regarded as a mathematical basis of fuzzy reasoning. The last paper is a continuation of the fifth study. The semantics and syntax of fuzzy predicate logic with truth values in an injective MV-algebra are introduced, and a list of universally valid sentences is established. The system is proved to be semantically complete. The proof is based on an idea utilizing some elementary properties of injective MV-algebras and MV-homomorphisms, and is purely algebraic.
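Sanchez's solvability result mentioned above can be sketched for the finite sup-min case: R o X = T is solvable exactly when the candidate built with the Gödel implication solves it, and that candidate is then the greatest solution. A small sketch with relations as nested lists (the example matrices are invented):

```python
def godel(a, b):
    """Goedel implication on [0,1]: 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def compose(R, X):
    """Sup-min composition (R o X)(u,w) = max_v min(R[u][v], X[v][w])."""
    return [[max(min(R[u][v], X[v][w]) for v in range(len(X)))
             for w in range(len(X[0]))] for u in range(len(R))]

def greatest_candidate(R, T):
    """Sanchez's candidate X_hat(v,w) = min_u godel(R[u][v], T[u][w])."""
    nu, nv, nw = len(R), len(R[0]), len(T[0])
    return [[min(godel(R[u][v], T[u][w]) for u in range(nu))
             for w in range(nw)] for v in range(nv)]

def solvable(R, T):
    X_hat = greatest_candidate(R, T)
    return compose(R, X_hat) == T, X_hat

R = [[0.9, 0.4],
     [0.3, 0.8]]
T = [[0.6, 0.4],
     [0.3, 0.8]]
ok, X_hat = solvable(R, T)
```

If `ok` is false, no solution exists at all; if true, every solution lies below `X_hat` pointwise. The thesis extends this test from sup-min to compositions induced by more general lattice products.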


High dynamic performance of an electric motor is a fundamental prerequisite in motion control applications, also known as servo drives. Recent developments in the fields of microprocessors and power electronics have enabled ever faster movements with an electric motor. In such dynamically demanding applications, the dimensioning of the motor differs substantially from industrial motor design, where desirable characteristics are, for example, high efficiency, a high power factor, and a low price. In motion control, characteristics such as high overloading capability, high-speed operation, high torque density and low inertia are required instead. The thesis investigates how the dimensioning of a high-performance servomotor differs from the dimensioning of industrial motors. The two most common servomotor types are examined: the induction motor and the permanent magnet synchronous motor. The suitability of these two motor types in dynamically demanding servo applications is assessed, and the design aspects that optimize the servo characteristics of the motors are analyzed. Operating characteristics of a high-performance motor are studied, and some methods for improvement are suggested. The main focus is on the induction machine, which is frequently compared to the permanent magnet synchronous motor. A 4 kW prototype induction motor was designed and manufactured for the verification of the simulation results under laboratory conditions. A dynamic simulation model for estimating the thermal behaviour of the induction motor in servo applications was also constructed. The accuracy of the model was improved by coupling it with the electromagnetic motor model in order to take into account the variations in the motor's electromagnetic characteristics due to the temperature rise.
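The coupling between the thermal and electromagnetic models can be illustrated with a minimal lumped-parameter sketch: one thermal mass, with copper losses that grow as the winding resistance rises with temperature. The parameter values are illustrative only, not those of the 4 kW prototype.

```python
ALPHA_CU = 0.0039      # 1/K, temperature coefficient of copper resistance

def simulate_winding_rise(i_rms, r20, c_th, r_th, t_end, dt=1.0):
    """Euler integration of C_th * d(rise)/dt = P_cu(rise) - rise / R_th.

    i_rms : phase current (A), r20 : phase resistance at reference temp (ohm),
    c_th  : thermal capacitance (J/K), r_th : thermal resistance (K/W).
    Returns the winding temperature rise over ambient (K) at t_end.
    """
    rise, t = 0.0, 0.0
    while t < t_end:
        r = r20 * (1.0 + ALPHA_CU * rise)   # resistance grows with temperature
        p_cu = 3.0 * i_rms ** 2 * r         # three-phase copper loss, W
        rise += dt * (p_cu - rise / r_th) / c_th
        t += dt
    return rise

final_rise = simulate_winding_rise(i_rms=8.0, r20=0.5, c_th=2000.0,
                                   r_th=0.5, t_end=20000.0)
```

Because the loss term itself depends on the temperature, the steady-state rise is higher than a constant-loss calculation would predict, which is exactly the effect the coupled model captures.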


The future of high-technology welded constructions will be characterised by higher-strength materials and improved weld quality with respect to fatigue resistance. The expected implementation of high-quality high-strength steel welds will require that more attention be given to the issues of crack initiation and mechanical mismatching. Experiments and finite element analyses were performed within the framework of continuum damage mechanics to investigate the effect of mismatching of welded joints on void nucleation and coalescence during monotonic loading. It was found that the damage of under-matched joints occurred mainly in the sandwich layer, and the damage resistance of the joints decreases as the sandwich layer width decreases. The damage of over-matched joints occurred mainly in the base metal adjacent to the sandwich layer, and the damage resistance of the joints increases as the sandwich layer width decreases. The mechanisms of initiation of the micro voids/cracks were found to be cracking of the inclusions or of the embrittled second phase, and debonding of the inclusions from the matrix. Experimental fatigue crack growth rate testing showed that the fatigue life of under-matched centre-cracked panel specimens is longer than that of over-matched and even-matched specimens. Further investigation by elastic-plastic finite element analysis indicated that fatigue crack closure, which originated from the inhomogeneous yielding adjacent to the crack tip, played an important role in fatigue crack propagation. The applicability of the J-integral concept to mismatched specimens with crack extension under cyclic loading was assessed. The concept of fatigue class used by the International Institute of Welding was introduced in the parametric numerical analysis of several welded joints.

The effect of weld geometry and load condition on the fatigue strength of ferrite-pearlite steel joints was systematically evaluated based on linear elastic fracture mechanics. Joint types included lap joints, angle joints and butt joints. Various combinations of tensile and bending loads were considered, with the emphasis on the existence of both root and toe cracks. For a lap joint with a small lack of penetration, a reasonably large weld leg and a smaller flank angle were recommended for engineering practice in order to achieve higher fatigue strength. It was found that the fatigue strength of the angle joint depends strongly on the location and orientation of pre-existing crack-like welding defects, even if the joint is welded with full penetration. It is commonly believed that double-sided butt welds have significantly higher fatigue strength than single-sided welds, but fatigue crack initiation and propagation can originate from the weld root if the welding procedure results in partial penetration. It is clearly shown that the fatigue strength of the butt joint can be improved remarkably by ensuring full penetration. Nevertheless, increasing the fatigue strength of a butt joint by increasing the size of the weld is an uneconomical alternative.
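The LEFM-based fatigue evaluation rests on integrating a crack growth law from an initial weld defect to a final crack size. A hedged sketch using the Paris law, da/dN = C * (dK)^m with dK = Y * dS * sqrt(pi * a), follows; C, m and the geometry factor Y are generic illustrative values, not the calibrated ones of the study, and threshold and closure effects are ignored.

```python
import math

def cycles_to_failure(a0, af, d_stress, C=3e-13, m=3.0, Y=1.12, da=1e-5):
    """Numerically integrate the Paris law from crack size a0 to af.

    Units: a0, af, da in metres; d_stress in MPa; C consistent with
    da/dN in m/cycle and dK in MPa*sqrt(m). Returns cycles to failure.
    """
    n, a = 0.0, a0
    while a < af:
        dK = Y * d_stress * math.sqrt(math.pi * a)   # stress intensity range
        n += da / (C * dK ** m)                      # cycles spent on this slice
        a += da
    return n

# a smaller initial defect (e.g. full penetration weld root) lasts far longer
n_small_defect = cycles_to_failure(a0=0.0001, af=0.01, d_stress=100.0)
n_large_defect = cycles_to_failure(a0=0.001, af=0.01, d_stress=100.0)
```

The comparison mirrors the abstract's conclusion: shrinking the initial root defect (full penetration) buys far more fatigue life than growing the weld, because most of the life is spent while the crack is small.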


This thesis investigates the strategy implementation process of enterprises, a process which has lacked academic attention compared with the rich strategy formation research tradition. Strategy implementation is viewed as a process ensuring that the strategies of an organisation are realised fully and quickly, yet with constant consideration of changing circumstances. The aim of this study is to provide a framework for identifying, analysing and removing the strategy implementation bottleneck of an organisation and thus for intensifying its strategy process. The study opens by specifying the concept, tasks and key actors of the strategy implementation process; in particular, arguments for the critical implementation role of top management are provided. In order to facilitate the analysis and synthesis of the core findings of the scattered doctrine, six characteristic approaches to the strategy implementation phenomenon are identified and compared. The Bottleneck Framework is introduced as an instrument for arranging potential strategy realisation problems, prioritising an organisation's implementation obstacles and focusing the improvement measures accordingly. The SUCCESS Framework is introduced as a mnemonic for the seven critical factors to be taken into account when promoting strategy implementation. Both frameworks are empirically tested by applying them to a real strategy implementation intensification process in an international, industrial, group-structured case enterprise.


Electric motors driven by adjustable-frequency converters may produce periodic excitation forces that cause torque and speed ripple. Interaction with the driven mechanical system may cause undesirable vibrations that affect the system performance and lifetime. Direct drives in sensitive applications, such as elevators or paper machines, emphasize the importance of smooth torque production. This thesis analyses the non-idealities of frequency converters that produce speed and torque ripple in electric drives. The origin of low-order harmonics in speed and torque is examined, and it is shown how different types of current measurement error affect the torque. As the application environment, the direct torque control (DTC) method is applied to permanent magnet synchronous machines (PMSM). A simulation model is created to analyse the effect of frequency converter non-idealities on the performance of electric drives. The model makes it possible to identify potential problems causing torque vibrations, and possibly damaging oscillations, in electrically driven machine systems. The model can be coupled with separate simulation software for complex mechanical loads. Furthermore, the simulation model of the frequency converter's control algorithm can be applied to control a real frequency converter. A commercial frequency converter with standard software, a permanent magnet axial-flux synchronous motor and a DC motor as the load are used to detect the effect of current measurement errors on load torque. A method to reduce the speed and torque ripple by compensating the current measurement errors is introduced. The method is based on analysing the amplitude of a selected harmonic component of the speed as a function of time and selecting a suitable compensation alternative for the current error. The speed can be either measured or estimated, so the compensation method is applicable also to speed-sensorless drives.

The proposed compensation method is tested with a laboratory drive, which consists of commercial frequency converter hardware with self-made software and a prototype PMSM. The speed and torque ripple of the test drive are reduced by applying the compensation method. In addition to direct torque controlled PMSM drives, the compensation method can also be applied to other motor types and control methods.
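The analysis step of such a compensation method, extracting the amplitude of one selected harmonic from the speed signal, can be sketched with a single-bin discrete Fourier transform. The synthetic signal below imitates the typical signature: a current-offset error produces ripple at the electrical fundamental, a gain error at twice the fundamental (an assumption for illustration, not data from the thesis).

```python
import math

def harmonic_amplitude(signal, samples_per_period, harmonic):
    """Amplitude of one harmonic via a single-bin DFT over the whole record."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * harmonic * k / samples_per_period)
             for k, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * harmonic * k / samples_per_period)
             for k, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

N = 1000   # ten electrical periods, 100 samples per period
speed = [100.0
         + 0.5 * math.sin(2 * math.pi * k / 100)       # offset-error ripple
         + 0.2 * math.sin(2 * math.pi * 2 * k / 100)   # gain-error ripple
         for k in range(N)]
a1 = harmonic_amplitude(speed, 100, 1)
a2 = harmonic_amplitude(speed, 100, 2)
```

Tracking `a1` and `a2` as functions of time while trial corrections are applied to the measured currents is the basic feedback loop of the compensation idea.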


In this study, equations for the calculation of the erosion wear caused by ash particles on the convective heat exchanger tubes of steam boilers are presented. A new, three-dimensional test arrangement was used in testing the erosion wear of convective heat exchanger tubes. Using the sleeve method, three different tube materials and three tube constructions could be tested. New results were obtained from the analyses. The main mechanisms of the erosion wear phenomenon, and erosion wear as a function of collision conditions and material properties, have been studied, and the relevant properties of fossil fuels are also presented. When burning solid fuels such as pulverized coal and peat in steam boilers, most of the ash is entrained by the flue gas in the furnace. In bubbling and circulating fluidized bed boilers, the particle concentration in the flue gas is high because bed material is entrained in the flue gas as well. Hard particles, such as sharp-edged quartz crystals, cause erosion wear when they collide with convective heat exchanger tubes and the rear wall of the steam boiler. The most important ways to reduce erosion wear in steam boilers are to keep the flue gas velocity moderate and to prevent channelling of the ash flow into a certain part of the cross-section of the flue gas channel, especially near the back wall. This can be done with the following boiler components: screen plates can be used to make the velocity and ash flow distributions more even over the cross-section of the channel, and shield plates and plate-type constructions in superheaters can also be used. Erosion testing was conducted with three types of tube constructions: a single tube row, an in-line tube bank with six tube rows, and a staggered tube bank with six tube rows. Three flow velocities and two particle concentrations were used in the tests, which were carried out at room temperature, with three particle materials: quartz, coal ash and peat ash.

Mass loss, diameter loss and wall thickness loss measurements of the test sleeves were taken. Erosion wear as a function of flow conditions, tube material and tube construction was analyzed by single-variable linear regression analysis, and the erosion wear calculation equations were developed using multi-variable linear regression analysis. In the staggered tube bank, erosion wear had its maximum value in tube row 2 and a local maximum in row 5, while in rows 3, 4 and 6 the erosion rate was low. In the in-line tube bank, on the other hand, the minimum erosion rate occurred in tube row 2 and erosion increased in subsequent rows, so that in a six-row tube bank the maximum value occurred in row 6.
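The multi-variable linear regression step can be sketched as ordinary least squares on the normal equations: wear rate modelled as e = b0 + b1 * velocity + b2 * concentration. The solver is generic and the data rows are invented for illustration; the thesis equations use its own measured variables and coefficients.

```python
def fit_linear(X, y):
    """Ordinary least squares via normal equations; X rows include a 1 for the intercept."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    v = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    # Gaussian elimination with partial pivoting on A * beta = v
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (v[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

# rows: [intercept, flue-gas velocity m/s, particle concentration g/m^3]
X = [[1, 5, 10], [1, 7, 10], [1, 9, 10], [1, 5, 20], [1, 7, 20], [1, 9, 20]]
y = [2.0 + 0.8 * row[1] + 0.05 * row[2] for row in X]   # exact synthetic wear rates
b0, b1, b2 = fit_linear(X, y)
```

With real measurements the residuals of such a fit would show how much of the wear variation the chosen variables actually explain.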


Position-sensitive particle detectors are needed in high energy physics research. This thesis describes the development of fabrication processes and characterization techniques for the silicon microstrip detectors used in the search for elementary particles at the European centre for nuclear research, CERN. The detectors give an electrical signal along a particle's trajectory after a collision in the particle accelerator. The trajectories give information about the nature of the particle in the effort to reveal the structure of matter and the universe. Detectors made of semiconductors have better position resolution than conventional wire chamber detectors. Silicon is overwhelmingly used as the detector material because of its low cost and its standard use in the integrated circuit industry. After a short spreadsheet analysis of the basic building block of radiation detectors, the pn junction, the operation of a silicon radiation detector is discussed in general. The microstrip detector is then introduced and the detailed structure of a double-sided ac-coupled strip detector presented. The fabrication aspects of strip detectors are discussed, starting from process development and general principles and ending with a description of the double-sided ac-coupled strip detector process. Recombination and generation lifetime measurements in radiation detectors are discussed briefly. The results of electrical tests, i.e. measurements of the leakage currents and bias resistors, are displayed. The beam test setups and the results, the signal-to-noise ratio and the position accuracy, are then described. Earlier research had shown that heavy irradiation changes the properties of radiation detectors dramatically. A scanning electron microscope method was therefore developed to measure the electric potential and field inside irradiated detectors, to see how a high radiation fluence changes them. The method and the most important results are discussed briefly.
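The kind of back-of-the-envelope pn-junction calculation mentioned above can be reproduced in a few lines: the depletion width of a one-sided abrupt junction, W = sqrt(2 * eps_Si * V / (q * N_eff)), which sets the sensitive thickness of the strip detector at a given bias. The doping value is a typical figure for detector-grade silicon, not a measured one from this work.

```python
import math

Q = 1.602e-19                 # elementary charge, C
EPS_SI = 11.9 * 8.854e-12     # permittivity of silicon, F/m

def depletion_width_um(v_bias, n_eff_cm3):
    """Depletion width (micrometres) of a one-sided abrupt pn junction."""
    n_eff = n_eff_cm3 * 1e6   # cm^-3 -> m^-3
    return math.sqrt(2.0 * EPS_SI * v_bias / (Q * n_eff)) * 1e6

# 100 V bias on lightly doped n-type bulk (~1e12 cm^-3)
w = depletion_width_um(100.0, 1e12)
```

The result, a few hundred micrometres, is why standard strip detectors are about 300 µm thick: moderate bias voltages then deplete the full wafer, so the whole thickness contributes signal charge.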


The objective of industrial crystallization is to obtain a crystalline product with the desired crystal size distribution, mean crystal size, crystal shape, purity, and polymorphic and pseudopolymorphic form. Effective control of product quality requires an understanding of the thermodynamics of the crystallizing system and of the effects of the operating parameters on the crystalline product properties. Obtaining reliable in-line information about crystal properties and supersaturation, the driving force of crystallization, would therefore be very advantageous. Advanced techniques, such as Raman spectroscopy, attenuated total reflection Fourier transform infrared (ATR FTIR) spectroscopy, and in-line imaging, offer great potential for obtaining reliable information during crystallization, and thus for a better understanding of the fundamental mechanisms involved (nucleation and crystal growth). In the present work, the relative stability of anhydrate and dihydrate carbamazepine in mixed solvents containing water and ethanol was investigated. The kinetics of the solvent-mediated phase transformation of the anhydrate to the hydrate in the mixed solvents was studied using an in-line Raman immersion probe. The effects of the operating parameters, in terms of solvent composition, temperature and the use of certain additives, on the phase transformation kinetics were explored. Comparison of the off-line measured solute concentration with the solid-phase composition measured by in-line Raman spectroscopy allowed the identification of the fundamental processes during the phase transformation. The effects of thermodynamic and kinetic factors on the anhydrate/hydrate phase of carbamazepine crystals during cooling crystallization were also investigated, as was the effect of certain additives on the batch cooling crystallization of potassium dihydrogen phosphate (KDP).

The crystal growth rate of a given crystal face was determined from images taken with an in-line video microscope, and an in-line image processing method was developed to characterize the size and shape of the crystals. ATR FTIR spectroscopy and a laser reflection particle size analyzer were used to study the effects of cooling modes and seeding parameters on the final crystal size distribution of an organic compound, C15. Based on the results, an operating condition was proposed that gives improved product properties in terms of an increased mean crystal size and a narrower size distribution.
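Determining a face growth rate from successive in-line images reduces, in the simplest case, to a straight-line fit of characteristic length against time at constant supersaturation; the slope is the growth rate G. The sizes below are invented illustration values, not measurements from the thesis.

```python
def growth_rate(times_s, sizes_um):
    """Least-squares slope of size vs. time: the face growth rate in um/s."""
    n = len(times_s)
    mt = sum(times_s) / n
    ms = sum(sizes_um) / n
    return (sum((t - mt) * (s - ms) for t, s in zip(times_s, sizes_um))
            / sum((t - mt) ** 2 for t in times_s))

times = [0, 60, 120, 180, 240]            # s, one image per minute
sizes = [50.0, 53.1, 55.9, 59.2, 62.0]    # um, face length from image analysis
G = growth_rate(times, sizes)             # um/s
```

Repeating the fit at several supersaturation levels yields the growth kinetics G(sigma) that a crystallizer model needs.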


The topic of this study is the strategic choices of a company, that is, the perspectives and means by which a company acts in order to survive in competition. In particular, the study examines how competence, and more generally company characteristics and practices not tied to physical products, relate to company success. In addition, the management practices and methods applied in companies were studied in relation to the strategic choices. The bakery industry was chosen as the target industry; measured by the number of establishments, it is the largest single sector of the Finnish food industry. The study was carried out as a questionnaire survey answered by 90 Finnish bakery companies employing at most 49 people. Using cluster analysis, the companies were divided into four groups, particularly with respect to dynamism, innovativeness and cost-efficiency. The results provide empirical evidence that innovativeness and dynamism, as the strategic choice of small and medium-sized bakery companies, give a company better capabilities for developing its operations if its operating environment, and especially its competitive environment, changes. Management practices and methods, especially those of management accounting, were found to be significant factors through which strategic choices affect company success. These practices and methods can therefore be regarded as an important resource of the company and a manifestation of its competence.


When designing a classification system, the aim is to build a system that can solve the problem domain under study as accurately as possible. In pattern recognition, the core of the recognition system is the classifier. The field of classification applications is quite broad: classifiers are needed, for example, in pattern recognition systems, of which image processing is a good example. Accurate classification is also much needed in medicine; for instance, diagnosing a patient's symptoms requires a classifier that can infer from measurement results, as accurately as possible, whether the patient has a given condition or not. In this dissertation a classifier based on similarity measures was developed, and its behaviour was examined on, among others, medical data sets in which the classification task is to identify the nature of the patient's condition. An advantage of the presented classifier is its simple structure, which makes it easy both to implement and to understand. Another advantage is its accuracy: the classifier can be made to classify several different problems very accurately. This is important especially in medicine, where even a small improvement in classification accuracy is highly valuable. Several different measures of similarity are studied in the dissertation. The measures also have several parameters, for which values suited to the particular classification problem can be sought. This optimisation of the parameters to the problem domain can be performed, for example, with evolutionary algorithms; in this work a genetic algorithm and a differential evolution algorithm were used. A further advantage of the classifier is its flexibility: the similarity measure is easy to replace if it is not suitable for the problem domain under study, and optimising the parameters of the different measures can also improve the results considerably. The results can be improved further by applying different preprocessing methods before classification.
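A similarity classifier of the general kind described can be sketched as follows: features are scaled to [0,1], each class is represented by an ideal vector (here simply the class mean), and a sample is assigned to the class with the highest mean similarity. The exponent p is the kind of tunable parameter a genetic or differential evolution algorithm would optimise. This is a hedged illustration with invented toy data, not the exact measure family of the dissertation.

```python
def similarity(x, v, p):
    """Mean of per-feature similarities (1 - |x_i - v_i|**p)**(1/p) on [0,1] data."""
    return sum((1.0 - abs(a - b) ** p) ** (1.0 / p) for a, b in zip(x, v)) / len(x)

def fit_ideal_vectors(samples, labels):
    """Class 'ideal vector' = feature-wise mean of that class's samples."""
    ideals = {}
    for lab in set(labels):
        pts = [s for s, l in zip(samples, labels) if l == lab]
        ideals[lab] = [sum(col) / len(pts) for col in zip(*pts)]
    return ideals

def classify(x, ideals, p=2.0):
    return max(ideals, key=lambda lab: similarity(x, ideals[lab], p))

# toy two-feature data, scaled to [0,1]
X = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
y = ["healthy", "healthy", "ill", "ill"]
ideals = fit_ideal_vectors(X, y)
pred = classify([0.15, 0.12], ideals)
```

To tune p with an evolutionary algorithm, one would simply use cross-validated classification accuracy as the fitness function.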


This work was carried out in the Laboratory of Fluid Dynamics at Lappeenranta University of Technology during the years 1991-1996. The research was part of a larger research effort on high-speed technology. At the outset there was the idea of building high-speed machinery applications around the Brayton cycle; there was a clear need to deepen the knowledge of the cycle itself and to take a new approach in this field of research. The removal of water from humid air also seemed very interesting. The goal of this work was to study methods of designing high-speed machinery for the reversed Brayton cycle, from theoretical principles to practical applications. The reversed Brayton cycle can be employed as an air dryer, a heat pump or a refrigerating machine, and the use of humid air as the working fluid has an environmental advantage as well. A new calculation method for the Brayton cycle is developed. In this method the expansion process in the turbine is especially important because of the condensation of water vapour in the humid air; this physical phenomenon can have significant effects on the performance of the application. The influence of calculating the process with actual, achievable process equipment efficiencies is also essential for the development of future machinery. The theoretical calculations are confirmed with two different laboratory prototypes. The high-speed machinery concept allows an application to be built with only one rotating shaft carrying all the major parts: the high-speed motor, the compressor and the turbine wheel. The use of oil-free bearings and high rotational speeds gives several advantages compared to conventional machinery: light weight, compact structure, safe operation and higher efficiency over a large operating region. There are always problems when theory is applied to practice.

The calibrations of the pressure, temperature and humidity probes were made with care, but measurable errors were still not negligible. Several different separators were examined, and in all cases the separated water content could not be determined exactly. Due to the compact sizes and structures of the prototypes, the process measurements were somewhat difficult. The experimental results agree well with the theoretical calculations. These experiments prove the operation of the process and lay the ground for further development. The results of this work open very promising possibilities for the design of new, commercially competitive applications that use high-speed machinery and the reversed Brayton cycle.
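The condensation check that makes the turbine expansion special can be sketched with basic psychrometrics: if the inlet humidity ratio exceeds the maximum the air can hold at the outlet state, the difference condenses. The Magnus approximation is used for the saturation pressure, and the state points are illustrative, not taken from the prototypes.

```python
import math

def p_sat(t_celsius):
    """Saturation pressure of water vapour, Pa (Magnus approximation)."""
    return 610.94 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

def humidity_ratio(p_total, p_vapour):
    """kg of water vapour per kg of dry air at the given pressures (Pa)."""
    return 0.622 * p_vapour / (p_total - p_vapour)

# inlet to the turbine: 40 C, 60 % relative humidity, 1 bar total pressure
w_in = humidity_ratio(100e3, 0.60 * p_sat(40.0))

# after expansion: 5 C at 50 kPa; the most vapour the air can still carry
w_max_out = humidity_ratio(50e3, p_sat(5.0))

condensed = max(0.0, w_in - w_max_out)   # kg water per kg dry air
```

A full cycle model would also credit the latent heat released by `condensed` back to the gas stream, which is one reason the expansion calculation drives the performance of the whole application.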


In this thesis the different parameters influencing critical flux in protein ultrafiltration, and membrane fouling, were studied. Short reviews of proteins, cross-flow ultrafiltration, flux decline and critical flux, and the basic theory of Partial Least Squares (PLS) analysis, are given at the beginning. The experiments were mainly performed using dilute solutions of globular proteins, commercial polymeric membranes and laboratory-scale apparatus. Fouling was studied by flux, streaming potential and FTIR-ATR measurements. The critical flux was evaluated by different kinds of stepwise procedures, using both constant-pressure and constant-flux methods. The critical flux was affected by transmembrane pressure, flow velocity, protein concentration, membrane hydrophobicity, and protein and membrane charges. Generally, the lowest critical fluxes were obtained at the isoelectric point of the protein and the highest in the presence of electrostatic repulsion between the membrane surface and the protein molecules. In the laminar flow regime the critical flux increased with flow velocity, but no longer above this region. An increase in concentration decreased the critical flux. Hydrophobic membranes showed fouling in all charge conditions and, especially at the beginning of the experiment, even at very low transmembrane pressures; fouling of these membranes was attributed to protein adsorption by hydrophobic interactions. The hydrophilic membranes used suffered more from reversible fouling and concentration polarisation than from irreversible fouling; they became fouled at higher transmembrane pressures because of pore blocking. This thesis presents some new aspects of critical flux that are important for the ultrafiltration and fractionation of proteins.
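A pressure-stepping evaluation of the critical flux can be sketched as follows: in the subcritical region flux rises linearly with transmembrane pressure, and the critical flux is taken as the last measured flux still on the line fitted to the first steps within a tolerance. Data and tolerance are invented for illustration; the thesis uses several stepwise variants.

```python
def critical_flux(tmp_bar, flux_lmh, base_points=3, tol=0.05):
    """Last flux value still within tol of the linear (no-fouling) trend.

    The trend slope k is estimated from the first base_points steps,
    assumed to be safely subcritical.
    """
    k = sum(f / t for t, f in zip(tmp_bar[:base_points],
                                  flux_lmh[:base_points])) / base_points
    critical = flux_lmh[base_points - 1]
    for t, f in zip(tmp_bar[base_points:], flux_lmh[base_points:]):
        if abs(f - k * t) / (k * t) > tol:   # flux falls below the linear trend
            break
        critical = f
    return critical

tmp = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]               # bar
flux = [10.0, 20.0, 30.0, 39.5, 44.0, 46.0]        # L/(m2 h); levels off -> fouling
jc = critical_flux(tmp, flux)
```

Operating the filtration just below `jc` is the practical payoff: below the critical flux, fouling stays largely reversible.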


This work concerns the experimental study of rapid granular shear flows in an annular Couette geometry. The flow is induced by continuous driving of the horizontal plate at the top of the granular bed in an annulus. The compressive pressure, driving torque, instantaneous bed height and rotational speed of the shearing plate are measured. Moreover, local stress fluctuations are measured in a medium made of steel spheres 2 and 3 mm in diameter. Both monodisperse and bidisperse packings are investigated to reveal the influence of size diversity on the intermittent features of granular materials. Experiments are conducted in an annulus that can contain up to 15 kg of spherical steel balls. Shearing of the granular medium takes place via the rotation of the upper plate, which compresses the material loaded inside the annulus. Fluctuations of the compressive force are measured locally at the bottom of the annulus using a piezoelectric sensor. Rapid shear flow experiments are carried out at different compressive forces and shear rates, and the sensitivity of the fluctuations is then investigated by different means for monodisperse and bidisperse packings. Another important feature of rapid granular shear flows is the formation of ordered structures upon shearing; obtaining stable flows requires the amount of granular material (of uniform size distribution) loaded into the system to lie within a certain range, and this is studied in more depth in this thesis. The results of the current work bring new insights into deformation dynamics and intermittency in rapid granular shear flows. The experimental apparatus is modified in comparison to earlier investigations, and the measurements produce continuously sampled data for various quantities from the start of shearing to the end. Static failure and dynamic shearing of a granular medium are investigated, and the results reveal some important features of failure dynamics and structure formation in the system.

Furthermore, computer simulations are performed in a 2D annulus to examine the nature of kinetic energy dissipation. It is found that turbulent flow models can statistically represent rapid granular flows with high accuracy. In addition to the academic outcomes and scientific publications, the results have a number of technological applications associated with grinding, mining and massive grain storage.
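The basic fluctuation statistic for comparing packings can be sketched simply: remove the mean from the force signal and report the RMS of the fluctuating part relative to the mean. The signal below is synthetic Gaussian noise standing in for the piezoelectric record; the actual granular signal is intermittent rather than Gaussian, which is exactly what such statistics are used to quantify.

```python
import math
import random

def fluctuation_intensity(signal):
    """RMS of the mean-removed signal divided by the mean."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    return rms / mean

random.seed(0)
# synthetic stand-in for the local compressive force record, in newtons
force = [50.0 + random.gauss(0.0, 5.0) for _ in range(10000)]
fi = fluctuation_intensity(force)
```

Comparing `fi` (and higher moments or spectra of the same mean-removed signal) between monodisperse and bidisperse runs is one way to expose the effect of size diversity on intermittency.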