Abstract:
Gas-liquid mass transfer is an important issue in the design and operation of many chemical unit operations. Despite its importance, the evaluation of gas-liquid mass transfer is not straightforward due to the complex nature of the phenomena involved. In this thesis gas-liquid mass transfer was evaluated in three different gas-liquid reactors in a traditional way by measuring the volumetric mass transfer coefficient (kLa). The studied reactors were a bubble column with a T-junction two-phase nozzle for gas dispersion, an industrial scale bubble column reactor for the oxidation of tetrahydroanthrahydroquinone and a concurrent downflow structured bed. The main drawback of this approach is that the obtained correlations give only the average volumetric mass transfer coefficient, which is dependent on average conditions. Moreover, the obtained correlations are valid only for the studied geometry and for the chemical system used in the measurements. In principle, a more fundamental approach is to estimate the interfacial area available for mass transfer from bubble size distributions obtained by solution of population balance equations. This approach has been used in this thesis by developing a population balance model for a bubble column together with phenomenological models for bubble breakage and coalescence. The parameters of the bubble breakage rate and coalescence rate models were estimated by comparing the measured and calculated bubble sizes. The coalescence models always have at least one experimental parameter. This is because bubble coalescence depends on liquid composition in a way which is difficult to evaluate using known physical properties. The coalescence properties of some model solutions were evaluated by measuring the time that a bubble rests at the free liquid-gas interface before coalescing (the so-called persistence time or rest time). The measured persistence times range from 10 ms up to 15 s depending on the solution. The coalescence was never found to be instantaneous. The bubble oscillates up and down at the interface at least a couple of times before coalescence takes place. The measured persistence times were compared to coalescence times obtained by parameter fitting using measured bubble size distributions in a bubble column and a bubble column population balance model. For short persistence times, the persistence and coalescence times are in good agreement. For longer persistence times, however, the persistence times are at least an order of magnitude longer than the corresponding coalescence times from parameter fitting. This discrepancy may be attributed to the uncertainties concerning the estimation of energy dissipation rates, collision rates and mechanisms, and contact times of the bubbles.
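Although the thesis covers the details of the kLa measurements, the basic evaluation step can be illustrated with a short sketch. The snippet below is a minimal example assuming the dynamic gassing-in method with an ideally mixed liquid phase; the response model, parameter values and data are illustrative placeholders rather than measurements from the thesis.

```python
# Minimal sketch: estimating kLa from a dynamic gassing-in experiment by
# fitting C(t) = C_star * (1 - exp(-kLa * t)) to dissolved-oxygen data.
# The data below are synthetic placeholders, not measurements from the thesis.
import numpy as np
from scipy.optimize import curve_fit

def oxygen_response(t, kla, c_star):
    """Ideal well-mixed liquid response after a step change in gas feed."""
    return c_star * (1.0 - np.exp(-kla * t))

# Hypothetical time series (s) and dissolved-oxygen concentrations (mol/m^3)
t = np.linspace(0.0, 120.0, 25)
c_measured = 0.25 * (1.0 - np.exp(-0.05 * t)) + np.random.normal(0.0, 0.005, t.size)

(kla_fit, c_star_fit), _ = curve_fit(oxygen_response, t, c_measured, p0=[0.01, 0.2])
print(f"fitted kLa = {kla_fit:.4f} 1/s, saturation C* = {c_star_fit:.3f} mol/m^3")
```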
Abstract:
In a centrifugal compressor the flow leaving the diffuser is collected and led to the pipe system by a spiral-shaped volute. In this study a single-stage centrifugal compressor with three different volutes is investigated. The compressor was first equipped with the original volute, the cross-section of which was a combination of a rectangle and a semi-circle. Next a new volute with a fully circular cross-section was designed and manufactured. Finally, the circular volute was modified by rounding the tongue and smoothing the tongue area. The overall performance of the compressor as well as the static pressure distribution after the impeller and on the volute surface were measured. The flow entering the volute was measured using a three-hole Cobra probe, and flow visualisations were carried out in the exit cone of the volute. In addition, the radial force acting on the impeller was measured using magnetic bearings. The complete compressor with the circular volute (inlet pipe, full impeller, diffuser, volute and outlet pipe) was also modelled using computational fluid dynamics (CFD). The fully 3-D viscous flow was solved using a Navier-Stokes solver, Finflo, developed at Helsinki University of Technology. Chien's k-epsilon model was used to take account of the turbulence. The differences observed in the performance of the different volutes were quite small. The biggest differences occurred at low speeds and high volume flows, i.e. when the flow entered the volute most radially. In this operating regime the efficiency of the compressor with the modified circular volute was about two percentage points higher than with the other volutes. Also, according to the Cobra-probe measurements and flow visualisations, the modified circular volute performed better than the other volutes in this operating area. The circumferential static pressure distribution in the volute showed an increase at low flow, a constant distribution at the design flow and a decrease at high flow. The non-uniform static pressure distribution of the volute was transmitted backwards across the vaneless diffuser and observed at the impeller exit. At low volume flow a strong two-wave pattern developed in the static pressure distribution at the impeller exit due to the response of the impeller to the non-uniformity of pressure. The radial force on the impeller was greatest at the choke limit, smallest at the design flow, and moderate at low flow. At low flow the force increase was quite mild, whereas the increase at high flow was rapid. Thus, the non-uniformity of pressure and the force related to it are strong especially at high flow. The force caused by the modified circular volute was weaker at choke and more symmetric as a function of the volume flow than the force caused by the other volutes.
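The overall-performance comparison above rests on standard compressor performance quantities. As a hedged illustration, the sketch below computes a total-to-total isentropic efficiency from measured total pressures and temperatures, assuming a calorically perfect gas; the numerical values are placeholders, not data from this study.

```python
# Minimal sketch: isentropic efficiency of a centrifugal compressor stage from
# measured total pressures and temperatures, assuming a calorically perfect gas.
# Numerical values are illustrative placeholders, not data from the study.

def isentropic_efficiency(p01, p02, t01, t02, gamma=1.4):
    """eta = (T02s - T01) / (T02 - T01), with T02s = T01*(p02/p01)**((gamma-1)/gamma)."""
    t02s = t01 * (p02 / p01) ** ((gamma - 1.0) / gamma)  # ideal (isentropic) exit temperature
    return (t02s - t01) / (t02 - t01)

eta = isentropic_efficiency(p01=101325.0, p02=180000.0, t01=293.15, t02=355.0)
print(f"total-to-total isentropic efficiency = {eta:.3f}")
```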
Abstract:
The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is the minimum current control. In the DTC the stator flux linkage reference is usually kept constant. Achieving the minimum current requires control of the reference. An on-line method to perform the minimization of the current by controlling the stator flux linkage reference is presented. The control of the reference above the base speed is also considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller's stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
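Since stator flux-linkage estimation is named as the key element of the drive, a brief sketch may help. The snippet below shows a generic voltage-model estimator with a first-order low-pass filter replacing the pure integrator to limit drift; the machine parameters, filter corner frequency and input signals are assumed placeholders, not the improved integration methods developed in the thesis.

```python
# Minimal sketch: voltage-model stator flux-linkage estimation for a PMSM,
# psi = integral(u_s - R_s * i_s) dt, with a first-order low-pass filter used
# in place of a pure integrator to limit drift. Parameters are illustrative.
import numpy as np

R_S = 0.05          # stator resistance (ohm), assumed
DT = 50e-6          # sampling period (s)
OMEGA_C = 5.0       # low-pass corner frequency (rad/s), assumed

def flux_step(psi, u_s, i_s):
    """One discrete update of the filtered voltage-model flux estimate (alpha-beta frame)."""
    emf = u_s - R_S * i_s                       # back-EMF estimate
    return psi + DT * (emf - OMEGA_C * psi)     # leaky integration

psi = np.zeros(2)                               # [psi_alpha, psi_beta]
for k in range(1000):
    u_s = 100.0 * np.array([np.cos(2*np.pi*50*k*DT), np.sin(2*np.pi*50*k*DT)])
    i_s = np.array([10.0, 0.0])                 # placeholder current samples
    psi = flux_step(psi, u_s, i_s)
print(psi)
```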
Abstract:
The simple single-ion activity coefficient equation originating from the Debye-Hückel theory was used to determine the thermodynamic and stoichiometric dissociation constants of weak acids from data concerning galvanic cells. Electromotive force data from galvanic cells without liquid junctions, obtained from the literature, were studied in conjunction with potentiometric titration data relating to aqueous solutions at 298.15 K. The dissociation constants of weak acids could be determined by the presented techniques, and almost all the experimental data studied could be interpreted within the range of experimental error. Potentiometric titration has been used here, and the calculation methods were developed to obtain the thermodynamic and stoichiometric dissociation constants of some weak acids in aqueous solutions at 298.15 K. The ionic strength of the titrated solutions was adjusted using an inert electrolyte, namely sodium or potassium chloride, so that the salt content alone determines the ionic strength. The ionic strength of the solutions studied varied from 0.059 mol kg-1 to 0.37 mol kg-1, and in some cases up to 1.0 mol kg-1. The following substances were investigated using potentiometric titration: acetic acid, propionic acid, L-aspartic acid, L-glutamic acid and bis(2,2-dimethyl-3-oxopropanol) amine.
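To make the role of the single-ion activity coefficient concrete, the sketch below evaluates the extended Debye-Hückel expression at 298.15 K and converts a stoichiometric dissociation constant into a thermodynamic one. The A value is the standard aqueous Debye-Hückel constant, while the ion-size term, the example ionic strength and the Km value are assumed placeholders rather than the parameterization used in the thesis.

```python
# Minimal sketch: extended Debye-Hückel single-ion activity coefficient
# log10(gamma) = -A * z^2 * sqrt(I) / (1 + B*a * sqrt(I)) at 298.15 K.
# A = 0.509 (kg/mol)^0.5 is the standard aqueous value; the B*a term and the
# example numbers are assumptions, not the parameterization of the thesis.
import math

def log10_gamma(z, ionic_strength, A=0.509, Ba=1.0):
    sqrt_i = math.sqrt(ionic_strength)
    return -A * z**2 * sqrt_i / (1.0 + Ba * sqrt_i)

# Example: stoichiometric -> thermodynamic dissociation constant of a weak acid HA,
# Ka = Km * gamma_H * gamma_A / gamma_HA, with gamma_HA ~ 1 for the neutral acid.
I = 0.1          # ionic strength, mol/kg
Km = 2.8e-5      # hypothetical stoichiometric constant
gamma_pm = 10 ** log10_gamma(1, I)
Ka = Km * gamma_pm**2
print(f"gamma(+/-1) = {gamma_pm:.3f}, thermodynamic Ka = {Ka:.3e}")
```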
Abstract:
Cardiac failure is one of the leading causes of mortality in developed countries. As the life expectancies of the populations of these countries grow, the number of patients suffering from cardiac insufficiency also increases. Effective treatments, including the use of calcium sensitisers, are being sought. They cause a positive inodilatory effect on cardiomyocytes without the deleterious effects (arrhythmias) resulting from increases in intracellular calcium concentration. Levosimendan is a novel calcium sensitiser that has been proved to be a well-tolerated and effective treatment for patients with severe decompensated heart failure. Cardiac troponin C (cTnC) is its target protein. However, there have been controversies about the interactions between levosimendan and cTnC. Some of these controversies have been addressed in this dissertation. Furthermore, studies on the calcium sensitising mechanism based on the interactions between levosimendan and cTnC, as followed by nuclear magnetic resonance (NMR), are presented and discussed. Levosimendan was found to interact with both domains of calcium-saturated cTnC in the absence of cardiac troponin I (cTnI). In the presence of cTnI, the C-domain binding site was blocked and levosimendan interacted only with the regulatory domain of cTnC. This interaction may have caused the observed calcium sensitising effect by priming the N-domain for cTnI binding, thereby extending the lifetime of that complex. It is suggested that this is achieved by shifting the equilibrium between the open and closed conformations.
Abstract:
In this thesis, the magnetic field control of convection instabilities and heat and mass transfer processes in magnetic fluids has been investigated by numerical simulations and theoretical considerations. Simulation models based on finite element and finite volume methods have been developed. In addition to the standard conservation equations, the magnetic field inside the simulation domain is calculated from the Maxwell equations, and the terms needed to account for the magnetic body force and magnetic dissipation have been added to the equations governing the fluid motion. Numerical simulations of magnetic fluid convection near the threshold supported experimental observations qualitatively. Near the onset of convection, the competitive action of thermal and concentration density gradients leads to mostly spatiotemporally chaotic convection with oscillatory and travelling wave regimes, previously observed in binary mixtures and nematic liquid crystals. In many applications of magnetic fluids, the heat and mass transfer processes, including the effects of external magnetic fields, are of great importance. In addition to magnetic fluids, the concepts and the simulation models used in this study may also be applied to studies of convective instabilities in ordinary fluids as well as in other binary mixtures and complex fluids.
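The magnetic body force term added to the flow equations can be written compactly. The equation below is a commonly used form of the momentum balance for an incompressible ferrofluid with the Kelvin body force; it is given here as a textbook-style illustration and is not necessarily the exact term set implemented in the thesis.

```latex
% Commonly cited momentum balance for an incompressible magnetic (ferro)fluid
% with the Kelvin body force; a textbook form, not necessarily the exact model
% implemented in the thesis.
\rho \left( \frac{\partial \mathbf{u}}{\partial t}
      + (\mathbf{u}\cdot\nabla)\mathbf{u} \right)
  = -\nabla p + \eta \nabla^{2}\mathbf{u}
    + \rho \mathbf{g}
    + \mu_{0} (\mathbf{M}\cdot\nabla)\mathbf{H},
\qquad \nabla\cdot\mathbf{u} = 0 .
```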
Abstract:
Knowledge sharing and communication are important activities between networked companies, and they are regarded as a success factor and cornerstone of a collaborative relationship. Challenges related to knowledge sharing include, among others, the leakage of business-critical information and the real-time availability and sufficient amount of information required by the business. In product development collaboration, the unstructured nature of the information and the consequently increased need for sharing are challenging; in addition, the shared information is often complex and detailed. Furthermore, product life cycles are shortening, and outsourcing and collaboration are growing trends in business. Together these factors make knowledge sharing challenging, especially between networked companies. In this study, the challenges of knowledge sharing were addressed by taking the understanding of the context dependence of knowledge sharing as the starting point. The work answered two main questions: What is the context dependence of knowledge sharing, and how can it be managed? In this work, context dependence refers to the factors that influence how a company shares information with its product development partners. Knowledge sharing, in turn, refers to the information transferred from one company to another that is needed during a product development project. The empirical material of the work was collected with a qualitative, case study research approach in one telecommunications company and its different business units. The study population comprised 19 directors or managers working in product development and supplier management roles. The work draws mainly on the research field of purchasing and supply management, and network research in particular was examined in order to clarify the context dependence. Knowledge sharing was described as one of the activities of a network, and the benefits, challenges and risks of knowledge sharing related to collaboration were identified. In addition, models for studying networks were developed, and network research conducted at different levels was combined. A model for studying network activities was presented, and it was concluded that network research should be carried out at the network, chain, business relationship and company levels. It is also useful to incorporate product- and task-specific characteristics into the model. Based on the literature review, it was noted that knowledge sharing had previously been examined mainly at the level of products and business relationships. The dissertation presented additional significant factors that influence knowledge sharing, including the nature of the product development task, the maturity of the technology area and the capability of the supplier. When examining the nature of knowledge sharing, a distinction was made between operational information related to project management and product development, and general strategic information related to supplier management. According to the results, the specification phase of product development and face-to-face meetings were particularly emphasized in the collaboration. The empirical data were also used to study the means by which knowledge sharing can be managed on the basis of context dependence, because the means of managing knowledge sharing, or its success factors, had not previously been linked directly to different conditions. These management means were divided into collaboration-level and product development project-level factors. One of the key results of the work is that, despite the challenges of knowledge sharing, many of them can be eliminated by recognizing the prevailing conditions and by investing in the means of managing knowledge sharing. The managerial contribution of the work concerns especially companies that plan and carry out product development collaboration with their business partners. The work presents means for managing this challenging field and concludes that companies should pay increasing attention to the management of knowledge sharing and communication already when planning product development collaboration.
Abstract:
Crystal growth is an essential phase in crystallization kinetics. The rate of crystal growth provides significant information for the design and control of crystallization processes; nevertheless, obtaining accurate growth rate data is still challenging due to a number of factors that prevail in crystal growth. In industrial crystallization, crystals are generally grown from multi-component and multi-particle solutions under complicated hydrodynamic conditions; thus, it is crucial to increase the general understanding of the growth kinetics in these systems. The aim of this work is to develop a model of the crystal growth rate from solution. An extensive literature review of crystal growth focuses on the modelling of growth kinetics and thermodynamics, and on new measuring techniques that have been introduced in the field of crystallization. The growth of a single crystal is investigated in binary and ternary systems. The binary system consists of potassium dihydrogen phosphate (KDP, the crystallizing solute) and water (the solvent), and the ternary system includes KDP, water and an organic admixture. The studied admixtures, urea, ethanol and 1-propanol, are employed at relatively high concentrations (up to 5.0 molal). The influence of the admixtures on the solution thermodynamics is studied using the Pitzer activity coefficient model. A prediction method for the ternary solubility in the studied systems is introduced and verified. The growth rate of the KDP (101) face in the studied systems is measured in a growth cell as a function of supersaturation, admixture concentration, solution velocity over the crystal and temperature. In addition, the surface morphology of the KDP (101) face is studied using ex situ atomic force microscopy (AFM). The crystal growth rate in the ternary systems is modelled on the basis of the two-step growth model, which combines the Maxwell-Stefan (MS) equations with a surface-reaction model. This model is used together with the measured crystal growth rate data to develop a new method for the evaluation of the model parameters. The model is validated against experiments. The crystal growth rate in an imperfectly mixed suspension crystallizer is investigated using computational fluid dynamics (CFD). A solid-liquid suspension flow that includes multi-sized particles is described by a multi-fluid model together with a standard k-epsilon turbulence model and an interface momentum transfer model. The local crystal growth rate is determined from the calculated flow information in a diffusion-controlled crystal growth regime. The calculated results are evaluated experimentally.
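Since the two-step growth model combines a diffusion step and a surface-integration step, a brief numerical sketch may clarify how a growth rate follows from supersaturation. The snippet below solves for the interface concentration at which the two fluxes match; the rate constants, reaction order and concentrations are illustrative assumptions, not the fitted parameters of the thesis.

```python
# Minimal sketch of the two-step crystal growth model: a diffusion step
# k_d*(c_b - c_i) in series with a surface-integration step k_r*(c_i - c_sat)**r.
# The interface concentration c_i is solved so both fluxes match; all parameter
# values are illustrative placeholders, not fitted constants from the thesis.
from scipy.optimize import brentq

def growth_rate(c_bulk, c_sat, k_d=1e-4, k_r=5e-5, r=2.0):
    """Return the growth flux (arbitrary units) and interface concentration."""
    def flux_balance(c_i):
        return k_d * (c_bulk - c_i) - k_r * (c_i - c_sat) ** r
    c_i = brentq(flux_balance, c_sat, c_bulk)      # interface concentration
    return k_d * (c_bulk - c_i), c_i

G, c_i = growth_rate(c_bulk=1.10, c_sat=1.00)
print(f"growth flux = {G:.3e}, interface concentration = {c_i:.4f}")
```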
Abstract:
The application of forced unsteady-state reactors to the selective catalytic reduction of nitrogen oxides (NOx) with ammonia (NH3) is motivated by the fact that favourable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. In normal operation, the low exothermicity of the selective catalytic reduction (SCR) reaction (usually carried out in the range 280-350 °C) is not sufficient to sustain the chemical reaction by itself. Normal operation therefore usually requires a supply of supplementary heat, which increases the overall operating cost of the process. Through forced unsteady-state operation, the main advantage that can be obtained when exothermic reactions take place is the possibility of trapping, beside the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed: the bed acts as a regenerative heat exchanger, allowing auto-thermal behavior when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are highly important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR) - a forced unsteady-state reactor - meets the above-mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Beside its advantages, the main disadvantage of the RFR is the 'wash-out' phenomenon: emissions of unconverted reactants at every switch of the flow direction. As a consequence, attention was focused on finding an alternative to the RFR that is not affected by uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated. Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the wash-out phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving a practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network allows only a small range of switching times for which an ignited state can be reached and maintained. Even so, a proper study of the complex behavior of the RN may give the information needed to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interactions between heat and mass transport phenomena. Such interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and travelling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used.
Paying attention to the above-mentioned aspects is important when higher activity, even at low feed temperatures, and low emissions of unconverted reactants are the main operating concerns. Also, the prediction of the reactor pseudo- or steady-state performance (regarding conversion, selectivity and thermal behavior) and the dynamic reactor response during exploitation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge of the influence of its operating conditions on the overall process performance and a precise evaluation of the range of operating parameters for which a sustained dynamic behavior is obtained. An a priori estimation of the system parameters reduces the computational effort. Usually, the convergence of unsteady-state reactor systems requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices that are capable of maintaining auto-thermal behavior in the case of weakly exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the presentation of the general problem of the effect of noxious emissions on the environment, the analysis of suitable catalyst types for the process, the mathematical approach for modeling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information about forced unsteady-state reactor design, operation, important system parameters and their values, mathematical description, mathematical methods for solving systems of partial differential equations and other specific aspects in a fast and easy way, a case-based reasoning (CBR) approach was used. This approach, using the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, relaxing the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operating regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices mentioned above was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance.
The non-isothermal system approach was investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and the RN as recuperative and regenerative devices and the possibility of achieving sustained auto-thermal behavior in the case of the weakly exothermic SCR of NOx with ammonia and low-temperature gas feeding. Beside the influence of the thermal effect, the influence of the principal operating parameters, such as switching time, inlet flow rate and initial catalyst temperature, has been stressed. This analysis is important not only because it allows a comparison between the two devices and optimisation of the operation, but also because the switching time is the main operating parameter; an appropriate choice of this parameter enables the fulfilment of the process constraints. The levels of conversion achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation established the RN as a much more suitable device for the SCR of NOx with ammonia, both in usual operation and in the perspective of control strategy implementation. Simplified theoretical models have also been proposed to describe the performance of the forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics by taking into account perspectives that have not yet been analyzed. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.
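To make the reverse-flow idea concrete, the sketch below integrates a deliberately simplified one-dimensional pseudo-homogeneous bed model with periodic flow reversal, showing how the reversal keeps the hot zone inside the bed. The kinetics, bed properties and numerical settings are illustrative assumptions only, not the detailed model, catalyst data or operating parameters used in the thesis.

```python
# Minimal sketch: 1D pseudo-homogeneous reverse flow reactor (RFR) model for a
# weakly exothermic first-order reaction, integrated with an explicit upwind
# scheme and periodic flow reversal. All parameters are illustrative placeholders.
import numpy as np

N, L = 100, 0.5                  # grid cells, bed length (m)
dz = L / N
u, dt = 0.5, 1e-3                # interstitial gas velocity (m/s), time step (s)
k0, Ea, R = 1e6, 60e3, 8.314     # assumed Arrhenius kinetics
dT_ad = 30.0                     # adiabatic temperature rise (K), weakly exothermic
alpha = 1e-3                     # gas-to-bed heat capacity ratio (bed thermal inertia)
steps_per_switch = 60_000        # flow reversal every 60 s

T = np.full(N, 550.0)            # preheated catalyst bed temperature (K)
c = np.zeros(N)                  # dimensionless reactant concentration
T_in, c_in = 400.0, 1.0          # cold feed
direction = +1

for step in range(1, 300_000 + 1):               # 300 s of operation
    if direction > 0:                            # upwind neighbours follow flow direction
        c_up = np.concatenate(([c_in], c[:-1]))
        T_up = np.concatenate(([T_in], T[:-1]))
    else:
        c_up = np.concatenate((c[1:], [c_in]))
        T_up = np.concatenate((T[1:], [T_in]))
    r = k0 * np.exp(-Ea / (R * T)) * c           # reaction rate (1/s)
    c += dt * (u * (c_up - c) / dz - r)
    T += dt * alpha * (u * (T_up - T) / dz + dT_ad * r)
    if step % steps_per_switch == 0:
        direction = -direction                   # reversal traps the heat wave in the bed

outlet = c[-1] if direction > 0 else c[0]
print(f"outlet conversion after 300 s ~ {1.0 - outlet:.3f}")
```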
Abstract:
Belt-drive systems have been and still are the most commonly used form of power transmission in various applications of different scale and use. The peculiar features of the dynamics of belt-drives include highly nonlinear deformation, large rigid body motion, dynamic contact through a dry friction interface between the belt and the pulleys with sticking and slipping zones, cyclic tension of the belt during operation and creeping of the belt against the pulleys. The life of the belt-drive is critically dependent on these features, and therefore a model which can be used to study the correlations between the initial values and the responses of the belt-drive is a valuable source of information for the belt-drive development process. Traditionally, finite element models of belt-drives consist of a large number of elements, which may lead to computational inefficiency. In this research, the beneficial features of the absolute nodal coordinate formulation are utilized in the modeling of belt-drives in order to fulfil the following requirements for the successful and efficient analysis of belt-drive systems: the exact modeling of the rigid body inertia during an arbitrary rigid body motion, the consideration of the effect of shear deformation, the exact description of highly nonlinear deformations and a simple and realistic description of the contact. Distributed contact forces and high-order beam and plate elements based on the absolute nodal coordinate formulation are applied to the modeling of belt-drives in two- and three-dimensional cases. According to the numerical results, realistic behavior of the belt-drives can be obtained with a significantly smaller number of elements and degrees of freedom in comparison to previously published finite element models of belt-drives. The results of the examples demonstrate the functionality and suitability of the absolute nodal coordinate formulation for the computationally efficient and realistic modeling of belt-drives. This study also introduces an approach to avoid the problems related to the use of the continuum mechanics approach in the definition of the elastic forces in the absolute nodal coordinate formulation. This approach is applied to a new computationally efficient two-dimensional shear deformable beam element based on the absolute nodal coordinate formulation. The proposed beam element uses a linear displacement field neglecting higher-order terms and a reduced number of nodal coordinates, which leads to fewer degrees of freedom in a finite element.
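For readers unfamiliar with the formulation, the position field of a planar absolute nodal coordinate (ANCF) beam element can be sketched as follows. This is the commonly quoted generic form, with global nodal position vectors and transverse slope vectors as nodal coordinates; the reduced, linear displacement field of the element proposed in the thesis may differ in its exact choice of coordinates.

```latex
% Generic position field of a planar shear deformable ANCF beam element: the
% global position is interpolated from nodal positions r_i and transverse
% slopes dr_i/dy through the shape function matrix S. A textbook-style sketch,
% not necessarily the exact displacement field of the element in the thesis.
\mathbf{r}(x, y, t) = \mathbf{S}(x, y)\, \mathbf{e}(t), \qquad
\mathbf{e} = \left[
  \mathbf{r}_1^{\mathsf{T}} \;\;
  \left(\tfrac{\partial \mathbf{r}_1}{\partial y}\right)^{\mathsf{T}} \;\;
  \mathbf{r}_2^{\mathsf{T}} \;\;
  \left(\tfrac{\partial \mathbf{r}_2}{\partial y}\right)^{\mathsf{T}}
\right]^{\mathsf{T}}
```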
Abstract:
In this thesis, membrane filtration equipment for plate-type ceramic membranes was developed based on filtration results achieved with different kinds of wastewaters. The experiments were mainly made with pulp and board mill wastewaters, but some experiments were also made with bore well water and a stone cutting mine wastewater. The ceramic membranes used were alpha-alumina membranes with a pore size of 100 nm. Some of the membranes were coated with a gamma-alumina layer to reduce the membrane pore size to 10 nm, and some of them were modified with different metal oxides in order to change the surface properties of the membranes. The effects of operating parameters, such as cross-flow velocity, filtration pressure and backflushing, on filtration performance were studied. The measured parameters were the permeate flux, the quality of the permeate and the fouling tendency of the membrane. A dynamic membrane or a cake layer forming on top of the membrane was observed to decrease the flux and increase the separation of certain substances, especially at low cross-flow velocities. When the cross-flow velocities were increased, the membrane properties became more important. Backflushing could also be used to decrease the thickness of the cake layer and thus improve the permeate flux. However, backflushing can lead to a reduction in retentions in cases where the cake layer improves them. The wastewater quality was important for the thickness of the dynamic membrane, and the membrane pore size influenced the permeate flux. In general, the optimization of operating conditions is very important for the successful operation of a membrane filtration system. Filtration equipment with a reasonable range of operating conditions is necessary, especially when different kinds of wastewaters are treated. This should be taken into account already in the development stage of the filtration equipment.
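The interplay between the membrane resistance and the cake (dynamic membrane) resistance described above is often summarized with a resistance-in-series flux model. The sketch below is a minimal illustration of that model; the viscosity, membrane and cake resistances and the deposit values are assumed placeholders, not properties of the membranes or wastewaters studied.

```python
# Minimal sketch: resistance-in-series model of permeate flux in cross-flow
# filtration, J = dP / (mu * (R_m + R_c)), where the cake/dynamic-membrane
# resistance R_c grows with the deposited mass. Values are illustrative, not
# measured resistances of the alpha-alumina membranes studied in the thesis.

MU = 1.0e-3          # permeate viscosity (Pa*s), water at ~20 C
R_MEMBRANE = 2.0e12  # clean membrane resistance (1/m), assumed
ALPHA = 1.0e13       # specific cake resistance per unit deposit (1/m per kg/m^2), assumed

def permeate_flux(delta_p, deposit):
    """Permeate flux (m^3/m^2/s) for a given pressure and cake deposit (kg/m^2)."""
    r_cake = ALPHA * deposit
    return delta_p / (MU * (R_MEMBRANE + r_cake))

# Backflushing can be thought of as periodically resetting the deposit:
for deposit in (0.0, 0.05, 0.1):
    j = permeate_flux(delta_p=1.5e5, deposit=deposit)
    print(f"deposit {deposit:.2f} kg/m^2 -> flux {j*3600*1000:.1f} L/(m^2*h)")
```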
Abstract:
The marketplace of the twenty-first century will demand that manufacturing assumes a crucial role in a new competitive field. Two potential resources in the area of manufacturing are advanced manufacturing technology (AMT) and empowered employees. Surveys in Finland have shown the need to invest in new AMT in the Finnish sheet metal industry in the 1990s. In this process the focus has been on hard technology, and less attention has been paid to the utilization of human resources. In many manufacturing companies an appreciable portion of the profit within reach is wasted due to poor quality of planning and workmanship. This thesis examines the distribution of production errors along the production flow of sheet metal part based constructions. The objective of the thesis is to analyze the origins of production errors in the production flow of sheet metal based constructions. Employee empowerment is also investigated in theory, and its role in reducing the overall number of production errors is discussed. This study is most relevant to the sheet metal part fabricating industry, which produces sheet metal part based constructions for the electronics and telecommunications industry. The study concentrates on the manufacturing function of a company and is based on a field study carried out in five Finnish case factories. In each case factory studied, the work phases most prone to production errors were identified. It can be assumed that most of the production errors are caused in manually operated work phases and in mass production work phases. However, no common pattern of production error distribution along the production flow could be found in the collected data. The most important finding was nevertheless that, in each case factory studied, most of the production errors belong to the category of human-activity-based errors. This result indicates that most of the problems in the production flow are related to employees or work organization. Development activities must therefore be focused on the development of employee skills or the development of work organization. Employee empowerment provides the right tools and methods to achieve this.
Abstract:
1. Introduction "The one that has compiled ... a database, the collection, securing the validity or presentation of which has required an essential investment, has the sole right to control the content over the whole work or over either a qualitatively or quantitatively substantial part of the work both by means of reproduction and by making them available to the public", Finnish Copyright Act, section 49.1 These are the laconic words that implemented the much-awaited and hotly debated European Community Directive on the legal protection of databases (the EDD) into Finnish copyright legislation in 1998. Now in the year 2005, after more than half a decade of domestic implementation, the proper meaning and construction of the convoluted qualitative criteria that the current legislation employs as a prerequisite for database protection remain uncertain both in Finland and within the European Union. Further, this opaque pan-European instrument has the potential of bringing about a number of far-reaching economic and cultural ramifications, which have remained largely uncharted or unobserved. Thus the task of understanding this particular and currently peculiarly European new intellectual property regime is twofold: first, to understand the mechanics and functioning of the EDD and, second, to realise the potential and risks inherent in the new legislation in its economic, cultural and societal dimensions. 2. Subject-matter of the study: basic issues The first part of the task mentioned above is straightforward: questions such as what is meant by the key concepts triggering the functioning of the EDD, such as the presentation of independent information, what constitutes an essential investment in acquiring data, and when the reproduction of a given database reaches either qualitatively or quantitatively the threshold of substantiality before the right-holder of a database can avail himself of the remedies provided by the statutory framework, remain unclear and call for careful analysis. As for the second task, it is already obvious that the practical importance of the legal protection provided by the database right is rapidly increasing. The accelerating transformation of information into digital form is an existing fact, not merely a reflection of the shape of things to come. To take a simple example, the digitisation of a map, traditionally in paper format and protected by copyright, can provide the consumer with markedly easier and faster access to the wanted material, and the price can be, depending on the current state of the marketplace, cheaper than that of the traditional form or even free by means of public lending libraries providing access to the information online. This also renders it possible for authors and publishers to make available and sell their products to markedly larger, international markets, while the production and distribution costs can be kept to a minimum thanks to the new electronic production, marketing and distribution mechanisms, to mention a few. The troublesome side for authors and publishers is the vastly enhanced potential for illegal copying by electronic means, producing numerous virtually identical copies at speed.
The fear of illegal copying can lead to stark technical protection that in turn can dampen the demand for information goods and services and, furthermore, efficiently hamper the right of access to materials lawfully available in electronic form, and thus weaken the possibility of access to information, education and the cultural heritage of a nation or nations, a condition precedent for a functioning democracy. 3. Particular issues in Digital Economy and Information Networks All that is said above applies a fortiori to databases. As a result of the ubiquity of the Internet and the pending breakthrough of the Mobile Internet, peer-to-peer networks, and Local and Wide Local Area Networks, a rapidly increasing amount of information not protected by traditional copyright, such as various lists, catalogues and tables, previously protected partially by the old section 49 of the Finnish Copyright Act, is available free or for consideration on the Internet; by the same token, importantly, numerous databases are collected in order to enable the marketing, tendering and selling of products and services in the above-mentioned networks. Databases and the information embedded therein constitute a pivotal element in virtually any commercial operation, including product and service development, scientific research and education. A poignant but not immediately obvious example of this is a database consisting of the physical coordinates of a certain selected group of customers for marketing purposes through cellular phones, laptops and several handheld or vehicle-based devices connected online. These practical needs call for answers to the plethora of questions already outlined above: Has the collection and securing of the validity of this information required an essential input? What qualifies as a quantitatively or qualitatively significant investment? According to the Directive, a database comprises works, information and other independent materials which are arranged in a systematic or methodical way and are individually accessible by electronic or other means. Under what circumstances, then, are the materials regarded as arranged in a systematic or methodical way? Only when the protected elements of a database are established does the question concerning the scope of protection become acute. In the digital context, the traditional notions of reproduction and making available to the public of digital materials seem to fit ill, or lead to interpretations that are at variance with the analogous domain as regards the lawful and illegal uses of information. This may well interfere with or rework the way in which commercial and other operators have to establish themselves and function in the existing value networks of information products and services. 4. International sphere After the expiry of the implementation period for the European Community Directive on the legal protection of databases, the goals of the Directive must have been consolidated into the domestic legislation of the current twenty-five Member States of the European Union. On the one hand, these fundamental questions readily imply that the problems related to the correct construction of the Directive underlying the domestic legislation transcend national boundaries. On the other hand, the disputes arising on account of the implementation and interpretation of the Directive at the European level attract significance domestically.
Consequently, the guidelines on the correct interpretation of the Directive, importing practical, business-oriented solutions, may well have application at the European level. This underlines the exigency of a thorough analysis of the implications of the meaning and potential scope of database protection in Finland and the European Union. This position has to be contrasted with the larger, international sphere, which in early 2005 differs markedly from the European Union stance, directly having a negative effect on international trade, particularly in digital content. A particular case in point is the USA, a database producer primus inter pares, not as yet having a Sui Generis database regime or its kin, while both the political and academic discourse on the matter abounds. 5. The objectives of the study The above-mentioned background, with its several open issues, calls for a detailed study of the following questions: - What is a database-at-law and when is a database protected by intellectual property rights, particularly by the European database regime? What is the international situation? - How is a database protected and what is its relation to other intellectual property regimes, particularly in the digital context? - What opportunities and threats does the current protection provide to creators, users and society as a whole, including the commercial and cultural implications? - The difficult question of the relation between database protection and the protection of factual information as such. 6. Disposition The study, in purporting to analyse and cast light on the questions above, is divided into three main parts. The first part has the purpose of introducing the political and rational background and the subsequent legislative evolution of European database protection, reflected against the international backdrop on the issue. An introduction to databases, originally a vehicle of modern computing and information and communication technology, is also incorporated. The second part sets out the chosen and existing two-tier model of database protection, reviewing both its copyright and Sui Generis right facets in detail, together with the emergent application of the machinery in real-life societal and particularly commercial contexts. Furthermore, a general outline of copyright, relevant in the context of copyright databases, is provided. For purposes of further comparison, a chapter on the Nordic catalogue rule, the precursor of the Sui Generis database right, also ensues. The third and final part analyses the positive and negative impact of the database protection system and attempts to scrutinize its implications further into the future, with some caveats and tentative recommendations, in particular as regards the convoluted issue concerning the IPR protection of information per se, a new tenet in the domain of copyright and related rights.
Abstract:
This thesis examines the coordination of the systems development process in a contemporary software producing organization. The thesis consists of a series of empirical studies in which the actions, conceptions and artifacts of practitioners are analyzed using a theory-building case study research approach. The three phases of the thesis provide empirical observations on different aspects of systems development. The first phase examines the role of architecture in coordination and cost estimation in a multi-site environment. The second phase involves two studies on the evolving requirement understanding process and how to measure this process. The third phase summarizes the first two phases and concentrates on the role of methods and how practitioners work with them. All the phases provide evidence that current systems development method approaches are too naïve in how they address the complexity of the real world. In practice, development is influenced by opportunity and other contingent factors. The systems development process is not coordinated using the phases and tasks defined in methods as a universal mechanism for managing the process, as most method approaches assume. Instead, the studies suggest that managing the systems development process happens through coordinating development activities, using methods as tools. These studies contribute to systems development methods by emphasizing the support of communication and collaboration between systems development participants. Methods should not describe the development activities and phases at a detailed level, but should include higher-level guidance for practitioners on how to act in different systems development environments.
Abstract:
Technological progress has made a huge amount of data available at increasing spatial and spectral resolutions. Therefore, the compression of hyperspectral data is an area of active research. In some fields, the original quality of a hyperspectral image cannot be compromised, and in these cases lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction- and transform-based methods are studied. Two kinds of prediction-based methods are studied. In the first method the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster. In the second prediction method the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of the above-mentioned linear prediction method is also presented. Two transform-based methods are presented as well. Vector Quantization (VQ) was used together with a new coding of the residual image. In addition, a new back end was developed for a compression method utilizing Principal Component Analysis (PCA) and the Integer Wavelet Transform (IWT). The performance of the proposed compression methods is compared to that of other compression methods. The results show that the proposed linear prediction methods outperform the previous methods. In addition, a novel fast exact nearest-neighbor search method is developed. The search method is used to speed up the Linde-Buzo-Gray (LBG) clustering method.
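To illustrate the prediction-based idea, the sketch below applies generic least-squares inter-band linear prediction to a small synthetic cube, leaving only integer residuals to be entropy coded. It is a simplified stand-in: the clustering of spectra and the per-pixel coefficient recalculation developed in the thesis are not reproduced here, and the data are synthetic.

```python
# Minimal sketch: lossless inter-band linear prediction for a hyperspectral cube.
# Each band b is predicted from the previous band with least-squares coefficients,
# and only the integer residual needs to be entropy coded. A generic illustration,
# not the clustered or per-pixel predictors developed in the thesis.
import numpy as np

def predict_bands(cube):
    """cube: int array of shape (bands, rows, cols). Returns integer residuals."""
    bands, rows, cols = cube.shape
    residuals = np.empty_like(cube)
    residuals[0] = cube[0]                               # first band stored as-is
    for b in range(1, bands):
        x = cube[b - 1].astype(np.float64).ravel()
        y = cube[b].astype(np.float64).ravel()
        A = np.stack([x, np.ones_like(x)], axis=1)
        (a, c), *_ = np.linalg.lstsq(A, y, rcond=None)   # fit y ~ a*x + c
        pred = np.rint(a * cube[b - 1] + c).astype(cube.dtype)
        residuals[b] = cube[b] - pred                    # small residuals compress well
    return residuals

# Synthetic 12-bit cube with correlated bands (placeholder data)
base = np.random.randint(0, 4096, size=(32, 32))
cube = np.stack([(base * (1.0 + 0.05 * b) + np.random.randint(0, 8, base.shape)).astype(np.int32)
                 for b in range(5)])
res = predict_bands(cube)
print("mean |residual| per band:", np.abs(res).mean(axis=(1, 2)))
```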