56 results for economical estimation
Abstract:
Woven monofilament, multifilament, and spun yarn filter media have long been the standard media in liquid filtration equipment. While the energy for a solid-liquid separation process is determined by the engineering work, it is the interface between the slurry and the equipment - the filter media - that greatly affects the performance characteristics of the unit operation. Those skilled in the art are well aware that a poorly designed filter medium may endanger the whole operation, whereas well-performing filter media can make the operation smooth and economical. As mineral and pulp producers seek to produce ever finer and more refined fractions of their products, it is becoming increasingly important to be able to dewater slurries with average particle sizes around 1 µm using conventional, high-capacity filtration equipment. Furthermore, the surface properties of the media must not allow sticky and adhesive particles to adhere to the media. The aim of this thesis was to test how the dirt-repellency, electrical resistance and high-pressure filtration performance of selected woven filter media can be improved by modifying the fabric or yarn with coating, chemical treatment and calendering. The results achieved by chemical surface treatments clearly show that the woven media surface properties can be modified to achieve lower electrical resistance and improved dirt-repellency. The main challenge with the chemical treatments is abrasion resistance and, while the experimental results indicate that the treatment is sufficiently permanent to resist standard weathering conditions, it may still prove to be inadequately strong in actual use. From the pressure filtration studies in this work, it seems obvious that the conventional woven multifilament fabrics still perform surprisingly well against the coated media in terms of filtrate clarity and cake build-up. Especially in cases where the feed slurry concentration was low and the pressures moderate, the conventional media seemed to outperform the coated media. In the cases where the feed slurry concentration was high, the tightly woven media performed well against the monofilament reference fabrics, but seemed to do worse than some of the coated media. This result is somewhat surprising in that the high initial specific resistance of the coated media would suggest that the media will blind more easily than the plain woven media. The results indicate, however, that it is actually the woven media that gradually clogs during the course of filtration. In conclusion, it seems obvious that there is a pressure limit above which the woven media loses its capacity to keep the solid particles from penetrating the structure. This finding suggests that for extreme pressures the only foreseeable solution is coated fabrics supported by a woven fabric strong enough to hold the structure together. Having said that, the high-pressure filtration process seems to follow somewhat different laws than the more conventional processes. Based on the results, it may well be that the role of the cloth is above all to support the cake, and the main performance-determining factor is a long lifetime. Measuring the pore size distribution with a commercially available porometer gives a fairly accurate picture of the pore size distribution of a fabric, but fails to give insight into which of the pore sizes is the most important in determining the flow through the fabric.
Historically, air and sometimes water permeability measures have been the standard in evaluating media filtration performance, including particle retention. Permeability, however, is a function of a multitude of variables and does not directly allow the estimation of the effective pore size. In this study a new method for estimating the effective pore size and open pore area in a densely woven multifilament fabric was developed. The method combines a simplified equation for the electrical resistance of a fabric with the Hagen-Poiseuille flow equation to estimate the effective pore size of a fabric and the total open area of pores. The results are validated by comparison to the measured values of the largest pore size (bubble point) and the average pore size. The results show good correlation with the measured values. However, the measured and estimated values tend to diverge in high weft density fabrics. This phenomenon is thought to be a result of the more tortuous flow path of denser fabrics, and could most probably be remedied by using another value for the tortuosity factor.
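The thesis's own electrical-resistance equation is not reproduced here, but the Hagen-Poiseuille side of such an estimate is easy to illustrate. Below is a minimal sketch, assuming flow through n identical parallel capillary pores whose length is the fabric thickness scaled by a tortuosity factor; the pore count, tortuosity and all numerical values are hypothetical.

```python
import math

def effective_pore_radius(Q, dP, mu, L, n_pores, tau=1.0):
    """Effective pore radius from Hagen-Poiseuille flow through
    n parallel capillaries of length L*tau (tau = tortuosity factor).

    Q       : total volumetric flow rate through the fabric [m^3/s]
    dP      : pressure difference across the fabric [Pa]
    mu      : dynamic viscosity of the fluid [Pa*s]
    L       : fabric thickness [m]
    n_pores : assumed number of parallel pores
    """
    # Per-pore flow: q = pi * r^4 * dP / (8 * mu * L * tau)
    r4 = 8.0 * mu * L * tau * Q / (n_pores * math.pi * dP)
    return r4 ** 0.25

def open_pore_area(r, n_pores):
    """Total open area of n circular pores of radius r [m^2]."""
    return n_pores * math.pi * r ** 2

# Example: water (mu = 1e-3 Pa*s) through a 0.5 mm thick fabric
r = effective_pore_radius(Q=1e-6, dP=2e4, mu=1e-3, L=5e-4, n_pores=1e6)
print(f"effective pore radius: {r * 1e6:.2f} um")
print(f"open pore area: {open_pore_area(r, 1e6) * 1e6:.1f} mm^2")
```

As the abstract itself suggests, the tortuosity factor tau is the natural parameter to adjust when reconciling such estimates with measurements on high weft density fabrics.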
Abstract:
In this thesis membrane filtration of paper machine clear filtrate was studied. The aim of the study was to find membrane processes which are able to produce, economically, water of sufficient purity from paper machine white water or its save-all clarified fractions for reuse in the paper machine short circulation. Factors affecting membrane fouling in this application were also studied. The thesis gives an overview of experiments done on a laboratory and a pilot scale with several different membranes and membrane modules. The results were judged by the obtained flux, the fouling tendency and the permeate quality assessed with various chemical analyses. It was shown that membrane modules which used a turbulence promoter of some kind gave the highest fluxes. However, the results showed that the greater the reduction in the concentration polarisation layer caused by increased turbulence in the module, the smaller the reductions in measured substances. Of the micro-, ultra- and nanofiltration membranes tested, only nanofiltration membranes produced permeate whose quality was very close to that of the chemically treated raw water used as fresh water in most paper mills today, and which should thus be well suited for reuse as shower water in both the wire and press sections. It was also shown that a one-stage nanofiltration process was more effective than processes in which micro- or ultrafiltration was used as pretreatment for nanofiltration. It was generally observed that acidic pH, high organic matter content, the presence of multivalent ions, hydrophobic membrane material and a high membrane cut-off increased the fouling tendency of the membranes.
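For reference, the two quantities the results were judged by, flux and permeate quality, reduce to simple standard definitions. The sketch below shows permeate flux in litres per square metre per hour (LMH) and observed rejection; the figures are hypothetical, not results from the thesis.

```python
def permeate_flux(volume_l, area_m2, time_h):
    """Permeate flux in l/(m^2*h) (LMH) from the collected volume."""
    return volume_l / (area_m2 * time_h)

def observed_rejection(c_permeate, c_feed):
    """Observed rejection R = 1 - Cp/Cf (0 = no retention, 1 = full)."""
    return 1.0 - c_permeate / c_feed

# Example: a hypothetical nanofiltration run on white water
print(permeate_flux(volume_l=12.0, area_m2=0.5, time_h=2.0))  # 12 LMH
print(observed_rejection(c_permeate=35.0, c_feed=420.0))      # ~0.92
```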
Abstract:
Gas-liquid mass transfer is an important issue in the design and operation of many chemical unit operations. Despite its importance, the evaluation of gas-liquid mass transfer is not straightforward due to the complex nature of the phenomena involved. In this thesis gas-liquid mass transfer was evaluated in three different gas-liquid reactors in a traditional way by measuring the volumetric mass transfer coefficient (kLa). The studied reactors were a bubble column with a T-junction two-phase nozzle for gas dispersion, an industrial scale bubble column reactor for the oxidation of tetrahydroanthrahydroquinone, and a concurrent downflow structured bed. The main drawback of this approach is that the obtained correlations give only the average volumetric mass transfer coefficient, which depends on average conditions. Moreover, the obtained correlations are valid only for the studied geometry and for the chemical system used in the measurements. In principle, a more fundamental approach is to estimate the interfacial area available for mass transfer from bubble size distributions obtained by solution of population balance equations. This approach has been used in this thesis by developing a population balance model for a bubble column together with phenomenological models for bubble breakage and coalescence. The parameters of the bubble breakage rate and coalescence rate models were estimated by comparing the measured and calculated bubble sizes. The coalescence models always have at least one experimental parameter, because bubble coalescence depends on liquid composition in a way which is difficult to evaluate using known physical properties. The coalescence properties of some model solutions were evaluated by measuring the time that a bubble rests at the free liquid-gas interface before coalescing (the so-called persistence time or rest time). The measured persistence times range from 10 ms up to 15 s depending on the solution. The coalescence was never found to be instantaneous; the bubble oscillates up and down at the interface at least a couple of times before coalescence takes place. The measured persistence times were compared to coalescence times obtained by parameter fitting using measured bubble size distributions in a bubble column and a bubble column population balance model. For short persistence times, the persistence and coalescence times are in good agreement. For longer persistence times, however, the persistence times are at least an order of magnitude longer than the corresponding coalescence times from parameter fitting. This discrepancy may be attributed to the uncertainties concerning the estimation of energy dissipation rates, collision rates and mechanisms, and contact times of the bubbles.
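As a hedged illustration of how kLa is commonly measured, the sketch below fits the classic dynamic gassing-in response C(t) = C* - (C* - C0)*exp(-kLa*t) to a dissolved-oxygen trace. The abstract does not state which measurement technique the thesis used, and the data points here are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gassing_in(t, kla, c_star, c0):
    """Dissolved-gas response of the dynamic gassing-in method:
    dC/dt = kLa*(C* - C)  =>  C(t) = C* - (C* - C0)*exp(-kLa*t)."""
    return c_star - (c_star - c0) * np.exp(-kla * t)

# Hypothetical dissolved-oxygen trace (time in s, concentration in mg/l)
t = np.array([0, 10, 20, 40, 60, 90, 120, 180])
c = np.array([0.5, 2.9, 4.6, 6.6, 7.5, 8.1, 8.3, 8.4])

popt, _ = curve_fit(gassing_in, t, c, p0=[0.02, 8.5, 0.5])
kla, c_star, c0 = popt
print(f"kLa = {kla:.4f} 1/s  (saturation {c_star:.2f} mg/l)")
```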
Abstract:
Software engineering is criticized as not being engineering or a 'well-developed' science at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, and whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to only collect metrics afterwards; the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes. In this thesis only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis, but avoids the problems and pitfalls found in that method. It is relatively easy to use and learn. Effort estimation accuracy has improved significantly after taking this model into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement. The author of this thesis has developed a three-level solution for the estimation model. All currently used size metrics are static in nature, but the newly proposed metric is dynamic: it makes use of the increased understanding of the nature of the work as specification and design work proceeds, and thus 'grows up' along with the software project. Developing an effort estimation model is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used in a proper way, that estimates are stored, reported and analyzed properly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also introduced briefly. The purpose of the framework is to define and maintain a measurement or estimation process. Without a proper framework, the estimation capability of an organization declines; it requires effort even to maintain an achieved level of estimation accuracy. Estimation results in several successive releases are analyzed. It is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity. An example is shown to shed more light on the calibration and on the model itself. There are also remarks about the sensitivity of the model. Finally, an example of usage is shown.
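The thesis's hierarchical model itself is not reproduced here, but the calibration idea, fitting an estimation model to history data and checking its relative error, can be sketched with a generic power-law model (effort = a * size^b), a common form in effort estimation literature. The history data below are hypothetical.

```python
import numpy as np

# Hypothetical history data: size in function points, actual effort
# in person-hours, from completed projects.
size   = np.array([120, 250, 400, 650, 900])
effort = np.array([800, 1900, 3300, 5800, 8600])

# Calibrate effort = a * size^b by least squares in log-log space.
b, log_a = np.polyfit(np.log(size), np.log(effort), 1)
a = np.exp(log_a)
print(f"effort = {a:.2f} * FP^{b:.2f}")

# Estimate a new 300 FP project and report relative error on history.
print(f"estimate for 300 FP: {a * 300**b:.0f} person-hours")
mre = np.abs(effort - a * size**b) / effort  # magnitude of relative error
print(f"mean MRE on history data: {mre.mean():.1%}")
```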
Abstract:
The aim of this Master's thesis is to determine the economic feasibility of five new combined heat and power production processes in Finland and Central Europe. The fuel used is natural gas, and the power range is 1 kW – 1 MW. The first part of the thesis describes the technical characteristics of these processes and their current use. The final part calculates the economics of each process in five different countries (Finland, the Netherlands, Belgium, Great Britain and Germany) and in five different scenarios (detached houses, terraced houses, apartment buildings, small industry and medium-sized industry). As a second topic, the thesis compares compressor and absorption chillers with each other and discusses the production of district cooling. The result of the work is an assessment of how economically viable these processes currently are compared with conventional methods of electricity and heat production.
Abstract:
The aim of this work was to develop the project cost estimation process of the engineering unit under study, so that in the future the unit's management would have more accurate cost information at its disposal. To make this possible, the unit's working methods, the cost structures of its projects and the cost attributes first had to be determined. This was made possible by studying the cost history data of past projects and by interviewing experts. The result of the work was a cost estimation process and model compatible with the target unit's other processes. The cost estimation method and model are based on cost attributes, which are defined separately for the environment under study. The cost attributes are found by studying history data, i.e. by analyzing completed projects, their cost structures and the factors that have influenced the costs. After this, weights and ranges of variation for the weights must be defined for the cost attributes. The accuracy of the estimation model can be improved by calibrating the model. I have used the Goal – Question – Metric (GQM) method as the framework of the study.
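A minimal sketch of what a weighted cost-attribute model of this general shape might look like is shown below; the attribute names, weights and baseline are purely hypothetical stand-ins for values that, in the thesis, are derived from history data and expert interviews and then calibrated.

```python
# Hypothetical cost attributes and calibrated weights (illustrative only).
BASELINE_COST = 100_000  # e.g. average cost of similar past projects, EUR

COST_ATTRIBUTES = {          # attribute -> weight per score point
    "technical_novelty": 0.15,
    "team_experience": -0.10,
    "schedule_pressure": 0.08,
}

def estimate_cost(scores, baseline=BASELINE_COST):
    """Multiplicative weighted-attribute estimate:
    cost = baseline * prod(1 + weight_i * score_i)."""
    factor = 1.0
    for name, score in scores.items():
        factor *= 1.0 + COST_ATTRIBUTES[name] * score
    return baseline * factor

# Scores on an agreed scale, e.g. -2 .. +2 relative to a typical project
print(estimate_cost({"technical_novelty": 2,
                     "team_experience": 1,
                     "schedule_pressure": 0}))  # ~117000
```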
Abstract:
In mathematical modeling the estimation of the model parameters is one of the most common problems. The goal is to seek parameters that fit the measurements as well as possible. There is always error in the measurements, which implies uncertainty in the model estimates. In Bayesian statistics all the unknown quantities are represented as probability distributions. If there is knowledge about the parameters beforehand, it can be formulated as a prior distribution. Bayes' rule combines the prior and the measurements into a posterior distribution. Mathematical models are typically nonlinear, so producing statistics for them requires efficient sampling algorithms. In this thesis the Metropolis-Hastings (MH) and Adaptive Metropolis (AM) algorithms as well as Gibbs sampling are introduced. Different ways to present prior distributions are also introduced. The main issue is measurement error estimation and how to obtain prior knowledge of the variance or covariance. Variance and covariance sampling is combined with the algorithms above. Examples of the hyperprior models are applied to the estimation of model parameters and error in a case with outliers.
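As an illustration of the sampling machinery, below is a minimal random-walk Metropolis-Hastings sketch applied to a toy posterior (a normal prior on a mean with normal measurements); the Adaptive Metropolis variant additionally tunes the proposal covariance from the chain history. The example data and step size are arbitrary.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings with a Gaussian proposal.

    log_post : function returning the unnormalized log posterior
    x0       : initial parameter vector
    step     : proposal standard deviation
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy posterior: N(0, 10^2) prior on the mean mu, N(mu, 1) measurements
data = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
log_post = lambda mu: (-0.5 * (mu[0] / 10) ** 2
                       - 0.5 * np.sum((data - mu[0]) ** 2))
chain = metropolis_hastings(log_post, x0=[0.0], n_samples=5000)
print(f"posterior mean ~ {chain[1000:].mean():.3f}")  # discard burn-in
```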
Abstract:
The paper focuses on a feasibility study and market review of small scale bioenergy heating plants in the Russian North-West region. The main focus is the effective and competitive use of low-grade wood for heating purposes in the region. As an example of economic feasibility estimation, the project of reconstructing a small scale boiler plant in the Leningrad region, which Brofta Oy is planning to implement in the near future, was chosen. The study includes calculation of the payback time with and without interest, estimation of the probable investments, evaluation of possible risks, and research on the potential of small scale heating plant projects. Calculations show that the profitability of this kind of project is high, but the payback time is not very short because of the high level of initial investments. Nevertheless, the development of small scale bioenergy heating plants in the region is considered to be the best way to solve the problems of heat supply in small settlements using the region's own biomass resources.
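The payback calculations mentioned above, with and without interest, can be sketched as follows; the investment, savings and interest rate are hypothetical placeholders, not figures from the Brofta Oy project.

```python
def payback_years(investment, annual_saving):
    """Simple payback time: no interest."""
    return investment / annual_saving

def discounted_payback_years(investment, annual_saving, rate):
    """Payback with interest: whole years until the discounted savings
    cover the initial investment (None if they never do)."""
    cumulative, year = 0.0, 0
    while cumulative < investment:
        year += 1
        cumulative += annual_saving / (1 + rate) ** year
        if year > 100:
            return None  # savings never cover the investment
    return year

# Hypothetical boiler plant conversion: 500 kEUR investment,
# 90 kEUR/a fuel cost savings, 8 % interest rate
print(payback_years(500, 90))                   # ~5.6 years
print(discounted_payback_years(500, 90, 0.08))  # 8 years
```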
Abstract:
Cost estimation is an important but challenging process when designing a new product or a feature of it, verifying the product prices given by suppliers, or planning cost-saving actions for existing products. It is even more challenging when the product is highly modular rather than a bulk product. In general, cost estimation techniques can be divided into two main groups - qualitative and quantitative techniques - which can further be classified into more detailed methods. Generally, qualitative techniques are preferable when comparing alternatives, and quantitative techniques when cost relationships can be found. The main objective of this thesis was to develop a method for estimating the costs of internally manufactured and commercial elevator landing doors. Because of the challenging product structure, the proposed cost estimation framework is developed on three different levels, based on the past cost information available. The framework consists of features from both qualitative and quantitative cost estimation techniques. The starting point for the whole cost estimation process is an unambiguous, hierarchical product structure, so that the product can be classified into controllable parts and is then easier to handle. Those controllable parts can then be compared to existing past cost knowledge of similar parts, and in this way cost estimates that are as accurate as possible can be created.
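A minimal sketch of the analogy idea, breaking the product into controllable parts and scaling the known costs of similar past parts, is shown below. The part names, costs and adjustment factors are hypothetical, and the thesis's three-level framework is not reproduced.

```python
# Hypothetical hierarchical product structure for a landing door:
# each part maps to the cost of the closest known similar part and an
# expert adjustment factor for the differences. All figures invented.
landing_door = {
    "door panel": {"similar_part_cost": 180.0, "adjustment": 1.10},
    "frame":      {"similar_part_cost": 95.0,  "adjustment": 1.00},
    "drive unit": {"similar_part_cost": 240.0, "adjustment": 0.90},
    "sill":       {"similar_part_cost": 60.0,  "adjustment": 1.25},
}

def estimate_product_cost(structure):
    """Sum analogy-based part estimates over the product structure."""
    return sum(p["similar_part_cost"] * p["adjustment"]
               for p in structure.values())

print(f"estimated cost: {estimate_product_cost(landing_door):.2f} EUR")
```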
Abstract:
The target of the thesis was to find out whether the decision to outsource part of the Filtronic LK warehouse function has been profitable. A further target was to describe the current logistics processes between the third-party logistics provider (TPLP) and the company, and to identify targets for developing these processes. The decision to outsource part of the logistical functions has been profitable during the first business year. A partnership always includes business risks, and high asset-specific investments increase the risk. On the other hand, investment in the partnership increases mutual trust and commitment between the parties. By developing the partnership, risks and opportunistic behaviour can be decreased. The potential of managing material and data flows between the logistics service provider and the company was identified. The analysis of inventory efficiency highlighted the need to decrease the capital invested in inventories. Recommendations for managing the outsourced logistical functions were established, such as improving the partnership, process development, performance measurement and invoice checking.
Abstract:
Sensor-based robot control allows manipulation in dynamic environments with uncertainties. Vision is a versatile, low-cost sensory modality, but its low sample rate, high sensor delay and uncertain measurements limit its usability, especially in strongly dynamic environments. Force is a complementary sensory modality that allows accurate measurement of local object shape when a tooltip is in contact with the object. In multimodal sensor fusion, several sensors measuring different modalities are combined to give a more accurate estimate of the environment. As force and vision are fundamentally different sensory modalities that do not share a common representation, combining the information from these sensors is not straightforward. In this thesis, methods for fusing proprioception, force and vision together are proposed. By making assumptions about the object shape and modeling the uncertainties of the sensors, the measurements can be fused in an extended Kalman filter. The fusion of force and visual measurements makes it possible to estimate the pose of a moving target with a moving end-effector-mounted camera at a high rate and with high accuracy. The proposed approach takes the latency of the vision system into account explicitly in order to provide high sample rate estimates. The estimates also allow a smooth transition from vision-based motion control to force control. The velocity of the end-effector can be controlled by estimating the distance to the target by vision and determining a velocity profile that gives a rapid approach and minimal force overshoot. Experiments with a 5-degree-of-freedom parallel hydraulic manipulator and a 6-degree-of-freedom serial manipulator show that integrating several sensor modalities can significantly increase the accuracy of the measurements.
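The full extended Kalman filter with shape assumptions and latency compensation is beyond a short sketch, but the core intuition, fusing a noisy low-rate vision measurement with a precise force/proprioception measurement weighted by their variances, is captured by the scalar Kalman measurement update below; all numbers are hypothetical.

```python
def kf_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse estimate (x, P) with a
    measurement z of variance R; returns the new estimate and variance."""
    K = P / (P + R)  # Kalman gain: how much to trust the new measurement
    return x + K * (z - x), (1 - K) * P

# Hypothetical 1-D target position estimate
x, P = 0.0, 1e3  # vague initial estimate

# Vision: low rate and noisy, e.g. sigma = 5 mm (R = 25 mm^2)
x, P = kf_update(x, P, z=102.0, R=25.0)
# Force/proprioception: tooltip contact gives a precise local
# measurement, e.g. sigma = 0.5 mm (R = 0.25 mm^2)
x, P = kf_update(x, P, z=100.4, R=0.25)

print(f"fused position: {x:.2f} mm (variance {P:.3f} mm^2)")
```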
Abstract:
In the current economic situation companies try to reduce their expenses. One of the solutions is to improve the energy efficiency of processes. It is known that the energy consumption of pumping applications ranges from 20 up to 50% of the energy usage in certain industrial plant operations. Some studies have shown that 30% to 50% of the energy consumed by pump systems could be saved by changing the pump or the flow control method. The aim of this thesis is to create a mobile measurement system that can calculate the operating point of a pump drive. This information can be used to determine the efficiency of the pump drive operation and to develop a solution that brings the pump's efficiency to the maximum possible value, which can allow a great reduction in the pump drive's life cycle cost. In the first part of the thesis, a brief introduction to the details of pump drive operation is given, the methods that can be used in the project are presented, and a review of the available platforms for implementing the project is given. In the second part of the thesis, the components of the project are presented, with a detailed description of each created component. Finally, the results of laboratory tests are presented, compared and analyzed. In addition, the operation of the created system is analyzed and suggestions for future development are given.
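One common way such a measurement system can locate the operating point without a flow meter is the QP-curve method: the measured shaft power and rotational speed are mapped onto the nominal-speed pump curves with the affinity laws (Q ~ n, H ~ n^2, P ~ n^3). The sketch below assumes this method; the datasheet curves and measurements are hypothetical, and the thesis's actual implementation may differ.

```python
import numpy as np

# Hypothetical nominal-speed (1450 rpm) pump curves from a datasheet:
# flow [m^3/h] vs. head [m] and shaft power [kW]
Q_N = np.array([0, 10, 20, 30, 40, 50])
H_N = np.array([32, 31, 29, 26, 22, 17])
P_N = np.array([2.0, 2.8, 3.6, 4.3, 4.9, 5.4])
N_NOM = 1450.0

def operating_point(power_kw, speed_rpm):
    """Estimate (flow, head) from measured shaft power and speed by
    mapping the measurement onto the nominal-speed power curve with
    the affinity laws, then scaling back to the actual speed."""
    k = N_NOM / speed_rpm
    p_eq = power_kw * k ** 3            # equivalent power at N_NOM
    q_eq = np.interp(p_eq, P_N, Q_N)    # flow on the nominal curve
    h_eq = np.interp(q_eq, Q_N, H_N)    # head on the nominal curve
    return q_eq / k, h_eq / k ** 2

q, h = operating_point(power_kw=3.1, speed_rpm=1300)
print(f"estimated operating point: {q:.1f} m^3/h at {h:.1f} m head")
```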