28 results for lower estimate

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance: 20.00%

Abstract:

The aim of this work was to examine the combustibility of the reject and waste streams of a paper mill with mechanical pulp production if the degree of closure of the mill's water circulations is increased. To assess the state of the process after closure, the current PK3 process at the Anjala paper mill was studied from the debarking plant to the wastewater treatment plant. The literature part dealt with the origin of reject and waste streams in a paper mill using mechanical pulp. Present-day wastewater treatment processes and the process water purification techniques feasible in a more closed water circulation were also briefly presented. In addition, the combustion technologies currently used in the forest industry and the characterization of fuels with respect to boiler operability and emissions were reviewed. At Anjala PK3, either both peroxide- and dithionite-bleached or only dithionite-bleached groundwood is used, depending on the grade in production. The wastewater, sludge and other waste streams generated in the PK3 process were determined under both bleaching conditions. The wastewater fractions containing the most dissolved organic matter, namely the hot-circulation and clear-filtrate discharges of grinding plant 3 and the bark press filtrate, were selected as the streams to be purified when evaluating the closure of the process. When peroxide bleaching was used at grinding plant 3, the TOC load to the river was 30% higher than with dithionite bleaching alone. If the degree of process closure were increased, the TOC load would be 30% lower than it is today with peroxide bleaching (assuming 80% purification efficiency). With increased closure, about 30% less biosludge would be formed compared with the current situation, because less of the organic matter used by the microbes as nutrition would end up in the wastewater. The effect of peroxide bleaching at grinding plant 3 on boiler operability and emissions was small, since biosludge accounted for only 4% of the fuel feed. Only part of the biosludge was formed in removing the organic matter originating from grinding plant 3. If the share of the current main fuel, PDF, is disregarded, the SO2 and NOx emissions and the sintering tendency of the fluidized bed are slightly higher when peroxide bleaching is used at grinding plant 3 than with dithionite bleaching alone. If the concentrates from the purification of the bark press and grinding plant filtrates are led to a BFB-type boiler for combustion, sintering of the fluidized bed would be the biggest problem. Heavy metal, SO2 and NOx emissions would also increase significantly compared with the present situation. By contrast, the corrosion risk of the boiler would hardly increase. In addition, the moisture content of the concentrates would be high, which would make combustion unprofitable, as evaporating the water requires a great deal of energy. More detailed research is still needed on the effects of process closure on emissions and boiler operability. Other disposal options for the concentrates should also be studied further.

Relevance: 20.00%

Abstract:

Chlorine disinfection of swimming pool water produces volatile and harmful halogenated by-products, such as trihalomethanes and trichloramine, which can significantly impair the indoor air quality of the pool hall. The aim of this study was to survey the concentrations of these disinfection by-products in Finnish swimming halls and to investigate the transport of the impurities in the pool halls. In addition, the most significant water quality and treatment parameters and ventilation-related factors affecting the concentrations of impurities evaporating from the water into the hall air were sought. Measurements were carried out in ten swimming halls located in different parts of Finland. The measurements showed that the chloroform concentrations in the pool halls varied between 8.9 and 84.0 µg/m³. The concentrations measured in therapy pool sections were higher than those measured in the main pool halls, and the concentrations measured in the morning were lower than those measured in the evening. Furthermore, the concentrations measured in air-conditioned control rooms were significantly lower than those in the pool halls. Most of the trichloramine samples were below the limit of quantification. The chloroform concentration of the indoor air was found to correlate with the water temperature and the air humidity. No statistical analysis could be performed for trichloramine, mainly because of the large share of samples below the limit of quantification. Based on technical questionnaires and ventilation measurements, it was concluded that, owing to the heating demand, the ventilation of swimming halls acts as mixing ventilation, and the spread of impurities in the pool hall cannot in practice be prevented. Air velocities at the edges of the main pools were low, and the air flow fields were not found to affect the transport of impurities in the pool halls. Owing to the small number of measurement sites and the hall-specific characteristics of the water treatment, no reliable conclusions could be drawn about the effect of the water treatment methods on the quality of the pool water or the pool hall air.

Relevance: 20.00%

Abstract:

Woven monofilament, multifilament, and spun yarn filter media have long been the standard media in liquid filtration equipment. While the energy for a solid-liquid separation process is determined by the engineering work, it is the interface between the slurry and the equipment, the filter medium, that greatly affects the performance characteristics of the unit operation. Those skilled in the art are well aware that a poorly designed filter medium may endanger the whole operation, whereas a well-performing filter medium can make the operation smooth and economical. As mineral and pulp producers seek to produce ever finer and more refined fractions of their products, it is becoming increasingly important to be able to dewater slurries with average particle sizes around 1 µm using conventional, high-capacity filtration equipment. Furthermore, the surface properties of the media must not allow sticky and adhesive particles to adhere to the media. The aim of this thesis was to test how the dirt-repellency, electrical resistance and high-pressure filtration performance of selected woven filter media can be improved by modifying the fabric or yarn with coating, chemical treatment and calendering. The results achieved by chemical surface treatments clearly show that the surface properties of woven media can be modified to achieve lower electrical resistance and improved dirt-repellency. The main challenge with the chemical treatments is abrasion resistance: while the experimental results indicate that the treatment is sufficiently permanent to resist standard weathering conditions, it may still prove inadequately durable in actual use. From the pressure filtration studies in this work, it seems obvious that conventional woven multifilament fabrics still perform surprisingly well against the coated media in terms of filtrate clarity and cake build-up. Especially in cases where the feed slurry concentration was low and the pressures moderate, the conventional media seemed to outperform the coated media. In the cases where the feed slurry concentration was high, the tightly woven media performed well against the monofilament reference fabrics, but seemed to do worse than some of the coated media. This result is somewhat surprising in that the high initial specific resistance of the coated media would suggest that they blind more easily than the plain woven media. The results indicate, however, that it is actually the woven media that gradually clog during the course of filtration. In conclusion, it seems obvious that there is a pressure limit above which a woven medium loses its capacity to keep the solid particles from penetrating the structure. This finding suggests that for extreme pressures the only foreseeable solution is a coated fabric supported by a woven fabric strong enough to hold the structure together. That said, the high-pressure filtration process seems to follow somewhat different laws than the more conventional processes. Based on the results, it may well be that the role of the cloth is above all to support the cake, and the main performance-determining factor is a long lifetime. Measuring the pore size distribution with a commercially available porometer gives a fairly accurate picture of the pore size distribution of a fabric, but fails to give insight into which of the pore sizes is the most important in determining the flow through the fabric.
Historically, air permeability, and sometimes water permeability, has been the standard measure for evaluating media filtration performance, including particle retention. Permeability, however, is a function of a multitude of variables and does not directly allow the estimation of the effective pore size. In this study, a new method for estimating the effective pore size and open pore area in a densely woven multifilament fabric was developed. The method combines a simplified equation for the electrical resistance of the fabric with the Hagen-Poiseuille flow equation to estimate the effective pore size of a fabric and the total open area of pores. The results are validated by comparison with the measured values of the largest pore size (bubble point) and the average pore size. The results show good correlation with measured values. However, the measured and estimated values tend to diverge in high weft density fabrics. This phenomenon is thought to result from the more tortuous flow path of denser fabrics, and could most probably be remedied by using another value for the tortuosity factor.
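The combination described above can be made concrete with a rough sketch. The following Python snippet is not the author's implementation; it assumes the fabric behaves as a bundle of straight cylindrical pores of length equal to the fabric thickness (tortuosity neglected), uses R = rho*L/A to obtain the total open area from a resistance measurement, and then inverts the Hagen-Poiseuille equation for the effective pore radius. All numbers are made up for illustration.

import math

def effective_pore_size(R_fabric_ohm, resistivity_ohm_m, thickness_m,
                        flow_m3_s, pressure_drop_pa, viscosity_pa_s):
    # Electrical step (simplified): R = rho * L / A_open  =>  A_open = rho * L / R
    A_open = resistivity_ohm_m * thickness_m / R_fabric_ohm
    # Hagen-Poiseuille for parallel pores: Q = A_open * r**2 * dP / (8 * mu * L)
    r_eff = math.sqrt(8 * viscosity_pa_s * thickness_m * flow_m3_s
                      / (A_open * pressure_drop_pa))
    return A_open, r_eff

# Illustrative values: 0.5 mm thick fabric, water at room temperature
A, r = effective_pore_size(R_fabric_ohm=12.0, resistivity_ohm_m=0.2,
                           thickness_m=0.5e-3, flow_m3_s=1.0e-6,
                           pressure_drop_pa=1.0e4, viscosity_pa_s=1.0e-3)
print(f"open area = {A*1e6:.2f} mm^2, effective pore diameter = {2*r*1e6:.1f} um")

As the abstract notes, a tortuosity correction would be needed for high weft density fabrics; in this simplified picture it would enter the Hagen-Poiseuille step as an effective pore length longer than the fabric thickness.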

Relevance: 20.00%

Abstract:

Value chain collaboration has been a prevailing topic for research, and there is a constantly growing interest in developing collaborative models for improved efficiency in logistics. One area of collaboration is demand information management, which enables improved visibility and lower inventories in the value chain. Outsourcing of non-core competencies has changed the nature of collaboration from an intra-enterprise to a cross-enterprise activity, and this, together with increasing competition in globalizing markets, has created a need for methods and tools for collaborative work. The retailer end of the consumer packaged goods (CPG) value chain has been studied relatively widely, proven models have been defined, and several best-practice collaboration cases exist. Information and communications technology has developed rapidly, offering efficient solutions and applications for exchanging information between value chain partners. However, the majority of the CPG industry still works with traditional business models and practices. This concerns especially companies operating in the upstream of the CPG value chain. Demand information for consumer packaged goods originates at retailers' counters, based on consumers' buying decisions. As this information does not get transferred along the value chain towards the upstream parties, each player needs to optimize its own part, which leads to safety margins in inventories and speculation in purchasing decisions. The safety margins increase with each player, resulting in a phenomenon known as the bullwhip effect: the further the company is from the original source of demand information, the more distorted the information is. This thesis concentrates on the upstream parts of the value chain of consumer packaged goods, and more precisely on the packaging value chain. Packaging is becoming a part of the product, with informative and interactive features, and is therefore not just a cost item needed to protect the product. The upstream part of the CPG value chain is distinctive in that the product changes after each involved party, and therefore the original demand information from the retailers cannot be utilized as such, even if it were transferred seamlessly. The objective of this thesis is to examine the main drivers for collaboration and the barriers behind the modest adoption of collaborative models. Another objective is to define a collaborative demand information management model and test it in a pilot business situation in order to see whether the barriers can be eliminated. The empirical part of this thesis contains three parts, all related to the research objective but involving different target groups, viewpoints and research approaches. The study shows evidence that the main barriers to collaboration are very similar to the barriers in the lower part of the same value chain: lack of trust, lack of a business case and lack of senior management commitment. Eliminating one of them, the lack of a business case, is not enough to eliminate the two other barriers, as the operational model in this thesis shows. Uncertainty about the future, fear of losing an independent position in purchasing decision making and lack of commitment remain strong enough barriers to prevent the implementation of the proposed collaborative business model.
The study proposes a new way of defining the value chain processes: it divides the contracting and planning process into two processes, one managing the commercial parts and the other managing the quantity- and specification-related issues. This model can reduce the resistance to collaboration, as the commercial part of the contracting process would remain the same as in the traditional model. The quantity- and specification-related issues would be managed by the parties with the best capabilities and resources, as well as access to the original demand information. The parties in between would be involved in the planning process as well, as their impact on the next party upstream is significant. The study also highlights the future challenges for companies operating in the CPG value chain. The markets are becoming global, with toughening competition. Technology development will also most likely continue at a speed exceeding the industry's capacity to adapt. Value chains are becoming increasingly dynamic, which means shorter and more agile business relationships, and at the same time consumer demand is becoming more difficult to predict due to shorter product life cycles and trends. These changes will certainly have an effect on companies' operational models, but it is very difficult to estimate when and how the proven methods will gain wide enough adoption to become standards.
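The bullwhip effect mentioned in the abstract above can be illustrated with a toy simulation. The snippet below is not part of the thesis; it uses a generic order-up-to policy with an exponential-smoothing forecast, one standard way of reproducing the amplification, and all parameter values are arbitrary.

import random

def tier_orders(demand, alpha=0.3, lead_time=2):
    # Order-up-to policy: q_t = d_t + (L + 1) * (f_t - f_prev),
    # where f is an exponentially smoothed forecast of the demand seen.
    f_prev = demand[0]
    orders = []
    for d in demand:
        f = alpha * d + (1 - alpha) * f_prev
        orders.append(max(0.0, d + (lead_time + 1) * (f - f_prev)))
        f_prev = f
    return orders

random.seed(1)
consumer = [100 + random.gauss(0, 10) for _ in range(200)]   # retail demand
streams = [consumer]
for _ in range(4):                                           # four upstream tiers
    streams.append(tier_orders(streams[-1]))

for i, s in enumerate(streams):
    mean = sum(s) / len(s)
    var = sum((x - mean) ** 2 for x in s) / len(s)
    print(f"tier {i}: mean order {mean:6.1f}, variance {var:8.1f}")

Each tier sees only the orders of the tier below, so the order variance grows from tier to tier; this is exactly the distortion that sharing the original retail demand information is meant to remove.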

Relevance: 20.00%

Abstract:

During the last few years, the discussion on the marginal social costs of transportation has been active. Applying these externalities as a tool to control transport would fulfil the polluter-pays principle and at the same time create a fair control method across transport modes. This report presents the results of two calculation algorithms developed to estimate the marginal social costs based on the externalities of air pollution. The first algorithm calculates future scenarios of the externalities of sea transport in the Gulf of Finland up to 2015. The second algorithm calculates the externalities of Russian passenger car transit traffic via Finland, taking into account both sea and road transport. The algorithm estimates the ship-originated emissions of carbon dioxide (CO2), nitrogen oxides (NOx), sulphur oxides (SOx) and particulates (PM), and the corresponding externalities, for each year from 2007 to 2015. The total NOx emissions in the Gulf of Finland from the six ship types were almost 75.7 kilotons (Table 5.2) in 2007. The ship types are: passenger (including cruisers and ROPAX vessels), tanker, general cargo, Ro-Ro, container and bulk vessels. Due to the increase in traffic, the NOx emissions for 2015 are estimated at 112 kilotons. For comparison, the NOx emission estimate for all Baltic Sea shipping was 370 kilotons in 2006 (Stipa et al., 2007). The total marginal social costs due to ship-originated CO2, NOx, SOx and PM emissions in the Gulf of Finland amounted to almost 175 million euros in 2007. The costs will increase to nearly 214 million euros in 2015 due to the traffic growth. The major part of the externalities is due to CO2 emissions. If the CO2 externalities are excluded from the results, the total externalities are 57 million euros in 2007. Eight years later (2015), the externalities would be 28% lower, 41 million euros (Table 8.1). This is the result of the regulations reducing the sulphur content of marine fuels. The majority of new car transit goes through Finland to Russia due to the lack of port capacity in Russia. The number of cars was 339,620 vehicles in 2005 (Statistics of Finnish Customs, 2008). The externalities are calculated for the transportation of passenger vehicles as follows: by ship to a Finnish port and, after that, by truck to the Russian border checkpoint. The externalities are between 2 and 3 million euros (at the year 2000 cost level) for each route. The ports included in the calculations are Hamina, Hanko, Kotka and Turku. With Euro-3 standard trucks, the port of Hanko would be the best choice for transporting the vehicles, because of the lower emissions of newer trucks and the shorter shipping distance. If the trucks are more polluting Euro-1 level trucks, the port of Kotka would be the best choice. This indicates that truck emissions have a considerable effect on the externalities, and that transporting light cargo, such as passenger cars, by ship produces considerably high emission externalities. The emission externalities approach offers new insight for valuing different traffic modes. However, the calculation of marginal social costs based on air emission externalities should not be regarded as a ready-made calculation system. The system clearly needs further improvement, but it can already be considered a potential tool for political decision making.
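In essence, the externality calculation multiplies each pollutant's emitted mass by a unit damage cost and sums over pollutants. The sketch below only illustrates that structure; the unit costs and emission figures are invented and do not come from the report.

# Hypothetical unit damage costs, EUR per tonne of pollutant (illustrative only)
UNIT_COST_EUR_PER_T = {"CO2": 20.0, "NOx": 4400.0, "SOx": 6000.0, "PM": 12000.0}

def externality_cost(emissions_t, unit_costs=UNIT_COST_EUR_PER_T):
    # Marginal social cost = sum over pollutants of mass (t) * unit cost (EUR/t)
    return sum(emissions_t[p] * unit_costs[p] for p in emissions_t)

# Invented annual emissions for one ship type, in tonnes
ship_emissions = {"CO2": 500_000, "NOx": 12_000, "SOx": 4_000, "PM": 600}
total = externality_cost(ship_emissions)
without_co2 = externality_cost({p: m for p, m in ship_emissions.items() if p != "CO2"})
print(f"total externality: {total/1e6:.1f} MEUR, excluding CO2: {without_co2/1e6:.1f} MEUR")

The comparison with and without CO2 mirrors the report's observation that CO2 dominates the totals, while the remaining externalities are driven by the fuel-dependent NOx, SOx and PM emissions.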

Relevance: 20.00%

Abstract:

It is a well-known phenomenon that the constant amplitude fatigue limit of a large component is lower than the fatigue limit of a small specimen made of the same material. In notched components the opposite occurs: the fatigue limit defined as the maximum stress at the notch is higher than that achieved with smooth specimens. These two effects have been taken into account in most design handbooks with the help of empirical formulas or design curves. The basic idea of this study is that the size effect can mainly be explained by the statistical size effect. A component subjected to an alternating load can be assumed to form a sample of initiated cracks at the end of the crack initiation phase. The size of the sample depends on the size of the specimen in question. The main objective of this study is to develop a statistical model for the estimation of this kind of size effect. It was shown that the size of a sample of initiated cracks should be based on the stressed surface area of the specimen. In the case of a varying stress distribution, an effective stress area must be calculated. It is based on the decreasing probability of equally sized initiated cracks at lower stress levels. If the distribution function of the parent population of cracks is known, the distribution of the maximum crack size in a sample can be defined. This makes it possible to calculate an estimate of the largest expected crack for any sample size. The estimate of the fatigue limit can then be calculated with the help of linear elastic fracture mechanics. In notched components another source of size effect has to be taken into account. If we consider two specimens of similar shape but different size, the stress gradient in the smaller specimen is steeper. If there is an initiated crack in both of them, the stress intensity factor at the crack in the larger specimen is higher. The second goal of this thesis is to create a calculation method for this effect, which is called the geometric size effect. The proposed method for the calculation of the geometric size effect is also based on linear elastic fracture mechanics. It is possible to calculate an accurate value of the stress intensity factor in a non-linear stress field using weight functions. The calculated stress intensity factor values at the initiated crack can be compared to the corresponding stress intensity factor due to constant stress. The notch size effect is calculated as the ratio of these stress intensity factors. The presented methods were tested against experimental results taken from three German doctoral theses. Two candidates for the parent population of initiated cracks were found: the Weibull distribution and the log-normal distribution. Both of them can be used successfully for the prediction of the statistical size effect for smooth specimens. In the case of notched components, the geometric size effect due to the stress gradient must be combined with the statistical size effect. The proposed method gives good results as long as the notch in question is blunt enough. For very sharp notches, with a stress concentration factor of about 5 or higher, the method does not give sufficient results. It was shown that the plastic portion of the strain becomes quite high at the root of such notches. The use of linear elastic fracture mechanics therefore becomes questionable.
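The chain of reasoning, from stressed surface area to sample size, to the largest expected crack, to a fatigue limit via linear elastic fracture mechanics, can be sketched as follows. This is not the thesis model; it assumes a Weibull parent population, uses the median of the sample maximum, and a generic K = Y*sigma*sqrt(pi*a) with an arbitrary geometry factor, purely to show the structure of the calculation.

import math

def largest_expected_crack(area_mm2, ref_area_mm2, shape_k, scale_mm):
    # Sample size grows with the stressed surface area (statistical size effect).
    n = max(1.0, area_mm2 / ref_area_mm2)
    # Median of the maximum of n i.i.d. Weibull crack sizes: F(a)**n = 0.5
    p = 0.5 ** (1.0 / n)
    return scale_mm * (-math.log(1.0 - p)) ** (1.0 / shape_k)  # Weibull inverse CDF

def fatigue_limit_mpa(a_max_mm, dK_th, Y=0.73):
    # LEFM: the fatigue limit is the stress at which the largest expected crack
    # just reaches the threshold stress intensity, sigma = dK_th / (Y*sqrt(pi*a)).
    return dK_th / (Y * math.sqrt(math.pi * a_max_mm * 1e-3))

for area in (100.0, 10_000.0):  # small vs. large stressed area, mm^2
    a_max = largest_expected_crack(area, ref_area_mm2=100.0, shape_k=1.5, scale_mm=0.05)
    print(f"area {area:8.0f} mm^2: a_max = {a_max:.3f} mm, "
          f"fatigue limit = {fatigue_limit_mpa(a_max, dK_th=6.0):.0f} MPa")

The larger stressed area yields a larger expected maximum crack and hence a lower fatigue limit, which is the statistical size effect the thesis models; the geometric size effect would then enter through the ratio of weight-function-based stress intensity factors.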

Relevance: 20.00%

Abstract:

The dissertation is based on four articles dealing with the purification of recalcitrant lignin-containing waters. Lignin, a complex substance recalcitrant to most treatment technologies, seriously hampers waste management in the pulp and paper industry. Therefore, lignin degradation is studied here using wet oxidation (WO) as the process method. Special attention is paid to the improvement in biodegradability and the reduction of lignin content, since they have special importance for any subsequent biological treatment. In most cases wet oxidation is not used as a complete mineralization method but as a pre-treatment in order to eliminate toxic components and to reduce the high level of organics produced. The combination of wet oxidation with a biological treatment can be a good option due to its effectiveness and its relatively low technology cost. The literature part gives an overview of Advanced Oxidation Processes (AOPs). A hot oxidation process, wet oxidation (WO), is investigated in detail and is the AOP used in the research. The background and main principles of wet oxidation, its industrial applications, the combination of wet oxidation with other water treatment technologies, the principal reactions in WO, and key aspects of modelling and reaction kinetics are presented. Wood composition and lignin characterization (chemical composition, structure and origin), lignin-containing waters, lignin degradation and reuse possibilities, and purification practices for lignin-containing waters are also covered. The aim of the research was to investigate the effect of the operating conditions of WO, such as temperature, partial pressure of oxygen, pH and initial concentration of the wastewater, on the efficiency, and to enhance the process and estimate optimal conditions for the WO of recalcitrant lignin waters. Two different waters were studied (a lignin water model solution and debarking water from the paper industry) to give as representative conditions as possible. Due to the great importance of reusing and minimizing industrial residues, further research was carried out using residual ash of an Estonian power plant as a catalyst in the wet oxidation of lignin-containing water. Developing a kinetic model that includes parameters such as TOC in the prediction makes it possible to estimate the amount of inorganic substances formed (the degradation rate of the waste), and not only the decrease in COD and BOD. The target compound, lignin, is included in the model through its COD value (CODlignin). Such a kinetic model can be valuable in developing WO treatment processes for lignin-containing waters, or for other wastewaters containing one or more target compounds. In the first article, wet oxidation of "pure" lignin water was investigated as a model case with the aim of degrading lignin and enhancing water biodegradability. The experiments were performed at various temperatures (110-190°C), partial oxygen pressures (0.5-1.5 MPa) and pH values (5, 9 and 12). The experiments showed that increasing the temperature notably improved the process efficiency. A 75% lignin reduction was detected at the lowest temperature tested, and lignin removal improved to 100% at 190°C. The effect of temperature on the COD removal rate was lower, but clearly detectable: 53% of the organics were oxidized at 190°C. The effect of pH was seen mostly in lignin removal. Increasing the pH enhanced the lignin removal efficiency from 60% to nearly 100%. A good biodegradability ratio (over 0.5) was generally achieved.
The aim of the second article was to develop a mathematical model for the wet oxidation of "pure" lignin water using lumped characteristics of the water (COD, BOD, TOC) and the lignin concentration. The model agreed well with the experimental data (R² = 0.93 at pH 5 and 12), and the predicted concentration changes during wet oxidation followed the experimental results adequately. The model also correctly showed the trend in biodegradability (BOD/COD) changes. In the third article, the purpose of the research was to estimate optimal conditions for the wet oxidation (WO) of debarking water from the paper industry. The WO experiments were performed at various temperatures, partial oxygen pressures and pH values. The experiments showed that lignin degradation and organics removal are affected remarkably by temperature and pH. A 78-97% lignin reduction was detected under different WO conditions. An initial pH of 12 caused faster removal of the tannin/lignin content, but an initial pH of 5 was more effective for the removal of total organics, represented by COD and TOC. Most of the decrease in organic substance concentrations occurred in the first 60 minutes. The aim of the fourth article was to compare the behaviour of two reaction kinetic models, based on experiments of wet oxidation of industrial debarking water under different conditions. The simpler model took into account only the changes in COD, BOD and TOC; the advanced model was similar to the model used in the second article. Comparing the results of the models, the second model was found to be more suitable for describing the kinetics of wet oxidation of debarking water. The significance of the reactions involved was compared on the basis of the model: for instance, lignin degraded first to other chemically oxidizable compounds rather than directly to biodegradable products. Catalytic wet oxidation (CWO) of lignin-containing waters is briefly presented at the end of the dissertation. Two completely different catalysts were used: a commercial Pt catalyst and waste power plant ash. CWO showed good performance: using 1 g/L of residual ash gave a lignin removal of 86% and a COD removal of 39% at 150°C (a lower temperature and pressure than in non-catalytic WO). It was noted that the ash catalyst caused a remarkable lignin removal already during the pre-heating: at 'zero' time, 58% of the lignin had degraded. In general, wet oxidation is not recommended for use as a complete mineralization method, but as a pre-treatment phase to eliminate toxic or poorly biodegradable components and to reduce the high level of organics. Biological treatment is an appropriate post-treatment method, since easily biodegradable organic matter remains after the WO process. The combination of wet oxidation with a subsequent biological treatment can be an effective option for the treatment of lignin-containing waters.
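To make the idea of a lumped kinetic model concrete, the sketch below integrates a deliberately simplified reaction scheme in which lignin COD first degrades to other chemically oxidizable matter, which then oxidizes to biodegradable matter and further to end products. This is a minimal structure of my own, not the model of the articles, and all rate constants and concentrations are invented.

# Explicit-Euler integration of first-order lumped reactions (rate constants in 1/min):
# lignin COD --k1--> other oxidizable COD --k2--> BOD --k3--> mineralized products
def simulate_wo(cod_lignin0, cod_other0, bod0, k1=0.04, k2=0.02, k3=0.01,
                dt=1.0, t_end=120.0):
    Cl, Co, Cb, t = cod_lignin0, cod_other0, bod0, 0.0
    while t < t_end:
        dCl = -k1 * Cl
        dCo = k1 * Cl - k2 * Co
        dCb = k2 * Co - k3 * Cb
        Cl, Co, Cb = Cl + dt * dCl, Co + dt * dCo, Cb + dt * dCb
        t += dt
    return Cl, Co, Cb

Cl, Co, Cb = simulate_wo(cod_lignin0=800.0, cod_other0=400.0, bod0=100.0)  # mg O2/L
cod = Cl + Co + Cb
print(f"after 120 min: COD = {cod:.0f} mg/L, BOD/COD = {Cb/cod:.2f}")

Fitting the rate constants of such a scheme to measured COD, BOD, TOC and lignin data is what allows a model of this kind to predict both the decrease of the lumped parameters and the improvement of the BOD/COD ratio.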

Relevance: 20.00%

Abstract:

Demand forecasting is one of the fundamental managerial tasks. Most companies do not know their future demand, so they have to make plans based on demand forecasts. The literature offers many methods and approaches for producing forecasts. When selecting the forecasting approach, companies need to estimate the benefits provided by particular methods, as well as the resources that applying the methods calls for. The earlier literature points out that even though many forecasting methods are available, selecting a suitable approach and implementing and managing it is a complex cross-functional matter. However, research that focuses on the managerial side of forecasting is relatively rare. This thesis explores the managerial problems that are involved when demand forecasting methods are applied in a context where a company produces products for other manufacturing companies. Industrial companies have some characteristics that differ from consumer companies, such as a typically smaller number of customers and closer customer relationships. The research questions of this thesis are: 1. What kind of challenges are there in organizing an adequate forecasting process in the industrial context? 2. What kind of tools of analysis can be utilized to support the improvement of the forecasting process? The main methodological approach in this study is design science, where the main objective is to develop tentative solutions to real-life problems. The research data has been collected from two organizations. Managerial problems in organizing demand forecasting can be found in four interlinked areas: 1. defining the operational environment for forecasting, 2. defining the forecasting methods, 3. defining the organizational responsibilities, and 4. defining the forecasting performance measurement process. In all these areas, examples of managerial problems are described, and approaches for mitigating these problems are outlined.
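As a small illustration of the fourth area, defining the forecasting performance measurement process, the snippet below computes two commonly used measures, bias and MAPE. The metrics and numbers are generic examples, not prescriptions from the thesis.

def forecast_performance(actuals, forecasts):
    # Bias = mean error; MAPE = mean absolute percentage error.
    errors = [f - a for a, f in zip(actuals, forecasts)]
    bias = sum(errors) / len(errors)
    mape = 100.0 * sum(abs(e) / a for a, e in zip(actuals, errors)) / len(errors)
    return bias, mape

actual   = [120, 135, 128, 150, 160, 145]   # realized monthly demand
forecast = [110, 140, 130, 140, 150, 150]   # forecasts made one month ahead
bias, mape = forecast_performance(actual, forecast)
print(f"bias {bias:+.1f} units, MAPE {mape:.1f} %")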

Relevance: 20.00%

Abstract:

Lower extremity peripheral arterial disease (PAD) is associated with decreased functional status, diminished quality of life (QoL), amputation, myocardial infarction, stroke, and death. Nevertheless, public awareness of PAD as a cause of morbidity and mortality is low. The aim of this study was to assess the incidence of major lower extremity amputation (LEA) due to PAD, the extent of reamputations, and survival after major LEA in a population-based PAD patient cohort. Furthermore, the aim was to assess the functional capacity of patients with LEA, and the QoL after lower extremity revascularization and major amputation. All 210 amputees due to PAD in 1998–2002 and all 519 revascularized patients in 1998–2003 were studied. The 59 amputees alive in 2004 were interviewed using a structured QoL questionnaire. Two age-, gender- and domicile-matched controls per amputee filled in and returned a postal self-administered QoL questionnaire, as did the 231 revascularized PAD patients who agreed to participate in the study and one control person for each of these patients. The incidence rate of major LEA was 24.1/100,000 person-years and remained considerably high during the years studied. The mortality rate was 21% at one month and 52% at one year, and the overall mortality rate was 80%. LEA was associated with a 7.4-fold annual mortality risk compared with the reference population in Turku. Twenty-two patients (10%) underwent ipsilateral conversion from below-knee (BK) to above-knee (AK) amputation. Fifty patients (24%) ended up with a contralateral major LEA within two to four amputation operations. Three bilateral amputations were performed at the first major LEA operation. Of the 51 survivors returning home after their first major LEA, 36 (71%) received a prosthesis, and 16 of them (44%) were able to walk both indoors and outdoors. Of the 68 patients who were discharged to institutional care, three (4%) had a prosthesis one year after LEA. Both amputees and revascularized patients had poor physical functioning and significantly more depressive symptoms than their controls. Depressive symptoms were more common in the institutionalized amputees than in the home-dwelling amputees. The surviving amputees and their controls had similar life satisfaction. The amputees felt satisfied and contented, whether they lived in long-term care or at home. PAD patients who had undergone revascularization had poorer QoL than their controls. The revascularized patients' responses on their perceived physical functioning gave the impression that these patients are in a declining phase of life and that revascularization, even when successful, may not be sufficient to improve overall function. It is possible that addressing rehabilitation issues earlier in the care may produce a more positive functional outcome. Depressive symptoms should be recognized and thoroughly considered while the patients are recovering from their revascularization operation. Primary care should also develop proper follow-up, and community organizations should provide exercise groups for those who are able to return home, since they very often live alone. Rehabilitation programs should consider not only physical disability assessment but also QoL.

Relevance: 20.00%

Abstract:

This work reviewed the methods used to determine the volatile organic compound (VOC) emissions of the Porvoo refinery and assessed the suitability of both the currently used and new methods for the emission determination of the refinery. The current methods were evaluated by reviewing the emissions of the different refinery areas in the 2000s and by comparing them with the corresponding emissions of other refineries. When speaking of volatile organic compounds, methane is commonly excluded from the definition and the term NMVOC is used. In this work, an estimate of the methane emissions of the Porvoo refinery was calculated and its effect on the total NMVOC emissions was assessed. The total methane emissions were found to be about ten times smaller than the VOC emissions, so adding them to the NMVOC emissions has little effect. The investment and operating costs of the methods, as well as their longer-term costs, were also assessed. The methods currently in use at the Porvoo refinery are cost-effective. Of the new methods, DIAL, SOF and OGI are considerably more expensive, also in a long-term comparison. The annual costs of the current methods arise from the person-hours required by the measurements. Of the new methods, SOF and DIAL require the use of external measurement contractors. The OGI camera, which is still under development with respect to mass flow determination, can be operated by the refinery's own personnel. Unlike the DIAL and SOF equipment, an OGI camera is purchased outright and can therefore be used year-round as needed, for example to locate major leaks and to support a tightened LDAR programme. Based on this review, it would be advisable to verify, in particular, the emissions of the process and tank areas and the wastewater system, currently obtained with calculation methods, by using the more accurate DIAL, SOF or, later, OGI methods, and to adjust the calculation methods to match the emissions determined with them.

Relevance: 20.00%

Abstract:

Centrifugal pumps are widely used in industrial and municipal applications, and they are an important end-use application of electric energy. However, in many cases centrifugal pumps operate with a significantly lower energy efficiency than they actually could, which typically increases the pump energy consumption and the resulting energy costs. Typical reasons for this are the incorrect dimensioning of the pumping system components and the inefficiency of the applied pump control method. Besides the increase in energy costs, inefficient operation may increase the risk of a pump failure and thereby the maintenance costs. In the worst case, a pump failure may lead to a process shutdown, accruing additional costs. Nowadays, centrifugal pumps are often controlled by adjusting their rotational speed, which affects the resulting flow rate and output pressure of the pumped fluid. Typically, the speed control is realised with a frequency converter that allows the control of the rotational speed of an induction motor. Since a frequency converter can estimate the motor rotational speed and shaft torque without external measurement sensors on the motor shaft, it also allows the development and use of sensorless methods for the estimation of the pump operation. Still today, the monitoring of pump operation is based on additional measurements and visual check-ups, which may not be sufficient for determining the energy efficiency of the pump operation. This doctoral thesis concentrates on methods that allow the use of a frequency converter as a monitoring and analysis device for a centrifugal pump. Firstly, the determination of energy-efficiency- and reliability-based limits for the recommendable operating region of a variable-speed-driven centrifugal pump is discussed with a case study of a laboratory pumping system. Then, three model-based estimation methods for the pump operating location are studied, and their accuracy is determined by laboratory tests. In addition, a novel method to detect the occurrence of cavitation or flow recirculation in a centrifugal pump with a frequency converter is introduced. Its sensitivity compared with known cavitation detection methods is evaluated, and its applicability is verified by laboratory measurements for three different pumps and with two different frequency converters. The main focus of this thesis is on radial-flow end-suction centrifugal pumps, but the studied methods can also be feasible for mixed- and axial-flow centrifugal pumps, if their characteristics allow it.
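One way a frequency converter can estimate the pump operating point without process sensors is to scale its internal speed and power estimates to the nominal speed with the affinity laws and read the flow rate from the pump's published QP curve. The sketch below follows that general idea only; it is not the method of the thesis, and the characteristic curves and numbers are invented.

def interp(x, xs, ys):
    # Piecewise-linear interpolation on increasing xs (clamped at the upper end).
    for i in range(1, len(xs)):
        if x <= xs[i]:
            f = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + f * (ys[i] - ys[i - 1])
    return ys[-1]

# Invented characteristic curves at the nominal speed n0
Q_CURVE = [0.0, 10.0, 20.0, 30.0, 40.0]   # flow rate, l/s
P_CURVE = [3.0, 4.2, 5.5, 6.5, 7.2]       # shaft power, kW
H_CURVE = [32.0, 30.0, 27.0, 22.0, 15.0]  # head, m

def estimate_operating_point(n_est, P_est, n0=1450.0):
    # Affinity laws: P ~ n^3, Q ~ n, H ~ n^2
    P_at_n0 = P_est * (n0 / n_est) ** 3           # scale power to nominal speed
    Q_at_n0 = interp(P_at_n0, P_CURVE, Q_CURVE)   # flow from the QP curve
    Q = Q_at_n0 * (n_est / n0)                    # scale flow to actual speed
    H = interp(Q_at_n0, Q_CURVE, H_CURVE) * (n_est / n0) ** 2
    return Q, H

Q, H = estimate_operating_point(n_est=1300.0, P_est=4.0)
print(f"estimated operating point: Q = {Q:.1f} l/s, H = {H:.1f} m")

Comparing such an estimated operating point against the energy-efficiency- and reliability-based limits of the recommendable operating region is what turns the converter into a monitoring device.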

Relevance: 20.00%

Abstract:

This PhD thesis in Mathematics belongs to the field of Geometric Function Theory. The thesis consists of four original papers. The topic studied deals with quasiconformal mappings and their distortion theory in Euclidean n-dimensional spaces. This theory has its roots in the pioneering papers of F. W. Gehring and J. Väisälä published in the early 1960s, and it has been studied by many mathematicians thereafter. In the first paper we refine the known bounds for the so-called Mori constant and also estimate the distortion in the hyperbolic metric. The second paper deals with radial functions, which are simple examples of quasiconformal mappings. These radial functions lead us to the study of the so-called p-angular distance, which has been studied recently, e.g., by L. Maligranda and S. Dragomir. In the third paper we study a class of functions of a real variable considered by P. Lindqvist in an influential paper. This leads one to study parametrized analogues of the classical trigonometric and hyperbolic functions, which for the parameter value p = 2 coincide with the classical functions. Gaussian hypergeometric functions have an important role in the study of these special functions. Several new inequalities and identities involving p-analogues of these functions are also given. In the fourth paper we study the generalized complete elliptic integrals, modular functions and some related functions. We find upper and lower bounds for these functions, and these bounds are given in a simple form. This theory has a long history going back two centuries and includes names such as A. M. Legendre, C. Jacobi and C. F. Gauss. Modular functions also occur in the study of quasiconformal mappings. Conformal invariants, such as the modulus of a curve family, are often applied in quasiconformal mapping theory. The invariants can sometimes be expressed in terms of special conformal mappings. This fact explains why special functions often occur in this theory.
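For readers unfamiliar with the objects mentioned above, one common way of writing the classical and generalized complete elliptic integrals in terms of the Gaussian hypergeometric function is sketched below; the precise conventions used in the thesis may differ, so this is illustrative notation rather than the author's definitions.

\[
  \mathcal{K}(r) = \int_{0}^{\pi/2} \frac{d\theta}{\sqrt{1 - r^{2}\sin^{2}\theta}}
  = \frac{\pi}{2}\, {}_{2}F_{1}\!\left(\tfrac12,\tfrac12;1;r^{2}\right),
  \qquad 0 < r < 1,
\]
\[
  \mathcal{K}_{a}(r) = \frac{\pi}{2}\, {}_{2}F_{1}\!\left(a,\,1-a;\,1;\,r^{2}\right),
  \qquad
  \mathcal{E}_{a}(r) = \frac{\pi}{2}\, {}_{2}F_{1}\!\left(a-1,\,1-a;\,1;\,r^{2}\right),
\]

with a = 1/2 recovering the classical integrals. Bounds for such functions, and for the modular functions built from them, are the kind of results the fourth paper provides in simple form.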

Relevance: 20.00%

Abstract:

In this thesis, simple methods have been sought to lower the teacher's threshold for starting to apply constructive alignment in instruction. From the phases of the instructional process, aspects that can be improved with little effort by the teacher have been identified. Teachers were interviewed in order to find out what students actually learn in computer science courses. A quantitative analysis of the structured interviews showed that in addition to subject-specific skills and knowledge, students learn many other skills that should be mentioned in the learning outcomes of the course. The students' background, such as their prior knowledge, learning style and culture, affects how they learn in a course. A survey was conducted to map the learning styles of computer science students and to see whether their cultural background affected their learning style. A statistical analysis of the data indicated that computer science students learn differently from engineering students in general and that there is a connection between the student's culture and learning style. In this thesis, a simple self-assessment scale based on Bloom's revised taxonomy has also been developed. A statistical analysis of the test results indicates that in general the scale is quite reliable, but individual students may still slightly overestimate or underestimate their knowledge level. For students, being able to follow their own progress is motivating, and for a teacher, self-assessment results give information about how the class is proceeding and what the level of the students' knowledge is.