59 results for SENSITIVITY PROBLEMS


Relevance: 20.00%

Abstract:

Various impurities are carried into the papermaking process, and many kinds of deposits form in its unit processes. Impurities can cause process problems and lower product quality. Determining the origin and composition of the impurities often requires several analytical methods, and classifying the impurities is usually necessary before a more detailed chemical analysis. Microscopy, IR spectroscopy and analytical pyrolysis have been the methods most commonly used for the qualitative classification of papermaking impurities. Raman spectroscopy is a rarer method in paper industry research, but Raman instruments have developed strongly over the last decade, and Raman spectroscopy has demonstrated its potential in polymer, pharmaceutical and fuel industry studies. In this work, the impurities of a food packaging board were studied with a Raman spectroscope. The aim was to assess the usefulness of Raman analysis for the online classification of board impurities. The measurements were performed with a Spectracode RP-1 Raman instrument. The studies showed that sample fluorescence and sample degradation set limits on the Raman analysis of impurities. Online identification of impurities works when high laser powers and long irradiation times are used; however, the laser sensitivity and fluorescence of the samples restrict the use of high instrument parameters. Reducing the instrument parameters lowered the signal-to-noise ratio of the measurements, which in turn made online identification fail.
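The trade-off described above can be illustrated with a small sketch. Assuming shot-noise-limited detection, signal counts grow with laser power and irradiation time while the noise grows with their square root, so the SNR falls as either parameter is reduced; the constants and the identification threshold below are invented for illustration, not instrument values from the study.

```python
# Illustrative only: shot-noise-limited SNR ~ sqrt(P * t), so reducing
# laser power or irradiation time pushes spectra below an identification
# threshold. All numbers are hypothetical placeholders.
import math

def raman_snr(power_mw, time_s, counts_per_mw_s=50.0):
    signal = counts_per_mw_s * power_mw * time_s
    return math.sqrt(signal)   # signal / sqrt(signal) at the shot-noise limit

IDENTIFY_SNR = 30.0            # hypothetical library-match threshold
for p, t in [(100, 2.0), (20, 2.0), (20, 0.2)]:
    snr = raman_snr(p, t)
    print(f"P={p} mW, t={t} s -> SNR={snr:.0f} {'ok' if snr >= IDENTIFY_SNR else 'fails'}")
```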

Relevance: 20.00%

Abstract:

Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) in the solution, where the generation of spurious oscillations or smearing must be precluded. This work is devoted to the development of an efficient numerical technique to deal with pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method developed in this study is based on meshless numerical particles that carry the solution along the characteristics defining the convective transport. The resolution of steep fronts in the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. In the case of convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has second-order spatial accuracy and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in the regions of steep fronts in the solution. Moreover, the particle transport method can be successfully used for the numerical simulation of real-life problems in, for example, chemical engineering.
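As an illustration of the core idea, the following minimal Python sketch (not the thesis implementation) moves numerical particles exactly along the characteristics of the 1D linear convection equation u_t + a*u_x = 0 and projects their values back onto a fixed Eulerian grid; the simple weighted deposition here stands in for the fast monotone projection technique designed in the work.

```python
# Minimal semi-Lagrangian particle transport step for u_t + a*u_x = 0.
# The projection below is a crude illustrative stand-in, not the thesis method.
import numpy as np

def advect_particles(xp, a, dt):
    """Move particles exactly along the characteristics x' = a."""
    return xp + a * dt

def project_to_grid(xp, up, grid):
    """Deposit particle values onto a fixed Eulerian grid with linear
    interpolation weights (weighted average per grid node)."""
    dx = grid[1] - grid[0]
    u = np.zeros_like(grid)
    w = np.zeros_like(grid)
    i = np.clip(((xp - grid[0]) / dx).astype(int), 0, len(grid) - 2)
    lam = (xp - grid[i]) / dx              # local coordinate in [0, 1)
    np.add.at(u, i,     (1 - lam) * up)
    np.add.at(w, i,     (1 - lam))
    np.add.at(u, i + 1, lam * up)
    np.add.at(w, i + 1, lam)
    return np.where(w > 0, u / np.maximum(w, 1e-12), 0.0)

# One step: a steep front is transported without smearing at the particle level.
grid = np.linspace(0.0, 1.0, 101)
xp = np.linspace(0.0, 1.0, 401)            # particle cloud, ~4 per cell
up = np.where(xp < 0.5, 1.0, 0.0)          # step profile (steep front)
xp = advect_particles(xp, a=0.3, dt=0.5)   # exact characteristic transport
u_grid = project_to_grid(xp % 1.0, up, grid)   # periodic wrap, then project
```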

Relevance: 20.00%

Abstract:

Alkyl ketene dimers (AKD) are effective and highly hydrophobic sizing agents for the internal sizing of alkaline papers, but in some cases they may form deposits on paper machines and copiers. Alkenyl succinic anhydride (ASA) based sizing agents, in turn, are highly reactive and produce on-machine sizing, but under uncontrolled wet end conditions the hydrolysis of ASA may cause problems. This thesis aims at developing an improved ketene dimer based sizing agent that would have a lower tendency to form deposits on paper machines and copiers than a traditional type of AKD. A further aim is to improve the ink jet printability of AKD-sized paper. The sizing characteristics of ketene dimers are compared to those of ASA. Paper machine trials and printability tests showed a lower tendency of ketene dimer deposit formation when branched fatty acids were used in the manufacture of the ketene dimer based sizing agent. Fitting the melting and solidification temperature of a ketene dimer size to the process temperature of a paper machine or a copier contributes to machine cleanliness. The paper sized with the branched ketene dimer was found to be less hydrophobic than paper sized with traditional AKD; however, the ink jet print quality could be improved by using the branched ketene dimer, which helps in balancing the paper hydrophobicity for both black and color printing. The use of a high amount of protective colloid in the emulsification was found to be useful for the sizing performance of the liquid-type sizing agents. Similar findings were obtained for both the branched ketene dimer and ASA.

Relevance: 20.00%

Abstract:

Software engineering is criticized as not being engineering or a 'well-developed' science at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, and whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use only to collect metrics afterwards; the values of the relevant metrics have to be predicted, too. These predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes; in this thesis only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis but avoids the problems and pitfalls found in that method, and it is relatively easy to use and learn. Effort estimation accuracy has improved significantly after taking this model into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement, for which the author has developed a three-level solution. All currently used size metrics are static in nature, but this new proposed metric is dynamic: it makes use of the increasing understanding of the nature of the work as specification and design proceed, and thus 'grows up' along with the software project. Developing an effort estimation model is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that the methods are used properly, that estimates are stored, reported and analyzed, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also briefly introduced; its purpose is to define and maintain a measurement or estimation process. Without a proper framework, the estimation capability of an organization declines, since it requires effort even to maintain an achieved level of estimation accuracy. Estimation results over several successive releases are analyzed, and it is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity; an example is shown to shed more light on the calibration and on the model itself. There are also remarks on the sensitivity of the model. Finally, an example of usage is shown.
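A toy sketch of the hierarchical idea follows; the structure, names and numbers are hypothetical, not the thesis model. A coarse top-level size estimate is replaced by the sum of finer-grained component estimates as design proceeds, and a calibration factor derived from history data converts size into effort.

```python
# Illustrative only: a hierarchical size estimate that is refined as
# specification and design proceed, plus calibration from history data.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    size_points: float                  # analogous to unadjusted function points
    children: List["Component"] = field(default_factory=list)

    def size(self) -> float:
        """Once children (a finer design level) exist, their sizes
        replace the coarse top-level guess."""
        return sum(c.size() for c in self.children) if self.children else self.size_points

def calibrate(history: List[tuple]) -> float:
    """Hours per size point from (size, actual_hours) history records."""
    return sum(h for _, h in history) / sum(s for s, _ in history)

# Early estimate: one coarse component; later the design adds detail.
system = Component("order handling", 120.0)
system.children = [Component("order entry", 55.0), Component("invoicing", 80.0)]

rate = calibrate([(100.0, 900.0), (150.0, 1300.0)])   # 8.8 h/point
print(f"size = {system.size():.0f} points, effort ~ {system.size() * rate:.0f} h")
```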

Relevance: 20.00%

Abstract:

The objective of this Master's thesis was to design and implement a system for analyzing the steering effects of efficiency measurement in the electricity network business. The network business is a monopoly, so there is no competitive pressure to keep operations efficient and prices low. For this reason, the pricing and operational efficiency of the network business must be supervised by the regulator. Data Envelopment Analysis (DEA) has been chosen as the efficiency measurement method. This thesis presents the theoretical foundations of the DEA method and the problems observed in the efficiency measurement of the network business. Based on these, the required properties of the analysis system were specified and the system was developed. The most important features of the system turned out to be sensitivity analysis, in particular the calculation of the price of interruptions through it, and the possibilities for restricting the weights. The final part of the thesis presents concrete results obtained from the system, illustrating its possible uses.
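For concreteness, the basic efficiency measurement can be sketched as a linear program. The following minimal input-oriented CCR DEA model in Python is illustrative only and uses invented data; the system described in the thesis builds sensitivity analysis, interruption-price calculation and weight restrictions on top of such a core.

```python
# Minimal input-oriented CCR DEA sketch with scipy (illustrative only).
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, j0):
    """Efficiency of unit j0. X: inputs (m x n), Y: outputs (s x n).
    Envelopment form: min theta s.t. X@lam <= theta*x0, Y@lam >= y0, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimize theta
    A_ub = np.vstack([
        np.c_[-X[:, j0], X],                     # X@lam - theta*x0 <= 0
        np.c_[np.zeros(s), -Y],                  # -Y@lam <= -y0
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                                # theta in (0, 1]

# Hypothetical data: three distribution companies,
# inputs = [opex, network length], output = [energy delivered].
X = np.array([[10.0, 12.0, 8.0],
              [50.0, 40.0, 55.0]])
Y = np.array([[100.0, 110.0, 90.0]])
print([round(dea_ccr_efficiency(X, Y, j), 3) for j in range(3)])
```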

Relevance: 20.00%

Abstract:

The purpose of this work is to compile the measurement problems of the pulping process and the possible measurement techniques for solving them. The main focus is on online measurement techniques. The work consists of three parts. The first part is a literature study presenting the basic measurements and control needs of a modern pulping process. It covers the whole fibre line from wood handling to bleaching, as well as the chemical recovery cycle: the evaporation plant, recovery boiler, causticizing plant and lime kiln. In the second part, the measurement problems and possible measurement techniques are compiled into a "roadmap". The information was gathered by visiting three Finnish pulp mills and by interviewing experts in equipment and measurement technology. The interviews indicated a need for a better understanding of process chemistry, and therefore concentration measurements were chosen for further study. The last part presents possible techniques for solving the concentration measurement problems. The selected techniques are near-infrared spectroscopy (NIR), Fourier transform infrared spectroscopy (FTIR), online capillary electrophoresis (CE) and laser-induced plasma emission spectroscopy (LIPS). All of these can be used as online-connected process development tools. Development costs were estimated for an online device connected to process control; they vary from zero man-years for the FTIR technique to five man-years for the CE device, depending on the maturity of the technique and its readiness for solving a given problem. The last part of the work also evaluates the techno-economic feasibility of solving one measurement problem: washing loss measurement. Lignin content would describe the true washing loss better than the current measurements; today, either sodium or COD washing loss is measured. Lignin content can be measured by UV absorption, and the CE device could also be used for washing loss measurement, at least in the process development stage. The economic analysis is based on many simplifications and is not directly suitable for supporting investment decisions. A better measurement and control system could stabilize the operation of the washing plant. An investment in a stabilizing system is profitable if the actual operating point is far enough from the cost minimum, or if washer operation fluctuates, i.e. the standard deviation of the washing loss is large. For a measurement and control system costing €50,000, the payback time in unstable operation is less than 0.5 years if the COD washing loss varies between 5.2 and 11.6 kg/odt around a set point of 8.4 kg/odt. The dilution factor then varies between 1.7 and 3.6 m3/odt around a set point of 2.5 m3/odt.
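The payback logic can be sketched as follows. The cost curve, unit prices and production volume below are invented placeholders rather than the thesis's cost model, but the structure is the same: the investment is divided by the annual savings from operating at the cost minimum instead of fluctuating around it.

```python
# Back-of-the-envelope payback sketch; all constants are hypothetical.
def washing_cost_per_odt(dilution, c_codloss=0.05, c_evap=1.2, k=12.0):
    """Total cost (EUR/odt): COD washing loss falls as the dilution factor
    rises, while evaporation (steam) cost rises linearly with dilution."""
    cod_loss = k / dilution                  # kg COD/odt, crude inverse relation
    return c_codloss * cod_loss * 10 + c_evap * dilution

production = 1500 * 350                      # odt/a (hypothetical mill)
investment = 50_000                          # EUR, from the abstract

# Unstable operation approximated as swinging between the two extremes.
cost_unstable = 0.5 * (washing_cost_per_odt(1.7) + washing_cost_per_odt(3.6))
cost_stable = washing_cost_per_odt(2.5)      # held at the set point
savings = (cost_unstable - cost_stable) * production   # EUR/a
print(f"payback ~ {investment / savings:.2f} a")       # ~0.25 a here
```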

Relevance: 20.00%

Abstract:

This thesis studies the problems a software architect faces in his work and the reasons behind them. The purpose of the study is to identify potential factors causing problems in system integration and software engineering; of special interest are the non-technical factors. The study was carried out by interviewing professionals who took part in an e-commerce project in a corporation. The interviewed professionals consisted of architects from the technical implementation projects, the leader of the corporation's architect team, different kinds of project managers, and a CRM manager. A theme list was used to guide the interviews. The recorded interviews were transcribed and then classified using the ATLAS.ti software. The basics of e-commerce, software engineering and system integration are also described. The differences between e-commerce, e-business and traditional business are presented, as are the basic types of e-commerce. Concerning software engineering, the software life span and the general problems of software engineering and software design are covered. In addition, the general problems of system integration and the special requirements set by e-commerce are described. The thesis concludes with a description of the problems found in the study and of some areas of software engineering that could be developed so that similar problems could be avoided in the future.

Relevance: 20.00%

Abstract:

Substances emitted into the atmosphere by human activities in urban and industrial areas cause environmental problems such as air quality degradation, respiratory diseases, climate change, global warming, and stratospheric ozone depletion. Volatile organic compounds (VOCs) are major air pollutants, emitted largely by industry, transportation and households. Many VOCs are toxic, and some are considered to be carcinogenic, mutagenic, or teratogenic. A wide spectrum of VOCs is readily oxidized photocatalytically. Photocatalytic oxidation (PCO) over titanium dioxide may present a potential alternative to air treatment strategies currently in use, such as adsorption and thermal treatment, due to its advantageous activity under ambient conditions, although higher but still mild temperatures may also be applied. The objective of the present research was to disclose the routes of the chemical reactions and to estimate the kinetics and the sensitivity of gas-phase PCO to reaction conditions with respect to air pollutants containing heteroatoms in their molecules. Deactivation of the photocatalyst and restoration of its activity were also taken into consideration to assess the practical feasibility of applying PCO to the treatment of air polluted with VOCs. UV-irradiated titanium dioxide was selected as the photocatalyst for its chemical inertness, non-toxic character and low cost. In the present work the Degussa P25 TiO2 photocatalyst was mostly used; platinized TiO2 was also studied in transient experiments. Experimental research into the PCO of the following VOCs was undertaken:
- methyl tert-butyl ether (MTBE), the basic oxygenated motor fuel additive and thus a major non-biodegradable pollutant of groundwater;
- tert-butyl alcohol (TBA), the primary product of MTBE hydrolysis and PCO;
- ethyl mercaptan (ethanethiol), one of the reduced sulphur pungent air pollutants of the pulp and paper industry;
- methylamine (MA) and dimethylamine (DMA), amino compounds often emitted by various industries.
The PCO of the VOCs was studied in continuous-flow mode. The PCO of MTBE and TBA was also studied in transient mode, in which carbon dioxide, water, and acetone were identified as the main gas-phase products. The volatile products of thermal catalytic oxidation (TCO) of MTBE included 2-methyl-1-propene (2-MP), carbon monoxide, carbon dioxide and water; TBA decomposed to 2-MP and water. Continuous PCO of TBA proceeded faster in humid air than in dry air; MTBE oxidation, however, was less sensitive to humidity. The TiO2 catalyst was stable during continuous PCO of MTBE and TBA above 373 K, but gradually lost activity below 373 K; the catalyst could be regenerated by UV irradiation in the absence of gas-phase VOCs. Sulphur dioxide, carbon monoxide, carbon dioxide and water were identified as the ultimate products of the PCO of ethanethiol, with acetic acid as a by-product. The limits of ethanethiol concentration and temperature at which the reactor performance was stable for an indefinite time were established. The apparent reaction kinetics appeared to be independent of the reaction temperature within the studied limits, 373 to 453 K. The catalyst was completely and irreversibly deactivated in the TCO of ethanethiol. Volatile PCO products of MA included ammonia, nitrogen dioxide, nitrous oxide, carbon dioxide and water. Formamide was observed among the PCO products of DMA, together with products similar to those of MA. TCO of both substances resulted in the formation of ammonia, hydrogen cyanide, carbon monoxide, carbon dioxide and water. No deactivation of the photocatalyst was observed during the multiple long-run experiments at the concentrations and temperatures used in the study. The PCO of MA was also studied in the aqueous phase. Maximum efficiency was achieved in alkaline media, where MA exhibits high volatility. Two mechanisms of aqueous PCO (decomposition to formate and ammonia, and direct oxidation of the organic nitrogen to nitrite) lead ultimately to carbon dioxide, water, ammonia and nitrate; formate and nitrite were observed as intermediates. Part of the ammonia formed in the reaction was oxidized to nitrite and nitrate. This finding helped in better understanding the gas-phase PCO pathways. The PCO kinetic data for the VOCs fitted well the monomolecular Langmuir-Hinshelwood (L-H) model, whereas the TCO kinetic behaviour matched a first-order process for the volatile amines and the L-H model for the others. It should be noted that both the L-H and the first-order equations were only fits to the data, not true descriptions of the reaction kinetics. The dependence of the kinetic constants on temperature was established in the form of an Arrhenius equation.
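As a concrete illustration of the kinetic treatment, the sketch below fits the monomolecular Langmuir-Hinshelwood expression r = kKC/(1 + KC) to (concentration, rate) data and extracts an Arrhenius activation energy from two rate constants; the data points and values are synthetic placeholders, not results from the study.

```python
# Illustrative L-H fit and Arrhenius estimate with synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def lh_rate(C, k, K):
    """Monomolecular Langmuir-Hinshelwood: r = k*K*C / (1 + K*C)."""
    return k * K * C / (1.0 + K * C)

C = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # concentration (synthetic)
r = np.array([0.9, 1.5, 2.2, 2.9, 3.4])          # observed rate (synthetic)
(k, K), _ = curve_fit(lh_rate, C, r, p0=[4.0, 0.5])

# Arrhenius form k(T) = A*exp(-Ea/(R*T)); two temperatures give Ea directly.
R = 8.314
T1, k1, T2, k2 = 373.0, 1.0, 453.0, 1.6          # synthetic rate constants
Ea = R * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
print(f"k={k:.2f}, K={K:.2f}, Ea={Ea / 1000:.1f} kJ/mol")
```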

Relevance: 20.00%

Abstract:

The purpose of the study was to create an overall picture of the strategy implementation process and to bring out the strategy implementation challenges of the case company. The case company operates in the telecommunications services industry, which has been in constant change. The company launched a strategy process in 2002, as a result of which the focus of the business was shifted to service business. The implementation of the strategy did not proceed without challenges, and the company is now launching a new strategy process. In order to avoid the problems of the previous strategy implementation, the aim was to study the current level of strategy implementation in the case company and to identify areas for improvement. The study was a qualitative case study carried out through theme interviews. The results showed that strategy implementation in the case company can be improved especially by clarifying the vision and by increasing leadership and visionary thinking. Strategy implementation requires selling skills from management: a clear goal, communication, the creation of trust, and sensitivity to the feelings of the personnel. Management must do this selling work by its own example. A good vision removes resistance to change and guides toward the right decisions; without a good vision, a strategy can become impossible to implement. Strategic management as a continuous learning process provides good opportunities to recognize changes in the operating environment. The strategy process increases the visionary capability of the whole company while committing it to strategy implementation. In addition, the strategy process helps to create a flexible, learning organizational culture, which is a prerequisite for maintaining competitiveness in a changing operating environment.

Relevance: 20.00%

Abstract:

This dissertation analyses the growing pool of copyrighted works that are offered to the public under Creative Commons licensing. The study consists of an analysis of the novel licensing system, of the licensors, and of the changes to the "all rights reserved" paradigm of copyright law. Copyright law reserves all rights to the creator until seventy years have passed since her death. Many claim that this endangers communal interests. Quite often creators are willing to release some rights; this, however, is very difficult to do and requires the help of specialized lawyers. The study finds that the innovative Creative Commons licensing scheme is well suited for low-value, high-volume licensing and helps to reduce transaction costs on several levels. However, CC licensing is not a "silver bullet": privacy, moral rights, the problems of license interpretation, and license compatibility with other open licenses and with collecting societies remain unsolved. The study consists of seven chapters. The first chapter introduces the research topic and the research questions. The second and third chapters examine the technical, economic and legal aspects of the Creative Commons licensing scheme. The fourth and fifth chapters examine the incentives of licensors who use open licenses and describe certain open business models. The sixth chapter studies the role of collecting societies and whether the two institutions, Creative Commons and collecting societies, can coexist. The final chapter summarizes the findings. The dissertation contributes to the existing literature in several ways: while there is a wide range of prior research on open source licensing, there is an urgent need for an extensive study of Creative Commons licensing and its actual and potential impact on the creative ecosystem.

Relevance: 20.00%

Abstract:

The article describes some concrete problems that were encountered when writing a two-level model of Mari morphology. Mari is an agglutinative Finno-Ugric language spoken in Russia by about 600 000 people. The work was begun in the 1980s on the basis of K. Koskenniemi's Two-Level Morphology (1983), but in the latest stage R. Beesley's and L. Karttunen's Finite State Morphology (2003) was used. Many of the problems described in the article concern the inexplicitness of the rules in Mari grammars and the lack of information about the exact distribution of some suffixes, e.g. enclitics. Mari grammars usually give complete paradigms for a few unproblematic verb stems, whereas the difficult or unclear forms of certain verbs are only superficially discussed. Another example of phenomena that are poorly described in grammars is the way suffixes with an initial sibilant combine with stems ending in a sibilant. Informants and searches in electronic corpora were used to overcome such difficulties in developing the two-level model of Mari. Variation in the order of plural markers, case suffixes and possessive suffixes is a typical feature of Mari. The morphotactic rules constructed for Mari declensional forms tend to be recursive, and their productivity must be limited by some technical device, such as filters. In the present model, certain plural markers were treated like nouns. The positional and functional versatility of the possessive suffixes can be regarded as the most challenging phenomenon in attempts to formalize Mari morphology. The Cyrillic orthography used in the model also caused problems. For instance, a Cyrillic letter may represent a sequence of two sounds, the first being part of the word stem while the other belongs to a suffix. In some cases, letters for voiced consonants are also generalized to represent voiceless consonants. Such orthographic conventions distance a morphological model based on orthography from the actual (morpho)phonological processes in the language.
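The over-generation problem can be made concrete with a small sketch: a naive morphotactics permits every ordering of the plural, case and possessive slots, and a filter then admits only the attested patterns. The suffix strings, the stem and the filter condition below are schematic placeholders, not actual Mari morphology.

```python
# Schematic illustration of morphotactic over-generation and filtering.
from itertools import permutations

SLOTS = {"PL": "-vlak", "CASE": "-lan", "PX": "-em"}   # schematic suffixes

def candidate_orders():
    """Naive morphotactics: every ordering of the three suffix slots."""
    return permutations(SLOTS)

def filter_ok(order):
    """Stand-in for the model's filter: allow only the attested patterns,
    here (arbitrarily) 'the plural marker is never word-final'."""
    return order.index("PL") != len(order) - 1

stem = "pört"                                          # schematic stem
for order in candidate_orders():
    if filter_ok(order):
        print(stem + "".join(SLOTS[s] for s in order))
```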

Relevance: 20.00%

Abstract:

This Master's thesis studies the development of welding processes. The beginning of the literature part describes the present state and future of welding and the welding industry in Finland. The review of advanced welding processes is divided into welding processes for carbon steels and for aluminium. For carbon steels, the thesis presents friction stir welding, modified short-arc welding, laser welding, laser hybrid welding and narrow-gap welding. For aluminium, the thesis presents laser welding, modified short-arc welding, friction stir welding and alternating-current MIG welding. The experimental part of the thesis verified the development of the welding processes. In the first welding tests, marine-grade aluminium was welded with different arc types, comparing pulsed MIG welding, wire-pulse welding and CMT arc welding. The welding tests showed that CMT welding produces smaller welding distortions than pulsed MIG welding. In CMT welding, the oxide layer of aluminium causes fewer problems than in pulsed MIG welding, since the arc ignites more reliably even at high welding speeds and no pores form in the weld. At a welding speed of 40 cm/min, no pores formed in the butt welds of water-cut aluminium pieces welded with wire-pulse and pulsed MIG welding; based on this test, the oxide layer had no effect on the success of the weld. The second part of the welding tests examined the fibre laser welding of a carbon-manganese steel T-beam. With a laser power of five kilowatts, five-metre-long T-beams were welded successfully at a welding speed of 2 m/min. Tachymeter measurement and Tritop 3D coordinate measurement verified that the welding distortions of the laser-welded T-beam were considerably smaller than those of a T-beam welded with twin-wire submerged arc welding.