23 results for Modified Information Criteria
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The study of business cycle co-movement is one of the oldest research areas in economics. The financial crisis and the economic difficulties faced by the euro area have, however, made the topic highly current once again. Over the past twenty years the field has grown very broad, with numerous perspectives and debates. The subject of this thesis is the international co-movement of Finland's business cycles with selected comparison countries. The comparison countries are Sweden, Norway, Denmark, Germany, France, the United Kingdom and the United States. The business cycle variables selected for the thesis are real gross domestic product, total private consumption and the industrial production index. The data were collected from the OECD iLibrary database via the Nelli portal of the Lappeenranta academic library and cover the period 1960 Q1 to 2014 Q4. Each country's business cycle is operationalized by computing the first logarithmic difference, which represents the traditional real business cycle school's view of the business cycle. The thesis adopts a single-country perspective, which is somewhat rarer than the broader regional perspectives. The research methods are the Pearson correlation coefficient, the Engle-Granger and Johansen cointegration tests, and dynamic correlation computed with a VAR-GARCH-BEKK model, all calculated pairwise between Finland and each comparison country. The results are interpreted from the viewpoint of a Finnish company planning exports to the comparison countries. Based on the results, contemporaneous cointegration between Finland and the comparison countries, as computed with the Engle-Granger method, is unlikely. When cointegration is also allowed to depend on lags, the Johansen method finds cointegration between Finland and the United States in real gross domestic product; between Finland and Germany, Finland and France, and Finland and the United States in total private consumption; and between Finland and Norway in the industrial production index. Interpretation of the results is complicated by their model dependence and by the diverging model recommendations of the information criteria, so cointegration is possible for other country pairs as well. The dynamic correlation plots show that the strength of co-movement between country pairs changes over time. During the financial crisis a higher correlation is observable in aggregate output, but the correlation afterwards returns to its baseline level. The correlation of total consumption is lower than that of aggregate output and varies over longer periods.
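For readers who want to see the first methodological step concretely, the sketch below shows the log-difference transformation and a pairwise Engle-Granger cointegration test in Python using statsmodels; the file name and country column labels are hypothetical placeholders, not the thesis's actual data layout.

```python
# Sketch: log-difference two quarterly series and test the pair for
# cointegration with the Engle-Granger method (statsmodels).
# File name and column labels are hypothetical placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import coint

df = pd.read_csv("oecd_gdp_quarterly.csv", index_col=0)   # hypothetical export
log_fin, log_usa = np.log(df["FIN"]), np.log(df["USA"])

# Business cycle proxy: first logarithmic difference (quarterly growth rate).
d_fin, d_usa = log_fin.diff().dropna(), log_usa.diff().dropna()
print("Pearson correlation of growth rates:", d_fin.corr(d_usa))

# Engle-Granger test on the log levels; H0: no cointegration.
t_stat, p_value, _ = coint(log_fin, log_usa)
print(f"Engle-Granger t = {t_stat:.2f}, p = {p_value:.3f}")
```

The lag-dependent Johansen test mentioned in the abstract is available analogously via statsmodels.tsa.vector_ar.vecm.coint_johansen.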
Abstract:
This Master's thesis investigates three different flow problems using CFD modelling. Common to these problems is that the flowing medium is air; in addition, conventional measurement of these cases is very difficult or impossible. The first research problem is a dryer for a label paper web whose production rate is to be doubled. This requires doubling the drying power, because the residence time of the web in the drying zone is halved. Analytical equations and CFD modelling are used to study how changes in the velocity and temperature of the impingement jet affect the heat and mass transfer coefficients at the web surface. The result is a set of dependence curves between the varied quantities and the mass and heat transfer coefficients, on the basis of which the dryer can be adjusted in the best possible way. The second problem concerns optimizing the capture efficiency of the secondary hood of a copper converter under design. Without improvement measures, most of the emissions of the tilted converter escape past the secondary hood. The situation is studied by CFD modelling of the convective buoyant flow generated in the converter, i.e. the emission plume, and of various blowing jet arrangements. The result is an air curtain, formed by blowing jets, that deflects the emission plume: most of the rising plume is entrained into the air curtain and carried into the exhaust duct. The third case is a copper electrolysis hall under design, whose ventilation concept combines natural ventilation with a mechanical acid mist collection system. The efficiency of the ventilation system and the indoor air flows are to be determined before the hall is built. CFD modelling and analytical equations are used to study the temperature and flow fields as well as the air flow rate through the hall and the air exchange rate. The design values for the sizing and placement of the supply and exhaust air openings are verified, and the problem areas of the ventilation are identified, examined, and given improvement proposals.
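Analytical equations of the kind mentioned for the dryer case typically couple a Nusselt-number correlation with the Chilton-Colburn analogy to convert heat transfer coefficients into mass transfer coefficients. The sketch below illustrates that conversion under assumed air properties and placeholder correlation constants (C, m); it is not the thesis's actual correlation.

```python
# Sketch: estimate heat and mass transfer coefficients at the web surface
# for an impingement air jet, using a generic correlation
# Nu = C * Re^m * Pr^(1/3) and the Chilton-Colburn analogy Sh = Nu*(Sc/Pr)^(1/3).
# C and m are placeholders; real values depend on nozzle geometry.

def transfer_coefficients(u, L, nu=18e-6, k=0.031, D_ab=3.0e-5,
                          Pr=0.7, Sc=0.6, C=0.2, m=0.7):
    """u: jet velocity [m/s], L: characteristic length [m].
    Air properties (nu, k, D_ab) are rough values near drying temperature."""
    Re = u * L / nu
    Nu = C * Re**m * Pr**(1/3)
    Sh = Nu * (Sc / Pr)**(1/3)          # Chilton-Colburn analogy
    h = Nu * k / L                      # heat transfer coefficient [W/m2K]
    k_m = Sh * D_ab / L                 # mass transfer coefficient [m/s]
    return h, k_m

for u in (40.0, 80.0):                  # e.g. doubling the jet velocity
    h, k_m = transfer_coefficients(u, L=0.02)
    print(f"u = {u:4.0f} m/s: h = {h:6.1f} W/m2K, k_m = {k_m:.4f} m/s")
```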
Abstract:
The purpose of this thesis is to analyse activity-based costing (ABC) and possible modified versions of it in an engineering design context. Design engineers need cost information at their decision-making level, and the cost information should also have a strong future orientation. These demands are high because traditional management accounting has concentrated on the direct actual costs of products. However, cost accounting has progressed: ABC was introduced in the late 1980s and adopted widely by companies in the 1990s. ABC has been a success, but it has also attracted criticism; in some cases ambitious ABC systems have become too complex to build, use and update. This study can be called an action-oriented case study with some normative features. In this thesis theoretical concepts are assessed and allowed to unfold gradually through interaction with data from three cases. The theoretical starting points are ABC and the theory of the engineering design process (chapter 2). Concepts and research results from these theoretical approaches are summarized in two hypotheses (chapter 2.3). The hypotheses are analysed with two cases (chapter 3). After the two case analyses, the ABC part is extended to cover also other modern cost accounting methods, e.g. process costing and feature costing (chapter 4.1). The ideas from this second theoretical part are operationalized with the third case (chapter 4.2). The knowledge from the theory and the three cases is summarized in the created framework (chapter 4.3). With the created framework it is possible to analyse ABC and its modifications in the engineering design context. The framework collects the factors that guide the choice of the costing method to be used in engineering design. It also illuminates the contents of various ABC-related costing methods. However, the framework needs to be tested further. On the basis of the three cases it can be said that ABC should be used cautiously when formulating cost information for engineering design. It is suitable when the manufacturing can be considered simple, or when the design engineers are not cost conscious, and in the beginning of the design process when doing adaptive or variant design. If the design engineers need cost information for the embodiment or detailed design, or if manufacturing can be considered complex, or when the design engineers are cost conscious, ABC always has to be evaluated critically.
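As a minimal illustration of the ABC mechanics discussed above - activity cost pools, driver rates, and assignment of cost to a design alternative - consider the following sketch; all pools, drivers and figures are invented for illustration.

```python
# Sketch: the core mechanics of activity-based costing (ABC).
# Activity cost pools divided by driver volumes give driver rates,
# which are then applied to a product's driver consumption.
# All pools, drivers and quantities below are invented.

cost_pools = {          # activity: (annual cost [eur], annual driver volume)
    "machining":  (500_000, 10_000),   # machine hours
    "setups":     (120_000, 400),      # number of setups
    "inspection": (80_000, 2_000),     # inspection hours
}
rates = {a: cost / volume for a, (cost, volume) in cost_pools.items()}

# Driver consumption of one hypothetical design variant, per unit:
design_variant = {"machining": 2.5, "setups": 1, "inspection": 0.5}
unit_cost = sum(rates[a] * q for a, q in design_variant.items())
print(f"Estimated activity-based cost per unit: {unit_cost:.2f} eur")
```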
Abstract:
The patent system was created for the purpose of promoting innovation by granting inventors a legally defined right to exclude others in return for public disclosure. Today, patents are being applied for and granted in greater numbers than ever, particularly in new areas such as biotechnology and information and communications technology (ICT), in which research and development (R&D) investments are also high. At the same time, the patent system has been heavily criticized. It has been claimed that it discourages rather than encourages the introduction of new products and processes, particularly in areas that develop quickly, lack a one-product-one-patent correlation, and in which the emergence of patent thickets is characteristic. A further concern, which is particularly acute in the U.S., is the granting of so-called 'bad patents', i.e. patents that do not factually fulfil the patentability criteria. From the perspective of technology-intensive companies, patents could, irrespective of the above, be described as the most significant intellectual property right (IPR), having the potential of being used to protect products and processes from imitation, to limit competitors' freedom-to-operate, to provide such freedom to the company in question, and to exchange ideas with others. In fact, patents define the boundaries of ownership in relation to certain technologies. They may be sold or licensed on their own, or they may be components of all sorts of technology acquisition and licensing arrangements. Moreover, with the possibility of patenting business-method inventions in the U.S., patents are becoming increasingly important for companies basing their businesses on services. The value of a patent depends on the value of the invention it claims and on how it is commercialized. Thus, most patents are worth very little, and most inventions are not worth patenting: it may be possible to protect them in other ways, and the costs of protection may exceed the benefits. Moreover, instead of making all inventions proprietary and seeking to appropriate as high returns on investments as possible through patent enforcement, it is sometimes better to allow some of them to be disseminated freely in order to maximize market penetration. In fact, the ideology of openness is well established in the software sector, which has been the breeding ground for the open-source movement, for instance. Furthermore, industries, such as ICT, that benefit from network effects do not shun the idea of setting open standards or opening up their proprietary interfaces to allow everyone to design products and services that are interoperable with theirs. The problem is that even though patents do not, strictly speaking, prevent access to protected technologies, they have the potential of doing so, and conflicts of interest are not rare. The primary aim of this dissertation is to increase understanding of the dynamics and controversies of the U.S. and European patent systems, with the focus on the ICT sector. The study consists of three parts. The first part introduces the research topic and the overall results of the dissertation. The second part comprises a publication in which academic, political, legal and business developments that concern software and business-method patents are investigated, and contentious areas are identified. The third part examines the problems with patents and open standards, both of which carry significant economic weight in the ICT sector. Here, the focus is on so-called submarine patents, i.e. patents that remain unnoticed during the standardization process and then emerge after the standard has been set. The factors that contribute to the problems are documented, and the practical and juridical options for alleviating them are assessed. In total, the dissertation provides a good overview of the challenges and pressures for change the patent system is facing, and of how these challenges are reflected in standard setting.
Abstract:
Fatigue life assessment of welded structures is commonly based on the nominal stress method, but more flexible and accurate methods have been introduced. In general, the assessment accuracy improves as more localized information about the weld is incorporated. The structural hot spot stress method includes the influence of macro-geometric effects and structural discontinuities on the design stress but excludes the local features of the weld. In this thesis, the limitations of the structural hot spot stress method are discussed, and a modified structural stress method with improved accuracy is developed and verified for selected welded details. The fatigue life of structures in the as-welded state consists mainly of crack growth from pre-existing cracks or defects. The crack growth rate depends on the crack geometry and the stress state on the crack face plane. This means that the stress level and the shape of the stress distribution along the assumed crack path govern the total fatigue life. In many structural details the stress distribution is similar, and adequate fatigue life estimates can be obtained simply by adjusting the stress level based on a single stress value, i.e., the structural hot spot stress. There are, however, cases for which the structural stress approach is less appropriate because the stress distribution differs significantly from the more common cases. Plate edge attachments and plates on elastic foundations are examples of structures with this type of stress distribution. The influence of fillet weld size and weld load variation on the stress distribution is another central topic in this thesis. Structural hot spot stress determination is generally based on a procedure that involves extrapolation of plate surface stresses. Other possibilities for determining the structural hot spot stress are to extrapolate stresses through the thickness at the weld toe or to use Dong's method, which includes through-thickness extrapolation at some distance from the weld toe. Both of these latter methods are less sensitive to the FE mesh used. Structural stress based on surface extrapolation is sensitive to the extrapolation points selected and to the FE mesh used near these points; rules for proper meshing, however, are well defined and not difficult to apply. To improve the accuracy of the traditional structural hot spot stress, a multi-linear stress distribution is introduced. The magnitude of the weld toe stress after linearization depends on the weld size, weld load and plate thickness. Simple equations have been derived by comparing assessment results based on the local linear stress distribution with LEFM-based calculations. The proposed method is called the modified structural stress method (MSHS), since the structural hot spot stress (SHS) value is corrected using information on weld size and weld load. The correction procedure is verified using fatigue test results found in the literature. A test case comparing the proposed method with other local fatigue assessment methods was also conducted.
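For context, the surface-extrapolation procedure that the thesis takes as its starting point is commonly implemented as a two-point linear extrapolation of FE surface stresses, e.g. at 0.4t and 1.0t from the weld toe as in the IIW recommendations. A minimal sketch with hypothetical stress values:

```python
# Sketch: structural hot spot stress by linear surface-stress extrapolation.
# A common two-point rule (e.g. in the IIW recommendations) evaluates the
# plate-surface stress at 0.4*t and 1.0*t from the weld toe (t = plate
# thickness) and extrapolates linearly to the toe:
#     sigma_hs = 1.67 * sigma(0.4t) - 0.67 * sigma(1.0t)
# The stresses would come from a suitably meshed FE model.

def hot_spot_stress(sigma_04t: float, sigma_10t: float) -> float:
    """Linear extrapolation of plate-surface stress to the weld toe."""
    return 1.67 * sigma_04t - 0.67 * sigma_10t

# Hypothetical FE surface stresses [MPa] at 0.4t and 1.0t from the toe:
print(hot_spot_stress(210.0, 185.0))   # -> 226.75 MPa at the weld toe
```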
Abstract:
1. Introduction

"The one that has compiled ... a database, the collection, securing the validity or presentation of which has required an essential investment, has the sole right to control the content over the whole work or over either a qualitatively or quantitatively substantial part of the work both by means of reproduction and by making them available to the public", Finnish Copyright Act, section 49.1 These are the laconic words that implemented the much-awaited and hotly debated European Community Directive on the legal protection of databases,2 the EDD, into Finnish copyright legislation in 1998. Now, in the year 2005, after more than half a decade of domestic implementation, the proper meaning and construction of the convoluted qualitative criteria that the current legislation employs as a prerequisite for database protection remain uncertain both in Finland and within the European Union. Further, this opaque pan-European instrument has the potential of bringing about a number of far-reaching economic and cultural ramifications, which have remained largely uncharted or unobserved. Thus the task of understanding this particular and currently peculiarly European new intellectual property regime is twofold: first, to understand the mechanics and functioning of the EDD, and second, to realise the potential and risks inherent in the new legislation in its economic, cultural and societal dimensions.

2. Subject-matter of the study: basic issues

The first part of the task mentioned above is straightforward: questions such as what is meant by the key concepts triggering the functioning of the EDD, such as the presentation of independent information, what constitutes an essential investment in acquiring data, and when the reproduction of a given database reaches, either qualitatively or quantitatively, the threshold of substantiality before the right-holder of a database can avail himself of the remedies provided by the statutory framework, remain unclear and call for careful analysis. As for the second task, it is already obvious that the practical importance of the legal protection provided by the database right is increasing rapidly. The accelerating transformation of information into digital form is an existing fact, not merely a reflection of the shape of things to come. To take a simple example, the digitisation of a map, traditionally in paper format and protected by copyright, can provide the consumer markedly easier and faster access to the wanted material, and the price can be, depending on the current state of the marketplace, cheaper than that of the traditional form, or even free where public lending libraries provide access to the information online. This also makes it possible for authors and publishers to make available and sell their products to markedly larger, international markets, while production and distribution costs can be kept at a minimum thanks to new electronic production, marketing and distribution mechanisms, to mention a few. The troublesome side for authors and publishers is the vastly enhanced potential for illegal copying by electronic means, producing numerous virtually identical copies at speed. The fear of illegal copying can lead to stark technical protection that in turn can dampen demand for information goods and services and, furthermore, efficiently hamper the right of access to materials lawfully available in electronic form, and thus weaken the possibility of access to information, education and the cultural heritage of a nation or nations, a condition precedent for a functioning democracy.

3. Particular issues in the digital economy and information networks

All that is said above applies a fortiori to databases. As a result of the ubiquity of the Internet and the pending breakthrough of the mobile Internet, peer-to-peer networks and local and wide area networks, a rapidly increasing amount of information not protected by traditional copyright, such as various lists, catalogues and tables,3 previously protected partially by the old section 49 of the Finnish Copyright Act, is available free or for consideration on the Internet; by the same token, and importantly, numerous databases are compiled in order to enable the marketing, tendering and selling of products and services in the above-mentioned networks. Databases and the information embedded therein constitute a pivotal element in virtually any commercial operation, including product and service development, scientific research and education. A poignant but not instantaneously obvious example of this is a database consisting of the physical coordinates of a certain selected group of customers for marketing purposes through cellular phones, laptops and various handheld or vehicle-based devices connected online. These practical needs call for answers to a plethora of questions already outlined above: has the collection and securing of the validity of this information required an essential input? What qualifies as a quantitatively or qualitatively significant investment? According to the Directive, a database comprises works, information and other independent materials which are arranged in a systematic or methodical way and are individually accessible by electronic or other means. Under what circumstances, then, are materials regarded as arranged in a systematic or methodical way? Only when the protected elements of a database are established does the question concerning the scope of protection become acute. In the digital context, the traditional notions of reproduction and making available to the public of digital materials seem to fit ill, or lead to interpretations that are at variance with the analogue domain as regards the lawful and illegal uses of information. This may well interfere with or rework the way in which commercial and other operators have to establish themselves and function in the existing value networks of information products and services.

4. International sphere

After the expiry of the implementation period for the European Community Directive on the legal protection of databases, the goals of the Directive must have been consolidated into the domestic legislation of the current twenty-five Member States of the European Union. On the one hand, these fundamental questions readily imply that the problems related to the correct construction of the Directive underlying the domestic legislation transcend national boundaries. On the other hand, the disputes arising on account of the implementation and interpretation of the Directive at the European level attract significance domestically. Consequently, guidelines on the correct interpretation of the Directive importing practical, business-oriented solutions may well have application at the European level. This underlines the exigency for a thorough analysis of the implications of the meaning and potential scope of database protection in Finland and the European Union. This position has to be contrasted with the larger, international sphere, which in early 2005 differs markedly from the European Union stance, with a direct negative effect on international trade, particularly in digital content. A particular case in point is the USA, a database producer primus inter pares, which does not, at least yet, have a sui generis database regime or its kin, while both political and academic discourse on the matter abounds.

5. The objectives of the study

The above-mentioned background, with its several open issues, calls for a detailed study of the following questions:
- What is a database-at-law, and when is a database protected by intellectual property rights, particularly by the European database regime? What is the international situation?
- How is a database protected, and what is its relation to other intellectual property regimes, particularly in the digital context?
- What opportunities and threats does the current protection present to creators, users and society as a whole, including the commercial and cultural implications?
- The difficult question of the relation between database protection and the protection of factual information as such.

6. Disposition

The study, in purporting to analyse and cast light on the questions above, is divided into three main parts. The first part introduces the political and rational background and the subsequent legislative evolution of European database protection, reflected against the international backdrop on the issue. An introduction to databases, originally a vehicle of modern computing and information and communication technology, is also incorporated. The second part sets out the chosen and existing two-tier model of database protection, reviewing both its copyright and sui generis right facets in detail, together with the emergent application of the machinery in real-life societal and particularly commercial contexts. Furthermore, a general outline of copyright, relevant in the context of copyright databases, is provided. For purposes of further comparison, a chapter on the Nordic catalogue rule, the precursor of the sui generis database right, also ensues. The third and final part analyses the positive and negative impacts of the database protection system and attempts to scrutinize the implications further into the future, with some caveats and tentative recommendations, in particular as regards the convoluted issue concerning the IPR protection of information per se, a new tenet in the domain of copyright and related rights.
Abstract:
This work presents a new way of providing location-dependent information to users of wireless networks. The information is delivered to each user without knowing anything about the user's identity. HTTP was chosen as the application-level protocol, which allows the system to deliver information to most users, whose terminal devices may differ widely. The system operates as an extension to an intercepting web traffic proxy server. Based on the contents of various databases, the system decides whether or not information is delivered. The system also includes simple software for locating users with the accuracy of a single access point. Although the presented solution is aimed at providing location-based advertisements, it can easily be adapted to deliver any type of information to users.
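A minimal sketch of the decision step described above - an intercepting-proxy extension choosing location-dependent content based on the access point a client is associated with, without consulting user identity - might look as follows; the lookup table and identifiers are hypothetical:

```python
# Sketch: decision step of an intercepting-proxy extension that serves
# location-dependent content based only on the client's access point.
# The lookup table and identifiers are hypothetical.
from typing import Optional

ads_by_access_point = {             # access point id -> targeted message
    "ap-cafeteria": "Lunch offer today at the cafeteria!",
    "ap-library":   "Extended opening hours this week.",
}

def content_for_client(access_point_id: str) -> Optional[str]:
    """Return location-dependent content, or None to pass the page through.
    No user identity is consulted: only the serving access point."""
    return ads_by_access_point.get(access_point_id)

print(content_for_client("ap-cafeteria"))   # targeted content injected
print(content_for_client("ap-unknown"))     # None -> deliver page unmodified
```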
Abstract:
Quality is strengthening its position in business as companies compete in international markets on both price and quality. This trend has given rise to several quality programmes that are widely used in implementing companies' total quality management (TQM). Quality management covers all of a company's functions and also imposes requirements on developing and improving the company's support functions. These include the subject of this study, information management (IT). The goal of the thesis was to describe the current state of the IT process. The process description produced in the thesis is based on process management theory and on the quality award criteria used by the target company. Theme interviews were used as the research method for determining the current state of the process. Customers of the IT process were interviewed to determine the current state of the process and the requirements set for it. The process analysis, identification of the most important sub-processes and discovery of improvement areas are the central results of this thesis. The thesis focused on finding weaknesses and improvement targets in the IT process as a basis for continuous improvement, rather than on radically redesigning the process. The thesis presents the principles of TQM, quality tools, and the terminology, principles and systematic implementation of process management. The work also gives a picture of how TQM and process management mesh in a company's quality work.
Abstract:
The flow of information within modern information society has increased rapidly over the last decade. The major part of this information flow relies on the individual's ability to handle text or speech input. For the majority of us this presents no problems, but there are some individuals who would benefit from other means of conveying information, e.g. a signed information flow. During the last decades, new results from various disciplines have all pointed towards a common background and processing for sign and speech, and this was one of the key issues that I wanted to investigate further in this thesis. The basis of this thesis is firmly within speech research, and that is why I wanted to design, for signers, test batteries analogous to widely used speech perception tests - to find out whether the results for signers would be the same as in speakers' perception tests. One of the key findings within biology - and more precisely its effects on speech and communication research - is the mirror neuron system. That finding has enabled us to form new theories about the evolution of communication, and it all seems to converge on the hypothesis that all communication has a common core within humans. In this thesis speech and sign are discussed as equal and analogical counterparts of communication, and all research methods used in speech are modified for sign. Both speech and sign are thus investigated using similar test batteries. Furthermore, both production and perception of speech and sign are studied separately. An additional framework for studying production is given by gesture research using cry sounds. Results of cry sound research are then compared to results from children acquiring sign language. These results show that individuality manifests itself from very early on in human development. Articulation in adults, both in speech and sign, is studied from two perspectives: normal production and re-learning production when the apparatus has been changed. Normal production is studied both in speech and sign, and the effects of changed articulation are studied with regard to speech. Both these studies are done using carrier sentences. Furthermore, sign production is studied by giving the informants the possibility for spontaneous signing. The production data from the signing informants is also used as the basis for the input to the sign synthesis stimuli used in the sign perception test battery. Speech and sign perception were studied using the informants' answers to questions using forced choice in identification and discrimination tasks. These answers were then compared across language modalities. Three different informant groups participated in the sign perception tests: native signers, sign language interpreters and Finnish adults with no knowledge of any signed language. This gave a chance to investigate which of the characteristics found in the results were due to the language per se and which were due to the change in modality itself. As the analogous test batteries yielded similar results over different informant groups, some common threads could be observed. Starting from very early on in acquiring speech and sign, the results were highly individual. However, the results were the same within one individual when the same test was repeated. This individuality of results manifested along the same patterns across different language modalities and, on some occasions, across language groups.
As both modalities yield similar answers to analogous study questions, this has led us to providing methods for basic input for sign language applications, i.e. signing avatars. This has also given us answers to questions on the precision of the animation and its intelligibility for the users - what are the parameters that govern the intelligibility of synthesised speech or sign, and how precise must the animation or synthetic speech be in order for it to be intelligible. The results also give additional support to the well-known fact that intelligibility is in fact not the same as naturalness. In some cases, as shown within the sign perception test battery design, naturalness decreases intelligibility. This also has to be taken into consideration when designing applications. All in all, the results from each of the test batteries, be they for signers or speakers, yield strikingly similar patterns, which lends yet further support to the common core for all human communication. Thus, we can modify and deepen the phonetic framework models for human communication based on the knowledge obtained from the results of the test batteries within this thesis.
Abstract:
Over the past decade, organizations worldwide have begun to widely adopt agile software development practices, which offer greater flexibility to frequently changing business requirements, better cost effectiveness due to the minimization of waste, faster time-to-market, and closer collaboration between business and IT. At the same time, IT services are increasingly being outsourced to third parties, providing organizations with the ability to focus on their core capabilities as well as to take advantage of better demand scalability, access to specialized skills, and cost benefits. An output-based pricing model, where the customer pays directly for the functionality that was delivered rather than the effort spent, is quickly becoming a new trend in IT outsourcing, allowing the risk to be transferred away from the customer while at the same time offering much better incentives for the supplier to optimize processes and improve efficiency, consequently producing a true win-win outcome. Despite the widespread adoption of both agile practices and output-based outsourcing, there is little formal research available on how the two can be effectively combined in practice. Moreover, little practical guidance exists on how companies can measure the performance of their agile projects that are delivered in an output-based outsourced environment. This research attempted to shed light on this issue by developing a practical project monitoring framework which may be readily applied by organizations to monitor the performance of agile projects in an output-based outsourcing context, thus taking advantage of the combined benefits of such an arrangement. Modified from the action research approach, this research was divided into two cycles, each consisting of the Identification, Analysis, Verification, and Conclusion phases. During Cycle 1, a list of six Key Performance Indicators (KPIs) was proposed and accepted by the professionals in the studied multinational organization; this list formed the core of the proposed framework and answered the first research sub-question of what needs to be measured. In Cycle 2, a more in-depth analysis was provided for each of the suggested Key Performance Indicators, including the techniques for capturing, calculating, and evaluating the information provided by each KPI. In the course of Cycle 2, the second research sub-question was answered, clarifying how the data for each KPI needed to be measured, interpreted, and acted upon. Consequently, after two incremental research cycles, the primary research question was answered, describing the practical framework that may be used for monitoring the performance of agile IT projects delivered in an output-based outsourcing context. This framework was evaluated by the professionals within the context of the studied organization and received positive feedback across all four evaluation criteria set forth in this research: the low overhead of data collection, the high value of the provided information, the ease of understanding the metric dashboard, and the high generalizability of the proposed framework.
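Since the abstract does not name the six KPIs, the sketch below only illustrates the general shape such a metric dashboard could take - generic agile metrics with targets and an on-track flag - and should not be read as the study's actual indicator list.

```python
# Sketch: a minimal metric-dashboard structure for an agile project in an
# output-based outsourcing setting. The six KPIs from the study are not
# named in the abstract; the metrics below are generic agile examples.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    value: float
    target: float
    higher_is_better: bool = True

    @property
    def on_track(self) -> bool:
        # Compare against the target in the direction that counts as "good".
        if self.higher_is_better:
            return self.value >= self.target
        return self.value <= self.target

sprint_kpis = [
    KPI("velocity (story points / sprint)", 42, 40),
    KPI("delivered vs. committed scope (%)", 93, 90),
    KPI("escaped defects per release", 0.8, 1.0, higher_is_better=False),
]
for kpi in sprint_kpis:
    print(f"{kpi.name:40s} {'OK' if kpi.on_track else 'AT RISK'}")
```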
Abstract:
Metadata, in increasing levels of sophistication, has been the most powerful concept used in the management of unstructured information ever since the first librarian used the Dewey decimal system for library classification. It remains to be seen, however, what the best approach is to implementing metadata to manage huge volumes of unstructured information in a large organization. Also, once implemented, how is it possible to track whether it is adding value to the company, and whether the implementation has been successful? Existing literature on metadata seems either to focus too much on technical and quality aspects or to describe issues with respect to adoption in general information management initiatives. This research, therefore, strives to fill these gaps: to give a consolidated framework for understanding the value added by implementing metadata. The basic methodology used is that of a case study, which incorporates aspects of design science, surveys, and interviews in order to provide a holistic approach to quantitative and qualitative analysis of the case. The research identifies the various approaches to implementing metadata, particularly studying the one followed by the unit of analysis of the case study, a large company in the oil and gas sector. Of the three approaches identified, the selected company already follows one that appears to be superior. The researcher further explores its shortcomings and proposes a slightly modified approach that can handle them. The research categorically and thoroughly (in context) identifies the top effectiveness criteria and the corresponding key performance indicators (KPIs) that can be measured to understand the level of advancement of the metadata management initiative in the company. In an effort to contrast the findings and provide a basis of comparison, the research also includes views from information managers dealing with core structured data stored in ERPs and other databases. In addition, the results include the basic criteria that can be used to evaluate metrics in order to classify a metric as a KPI.
Abstract:
Ceramides comprise a class of sphingolipids that exist only in small amounts in cellular membranes, but which have been associated with important roles in cellular signaling processes. The influences that ceramides have on the physical properties of bilayer membranes range from altered thermodynamic behavior to significant impacts on the molecular order and lateral distribution of membrane lipids. Along with the idea that the membrane physical state could influence the physiological state of a cell, the membrane properties of ceramides have gained increasing interest. Therefore, membrane phenomena related to ceramides have become a subject of intense study both in cellular as well as in artificial membranes. Artificial bilayers, the so-called model membranes, are substantially simpler in terms of contents and spatio-temporal variation than actual cellular membranes, and can be used to give detailed information about the properties of individual lipid species in different environments. This thesis focuses on investigating how the different parts of the ceramide molecule, i.e., the N-linked acyl chain, the long-chain sphingoid base and the membrane-water interface region, govern the interactions and lateral distribution of these lipids in bilayer membranes. With the emphasis on ceramide/sphingomyelin (SM) interactions, the relevance of the size of the SM head group for the interaction was also studied. Ceramides with methyl-branched N-linked acyl chains, varying-length sphingoid bases, or methylated 2N (amide-nitrogen) and 3O (C3-hydroxyl) at the interface region, as well as SMs with decreased head group size, were synthesized and their bilayer properties studied by calorimetric and fluorescence spectroscopic techniques. In brief, the results showed that the packing of the ceramide acyl chains was more sensitive to methyl-branching in the mid part than in the distal end of the N-linked chain, and that disrupting the interfacial structure at the amide-nitrogen, as opposed to the C3-hydroxyl, had a greater effect on the interlipid interactions of ceramides. Interestingly, it appeared that the bilayer properties of ceramides could be more sensitive to small alterations in the length of the long-chain base than what was previously reported for the N-linked acyl chain. Furthermore, the data indicated that the SM head group does not strongly influence the interactions between SMs and ceramides. The results in this thesis illustrate the pivotal role of some essential parts of the ceramide molecules in determining their bilayer properties. The thesis provides increased understanding of the molecular aspects of ceramides that possibly affect their functions in biological membranes, and could relate to distinct effects on cell physiology.
Abstract:
Inorganic-organic sol-gel hybrid coatings can be used for improving and modifying the properties of wood-based materials. By selecting a proper precursor, wood can be made water repellent, decay-, moisture- or UV-resistant. However, to control the barrier properties of sol-gel coatings on wood substrates against moisture uptake and weathering, an understanding of the surface morphology and chemistry of the deposited sol-gel coatings on wood substrates is needed. Mechanical pulp is used in the production of wood-containing printing papers. The physical and chemical fiber surface characteristics, as created in the chosen mechanical pulp manufacturing process, play a key role in controlling the properties of the end-use product. A detailed understanding of how process parameters influence fiber surfaces can help improve the cost-effectiveness of pulp and paper production. The current work focuses on the physico-chemical characterization of modified wood-based materials with surface-sensitive analytical tools. The overall objectives were, through advanced microscopy and chemical analysis techniques, (i) to collect versatile information about the surface structures of Norway spruce thermomechanical pulp fiber walls and understand how they are influenced by the selected chemical treatments, and (ii) to clarify the effect of various sol-gel coatings on the surface structural and chemical properties of wood-based substrates. A special emphasis was on understanding the effect of sol-gel coatings on the water repellency of modified wood and paper surfaces. In the first part of the work, the effects of chemical treatment on the micro- and nano-scale surface structure of 1st stage TMP latewood fibers from Norway spruce were investigated. The chemicals applied were buffered sodium oxalate and hydrochloric acid. The outer and the inner fiber wall layers of the untreated and chemically treated fibers were separately analyzed by light microscopy, atomic force microscopy and field-emission scanning electron microscopy. The selected characterization methods enabled the demonstration of the effect of the different treatments on the fiber surface structure, both visually and quantitatively. The outer fiber wall areas appeared as intact bands surrounding the fiber, and they were clearly rougher than the areas of exposed inner fiber wall. The roughness of the outer fiber wall areas increased most in the sodium oxalate treatment. The results indicated the formation of more surface pores on the exposed inner fiber wall areas than on the corresponding outer fiber wall areas as a result of the chemical treatments. The hydrochloric acid treatment seemed to increase the surface porosity of the inner wall areas. In the second part of the work, three silane-based sol-gel hybrid coatings were selected in order to improve the moisture resistance of wood and paper substrates. The coatings differed from each other in terms of having different alkyl (CH3–, CH3-(CH2)7–) and fluorocarbon (CF3–) chains attached to the trialkoxysilane sol-gel precursor. The sol-gel coatings were deposited by a wet coating method, i.e. spraying or spreading by brush. The effect of sol-gel coatings on the surface structural and chemical properties of wood-based substrates was studied using advanced surface analysis tools: atomic force microscopy, X-ray photoelectron spectroscopy and time-of-flight secondary ion mass spectrometry.
The results show that the applied sol-gel coatings, deposited as thin films or particulate coatings, have different effects on surface characteristics of wood and wood-based materials. The coating which has a long hydrocarbon chain (CH3-(CH2)7–) attached to the silane backbone (octyltriethoxysilane) produced the highest hydrophobicity for wood and wood-based materials.
Abstract:
Bio-ethanol has been used as a fuel additive in modern society, aimed at reducing CO2 emissions and dependence on oil. However, ethanol is unsuitable as a fuel supplement in higher proportions due to its physico-chemical properties. One option to counteract the negative effects is to upgrade ethanol in a continuous fixed bed reactor to more valuable C4 products, such as 1-butanol, providing chemical similarity with traditional gasoline components. Bio-ethanol-based valorization products also have other end uses than just fuel additives; for example, 1-butanol and ethyl acetate are well-characterised industrial solvents and platform chemicals providing greener alternatives. The modern approach is to apply heterogeneous catalysts in the investigated reactions. The research concentrated on aluminium oxide (Al2O3) and zeolites, which were used as catalysts and catalyst supports. The supported metals (Cu, Ni, Co) gave very different product profiles, and thus a profound view of different catalyst preparation methods and characterisation techniques was necessary. Additionally, the acidity and basicity of the catalyst surface have an important role in determining the product profile. It was observed that an ordinary determination of acid strength was not enough to explain all the phenomena, e.g. the reaction mechanism. One of the main findings of the thesis concerns the catalytically active site, which originates from the crystallite structure. As a consequence, an overall evaluation of the different by-products and intermediates was carried out by combining this information. Further kinetic analysis was carried out on self-prepared alumina catalysts loaded with metals (Cu, Ni, Co). The thesis provides information for further catalyst development aimed at scale-up towards industrially feasible operations.
Abstract:
In recent decades, industrial activity growth and increasing water usage worldwide have led to the release of various pollutants, such as toxic heavy metals and nutrients, into the aquatic environment. Modified nanocellulose- and microcellulose-based adsorption materials have the potential to remove these contaminants from aqueous solutions. The present research consisted of the preparation of five different nano/microcellulose-based adsorbents, their characterization, the study of adsorption kinetics and isotherms, the determination of adsorption mechanisms, and an evaluation of the adsorbents' regeneration properties. The same well-known reactions and modification methods that are used for modifying conventional cellulose also worked for microfibrillated cellulose (MFC). The use of succinic anhydride modified mercerized nanocellulose, and of aminosilane and hydroxyapatite modified nanostructured MFC, for the removal of heavy metals from aqueous solutions exhibited promising results. Aminosilane, epoxy and hydroxyapatite modified MFC could be used as a promising alternative for H2S removal from aqueous solutions. In addition, new knowledge was obtained about the adsorption properties of carbonated hydroxyapatite modified MFC as a multifunctional adsorbent for the removal of both cations and anions from water. The maghemite nanoparticle modified MFC was found to be a highly promising adsorbent for the removal of As(V) from aqueous solutions due to its magnetic properties, high surface area, and high adsorption capacity. The maximum removal efficiencies of each adsorbent were studied in batch mode. The results of the adsorption kinetics indicated very fast removal rates for all the studied pollutants. Modeling of adsorption isotherms and adsorption kinetics using various theoretical models provided information about the adsorbents' surface properties and the adsorption mechanisms. This knowledge is important, for instance, in designing water treatment units and plants. Furthermore, the correspondence between the theory behind each model and the properties of the adsorbent, as well as the adsorption mechanisms, is also discussed. On the whole, both the experimental results and the theoretical considerations supported the potential applicability of the studied nano/microcellulose-based adsorbents in water treatment applications.
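As an illustration of the isotherm modeling step mentioned above, the sketch below fits the Langmuir isotherm q = q_max*K*C/(1 + K*C), one of the standard theoretical models in such studies, to invented equilibrium data using SciPy; the thesis's actual data and chosen models may differ.

```python
# Sketch: fitting the Langmuir isotherm q = q_max*K*C / (1 + K*C) to
# equilibrium adsorption data with nonlinear least squares.
# The data points below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    """C: equilibrium concentration [mg/L] -> q: adsorbed amount [mg/g]."""
    return q_max * K * C / (1 + K * C)

C_eq = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])   # hypothetical [mg/L]
q_eq = np.array([8.1, 16.5, 26.0, 41.2, 50.3, 56.7])   # hypothetical [mg/g]

(q_max, K), _ = curve_fit(langmuir, C_eq, q_eq, p0=(60.0, 0.05))
print(f"q_max = {q_max:.1f} mg/g, K = {K:.3f} L/mg")
```

Adsorption kinetics (e.g. pseudo-first or pseudo-second order models) can be fitted to time-series uptake data with the same curve_fit pattern.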