32 results for Microstructural refinement
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Paper can be considered one of the most widely used materials in everyday life. Magazines, newspapers, books and various packaging are examples of paper-based products. The properties of paper must be adapted to its intended use: a magazine, for example, requires high brightness, opacity and a smooth surface, whereas these properties are less important for a daily newspaper. All printing paper needs certain mechanical properties to withstand further processing such as calendering, printing and folding. Paper can be coated to improve its optical properties and printability. In coating, a dispersion of mineral pigments and polymer binders is applied as a thin layer onto the paper surface. The coating layer can be regarded as a complex, porous composite material that also contributes to the mechanical properties of the paper and to its processability in various converting operations. The demand for producing inexpensive paper with sufficient strength properties places ever higher requirements on optimising the properties and production costs of the coating layer. The aim of this work was to understand the relationship between the microstructure of the pigment coating layer and its macroscopic mechanical properties. The results show that the adhesion at the pigment-binder interface is critical for the coating layer's ability to carry mechanical load. Polar liquids are common in printing inks and, because they affect the acid/base interactions between pigment and latex binder, can weaken this adhesion. The results indicate that the surface strength of coated paper can be increased by using bifunctional dispersing agents for the mineral pigments. This brings savings in paper production, since the amount of binder, the most expensive component of the coating layer, can be reduced.
Abstract:
The need for industries to remain competitive in the welding business has created the necessity to develop innovative processes that can exceed customers' demands. Significant developments in improving weld efficiency over the past decades still have drawbacks, specifically in weld strength properties. Recent innovative technologies have created the smallest possible solid materials, known as nanomaterials, and their introduction into welding production has improved weld strength properties and helped overcome unstable microstructures in the weld. This study uses a qualitative research method to elaborate the methods of introducing nanomaterials into weldments and the characteristics of the welds produced by different welding processes. The study mainly focuses on the changes in microstructural formation and strength properties of the welded joint, and also discusses the factors influencing such improvements due to the addition of nanomaterials. The addition of nanomaterials modifies the physics of the joining region, resulting in significant improvement in the strength properties and a stable microstructure in the weld. Nanomaterials are added to welding processes through coating on the base metal, addition to the filler metal, or the use of a nanostructured base metal. However, owing to their minute size, adding nanomaterials directly to the weld poses complications. The factors with the greatest influence on joint integrity are the dispersion, characteristics, quantity and selection of the nanomaterials. The addition of nanomaterials does not affect the fundamental properties and characteristics of the base metal and the filler metal; in some cases, however, it leads to deterioration of the joint properties through unstable microstructural formations.
Research is still ongoing to achieve high joint integrity in various materials through different welding processes, and on other factors that influence joint strength.
Abstract:
In the field of molecular biology, scientists adopted for decades a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. However, integrative thinking had also been applied at a smaller scale in molecular biology, to understand the underlying processes of cellular behaviour, for at least half a century. It was not until the genomic revolution at the end of the previous century that we required model building to account for systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability of predicting cellular behaviour in a way that reflects system dynamics and system structures. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology brings about a high volume of data, whose comprehension we cannot even aim for in the absence of computational support. Computational modelling hence bridges modern biology to computer science, enabling a number of assets that prove invaluable in the analysis of complex biological systems, such as a rigorous characterisation of the system structure, simulation techniques, perturbation analyses, etc. Computational biomodels have grown considerably in size in the past years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating with whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often requires, in fact, the integration of various sub-models, entwined at different levels of resolution and whose organisation spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology.
The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. Subsequently, the model is refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and the second for the ErbB signalling pathway. The thesis spans several formalisms used in computational systems biology that are inherently quantitative: reaction-network models, rule-based models and Petri net models, as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
Abstract:
Building a computational model for a complex biological system is an iterative process. It starts from an abstraction of the process and then incorporates more details regarding the specific biochemical reactions, which changes the model fit. Meanwhile, the model's numerical properties, such as its numerical fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. There is an alternative approach that ensures the preservation of the model fit without the need to refit the model after each refinement iteration; this approach is known as quantitative model refinement. The aim of this thesis is to develop and implement a tool called ModelRef, which performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as a component of the Anduril framework. ModelRef performs data refinement of a model and generates the results in two well-known formats (SBML and CPS). The tool substantially reduces the time and resources needed, as well as the errors generated, compared with the traditional reiteration of the whole model to perform the fitting procedure.
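The idea of data refinement preserving the numerical fit can be illustrated with a toy sketch (not the ModelRef implementation; all species names, rate constants and amounts below are invented for illustration): a species A in the decay model A -> B is refined into two subtypes A1 and A2 that inherit A's rate constant, with initial amounts summing to A's. The refined model then reproduces the original trajectory without any refitting.

```python
# A minimal sketch of quantitative (data) model refinement: species A in
# the model A -> B is split into subtypes A1 and A2.  If both subtypes
# keep A's rate constant and their initial amounts sum to A's, the
# refined model preserves the original fit exactly.  All names and
# numbers here are illustrative, not taken from the thesis.

def simulate(species, rate_k, steps=1000, dt=0.01):
    """Euler integration of first-order decay X -> B for each species."""
    trajectory = []
    state = dict(species)
    for _ in range(steps):
        trajectory.append(dict(state))
        for name in state:
            state[name] += dt * (-rate_k * state[name])
    return trajectory

# Original model: one species A with initial amount 10.0 and k = 0.5.
orig = simulate({"A": 10.0}, rate_k=0.5)

# Refined model: A split into A1 and A2 (e.g. two phosphorylation
# states); initial amounts sum to the original 10.0, k is unchanged.
refined = simulate({"A1": 6.0, "A2": 4.0}, rate_k=0.5)

# Fit preservation: at every time point, A1 + A2 equals the original A.
for o, r in zip(orig, refined):
    assert abs(o["A"] - (r["A1"] + r["A2"])) < 1e-9
print("fit preserved")
```

The same argument extends to arbitrary reaction networks as long as every reaction involving the refined species is duplicated for each subtype with unchanged kinetics, which is what makes automatic refinement feasible.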
Abstract:
VALOSADE (Value Added Logistics in Supply and Demand Chains) is a research project of Anita Lukka's VALORE (Value Added Logistics Research) research team at Lappeenranta University of Technology. VALOSADE is included in the ELO (E-business logistics) technology programme of Tekes (the Finnish Technology Agency). SMILE (SME sector, Internet applications and Logistical Efficiency) is one of the four subprojects of VALOSADE. The SMILE research focuses on a case network composed of small and medium-sized mechanical maintenance service providers and global wood-processing customers. The basic principle of the SMILE study is communication and e-business in the supply and demand network. This first phase of the research concentrates on creating the background for the SMILE study and for the e-business solutions of the maintenance case network. The focus is on general trends of e-business in supply chains and networks of different industries; the total e-business system architecture of company networks; the e-business strategy of a company network; the information value chain; the different factors that influence the e-business solution of a company network; and the correlation between e-business and competitive advantage. Literature, interviews and benchmarking were used as research methods in this qualitative case study. Networks and end-to-end supply chains are organisational structures that can add value for the end customer. Information is one of the key factors in these decentralised structures. Because of the decentralisation of business, information is produced and used in different companies and in different information systems. Information refinement services are needed to manage information flows in company networks between different systems. Furthermore, new solutions such as network information systems are used to optimise network performance and to standardise common network processes.
Some cases have, however, indicated that the utilisation of e-business in a decentralised business model is not always a necessity; the added value of ICT must be defined case-specifically. In the theory part of the report, different e-business and architecture models are introduced. These models are compared with empirical case data in the research results. The biggest difference between theory and the empirical data is that the models are mainly developed for large companies, not for SMEs, because implemented network e-business solutions have mainly been centred on large companies. Genuine SME-network-centred e-business models are quite rare, and studies in that area have been few in number. Business relationships between customers and their SME suppliers nowadays concentrate more on collaborative tactical and strategic initiatives, in addition to transaction-based operational initiatives. However, e-business systems are still mainly based on the exchange of operational transaction data. Collaborative e-business solutions are in the planning or pilot phase in most of the case companies. Furthermore, many e-business solutions today involve only two participants; network and end-to-end supply chain transparency and information systems are quite rare. Transaction volumes, data formats, the types of information exchanged, information criticality, the type and duration of the business relationship, the partners' internal information systems, and processes and operating models (e.g. different ordering models) differ among network companies, and the companies are at different stages of networking and e-business readiness. Because of these factors, different customer-supplier combinations in the network must use entirely different e-business architectures, technologies, systems and standards.
Abstract:
The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired motion requirements. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables; the solutions are obtained by adopting closed-form classical or modern algebraic solution methods, or by numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are (ia) limitations on the number of design specifications and (iia) failure to handle design constraints, especially inequality constraints. The main drawbacks of approximate synthesis formulations are (ib) the difficulty of choosing a proper initial linkage and (iib) the difficulty of finding more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, which provides several solutions, but cannot handle inequality constraints.
Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground-pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through a literature review it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases). By adopting the developed solution method to solve the dyadic equations in direct polynomial form for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved. The positive-dimensional solution sets associated with the poles may contain physically meaningful solutions in the form of optimal, defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design.
Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed method is based on combining the two-precision-point formulation with optimisation of substructures (using mathematical programming techniques, or optimisation methods based on probability and statistics), using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when it is integrated with mechanical system simulation techniques.
Abstract:
Regional innovation is a complex phenomenon that often resides in the field of mutual interaction between local actors. It has therefore traditionally been considered difficult to measure. This work applied the Data Envelopment Analysis (DEA) method, which has previously proven successful in cases where the relationships between the measured inputs and outputs are not obvious. A conceptual model of the inputs and outputs of regional innovation was created, and on its basis a set of 12 statistical indicators was selected. Using Eurostat as the data source, source data for eight of the indicators were obtained at the regional level, and the set was supplemented with one national-level indicator. The evaluation was finally performed for 45 European regions. The focus of the study was to assess the suitability of the DEA method for measuring an innovation system, since the method has not previously been applied to such a case. The first results showed generally excessively high efficiency scores. Corrective measures to improve the discriminatory power were introduced and applied, after which more realistic results and a ranking of the evaluated regions were obtained. The DEA method was found to be an efficient and interesting tool for developing evaluation practices and innovation policy, provided that the data availability problems can be solved and the model itself further refined.
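To make the DEA idea concrete, consider the simplest possible case (a sketch, not the thesis model, which used 12 indicators and 45 regions): with a single input and a single output, the CCR efficiency of each decision-making unit reduces to its output/input ratio divided by the best ratio in the sample. The region names and figures below are invented for illustration.

```python
# A minimal DEA sketch for the single-input, single-output case: the
# CCR efficiency score of each unit is its output/input ratio
# normalised by the best ratio observed, so the best performer scores
# exactly 1.0.  Real DEA with many inputs/outputs requires solving a
# linear programme per unit; this reduction only holds for 1x1 data.

def dea_efficiency(units):
    """units: {name: (input, output)} -> {name: efficiency in (0, 1]}."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Illustrative data: R&D expenditure (input) vs. patents (output).
regions = {
    "Region A": (100.0, 80.0),
    "Region B": (150.0, 90.0),
    "Region C": (80.0, 72.0),
}
scores = dea_efficiency(regions)
for name in sorted(scores, key=scores.get, reverse=True):
    print(f"{name}: {scores[name]:.2f}")
# Region C is the efficiency frontier here (ratio 0.9), so it scores 1.00.
```

The "excessively high efficiency scores" mentioned in the abstract are a known DEA symptom when the number of indicators is large relative to the number of units; discrimination-improving corrections restrict the weight flexibility of the model.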
Abstract:
Genetic susceptibility to type 1 diabetes in Finland: the role of the non-HLA susceptibility loci IDDM2 and IDDM9 in the inheritance of the disease. The HLA region, located on chromosome 6p21.3, accounts for roughly half of the genetic susceptibility to type 1 diabetes. Loci outside the HLA region have also been found to be associated with disease susceptibility. Of these, three loci have been confirmed as true susceptibility loci, and several other, as yet unconfirmed, loci have been found to be associated with susceptibility. In this study, the linkage of 12 non-HLA susceptibility loci to type 1 diabetes was studied using 107 Finnish multiplex families. In a follow-up study, the linkage and association of the IDDM9 region with the disease were analysed in extended family materials, as well as the possible interaction of the IDDM2 region with the HLA region in disease development. In addition, subtyping of the protective haplotypes of the IDDM2 region was performed in order to study the usefulness of the different haplotypes for more accurate prediction of disease risk. The first linkage study found no genome-wide significant or suggestive linkage at the non-HLA loci studied. The strongest nominally significant linkage was observed at the IDDM9-region marker D3S3576 (MLS=1.05). The study could neither confirm nor exclude previous linkage findings at the studied loci, but the strong linkage (MLS=3.4) and significant association (TDT p=0.0002) observed in the follow-up study of the IDDM9 region strongly suggest that a true type 1 diabetes susceptibility gene lies in the 3q21 region, making a comprehensive association study of the region a well-founded next step. The disease-predisposing IDDM2-region MspI-2221 genotype CC was nominally more common in diabetic patients with low or moderate HLA risk than in patients with high HLA risk (p=0.05).
Comparison of the genotype distributions also showed a significant difference between the groups (p=0.01). The VNTR haplotype study showed that the disease-protective effect of the IIIA/IIIA homozygote is significantly stronger than that of the other class III genotypes. These results point to an IDDM2-HLA interaction and to etiological heterogeneity among IDDM2 haplotypes. More precise typing of the IDDM2 haplotypes could therefore improve risk assessment for type 1 diabetes.
Abstract:
The thesis was made for Hyötypaperi Oy. The company's business activities include the recycling of materials for re-use and the manufacture of solid biofuels and solid recovered fuels. Hyötypaperi Oy delivers forest chips to its partner incineration plants day and night throughout the year. The value of forest chips is based on their dry-matter content, so it is important to dry forest chips well before storing them in piles and delivering them to incineration plants. The thesis examined how the degree of refinement of forest chips can be increased by different drying methods. Four drying methods were examined: field, plate, platform and channel drying. Channel drying used a mechanical blower, while the other methods relied on weather conditions. Test dryings were carried out with all methods during the summer of 2007. The thesis also examined the economic profitability of field and channel drying. The final part of the study measured the humidity of forest chips with humidity measuring equipment designed for sawn timber during November 2007. Field drying on an asphalt surface is the only drying method that Hyötypaperi Oy uses in its own production. No properly studied data on the drying methods of forest chips existed previously, because the drying of forest chips is a new field. Field and platform drying achieved lower chip humidity than plate drying. The aim of using the humidity measuring equipment was to obtain the humidity of the forest chips immediately; at present the humidity is known only after 24 hours, once the humidity sample has been dried in an oven. The sawn-timber humidity measuring equipment was provided by Lappeenranta University of Technology.
The humidity values measured by the equipment from the forest chip samples were 2-9 percentage points lower than the real humidity values determined by oven drying.
Abstract:
An important part of the sales process in the paper industry is checking the availability of the ordered product and the delivery schedule. In practice this means checking transports, production and already manufactured material. This work implements the checking of existing free material. Material checking is not a new idea, but the capacity reservation has been re-implemented to ease future maintenance work and to improve system performance. In addition, the new reservation logic can also be used in other programs of the production planning system. A new cost-based prioritisation scheme has also been built into the capacity reservation, and ways to refine it easily in the future have been considered. Special attention has been paid to the transparency of the operation, i.e. the checking logic reports the reasons for rejecting particular material. The work also analyses the load that material reservation causes on the system and considers different techniques for improving performance.
Abstract:
Laser scanning is becoming an increasingly popular method for measuring 3D objects in industrial design. Laser scanners produce a cloud of 3D points. For CAD software to be able to use such data, however, this point cloud needs to be turned into a vector format. A popular way to do this is to triangulate the assumed surface of the point cloud using alpha shapes. Alpha shapes start from the convex hull of the point cloud and gradually refine it towards the true surface of the object. Often it is nontrivial to decide when to stop this refinement. One criterion for this is to do so when the homology of the object stops changing. This is known as the persistent homology of the object. The goal of this thesis is to develop a way to compute the homology of a given point cloud when processed with alpha shapes, and to infer from it when the persistent homology has been achieved. Practically, the computation of such a characteristic of the target might be applied to power line tower span analysis.
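The "stop when the homology stops changing" criterion is easiest to see for H0, the connected components. The sketch below (an illustration with an invented point cloud, not the alpha-shape pipeline of the thesis) uses a Vietoris-Rips-style proxy: edges between points enter the filtration in order of length, and a union-find structure records the scales at which components merge. A long plateau in the component count is a simple persistence signal.

```python
# A rough sketch of the H0 (connected-component) part of persistent
# homology: edges enter the filtration sorted by length, and a
# union-find structure tracks component merges.  The component count
# stabilising over a long range of scales is a simple "homology has
# stopped changing" signal.  The point cloud is invented for
# illustration; the thesis works with alpha shapes, not this proxy.

from itertools import combinations
from math import dist

def h0_persistence(points):
    """Return (merge_scale, component_count) milestones as edges grow."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    edges = sorted((dist(p, q), i, j)
                   for (i, p), (j, q) in combinations(enumerate(points), 2))
    count = len(points)
    milestones = [(0.0, count)]
    for length, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            count -= 1
            milestones.append((length, count))
    return milestones

# Two well-separated pairs: the count drops 4 -> 3 -> 2 at scale 1.0,
# then persists at 2 until the long inter-cluster edge merges them.
cloud = [(0, 0), (0, 1), (10, 0), (10, 1)]
for scale, comps in h0_persistence(cloud):
    print(f"scale={scale:.2f}  components={comps}")
```

Full persistent homology also tracks higher-dimensional features (loops, voids) via boundary-matrix reduction; the plateau idea generalises to those Betti numbers as well.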
Abstract:
Airlift reactors are pneumatically agitated reactors that have been widely used in the chemical, petrochemical and bioprocess industries, for example in fermentation and wastewater treatment. Computational Fluid Dynamics (CFD) has become an increasingly popular approach for the design, scale-up and performance evaluation of such reactors. In the present work, numerical simulations of internal-loop airlift reactors were performed using the transient Eulerian model in the CFD package ANSYS Fluent 12.1. The turbulence in the liquid phase is described using the κ-ε model. Global hydrodynamic parameters such as gas holdup, gas velocity and liquid velocity were investigated for a range of superficial gas velocities, with both 2D and 3D simulations. Moreover, the influence of reactor geometry and scale was considered. The results suggest that both geometry and scale have significant effects on the hydrodynamic parameters, which may have substantial effects on reactor performance. Grid refinement and time-step size effects are also discussed. Numerical calculations with a gas-liquid-solid three-phase flow system were carried out to investigate the effect of solid loading, solid particle size and solid density on the hydrodynamic characteristics of an internal-loop airlift reactor at different superficial gas velocities. It was observed that the averaged gas holdup decreases significantly with increasing slurry concentration. The simulations show that the riser gas holdup decreases with increasing solid particle diameter. In addition, it was found that the averaged solid holdup in the riser section increases with increasing solid density. These results show that CFD has excellent potential for simulating two-phase and three-phase flow systems.
Abstract:
Transitional flow past a three-dimensional circular cylinder is a widely studied phenomenon, since this problem is relevant to many technical applications. In the present work, the numerical simulation of flow past a circular cylinder was performed using a commercial CFD code (ANSYS Fluent 12.1) with large eddy simulation (LES) and RANS (κ-ε and Shear-Stress Transport (SST) κ-ω model) approaches. The turbulent flow at ReD = 1000 and 3900 is simulated to investigate the force coefficients, Strouhal number, flow separation angle, pressure distribution on the cylinder, and the complex three-dimensional vortex shedding in the cylinder wake region. The numerical results extracted from these simulations are in good agreement with the experimental data (Zdravkovich, 1997). Moreover, grid refinement and time-step influence were examined. Numerical calculation of turbulent cross-flow in a staggered tube bundle continues to attract interest because of its importance in engineering applications, and because this complex flow represents a challenging problem for CFD. In the present work, time-dependent simulations using the κ-ε, κ-ω and SST models were performed in two dimensions for a subcritical flow through a staggered tube bundle. The predicted turbulence statistics (mean and r.m.s. velocities) are in good agreement with the experimental data (Balabani, 1996). Turbulent quantities such as the turbulent kinetic energy and dissipation rate were predicted using the RANS models and compared with each other. The sensitivity to grid and time-step size was analysed, and a sensitivity study of the model constants was carried out using the κ-ε model. It was observed that the predicted turbulence statistics and turbulent quantities are very sensitive to the model constants.
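For readers unfamiliar with the quantity, the Strouhal number mentioned above relates the vortex-shedding frequency f to the cylinder diameter D and the free-stream velocity U. The numerical values below are illustrative only, chosen near the St ≈ 0.2 typical of the subcritical cylinder regime, and are not results from the thesis.

```python
# The Strouhal number is the dimensionless shedding frequency:
#     St = f * D / U
# where f is the vortex-shedding frequency [Hz], D the cylinder
# diameter [m] and U the free-stream velocity [m/s].  Values below are
# illustrative.

def strouhal(f_hz, diameter_m, velocity_ms):
    return f_hz * diameter_m / velocity_ms

# e.g. shedding at 42 Hz from a 10 mm cylinder in a 2 m/s stream:
St = strouhal(42.0, 0.010, 2.0)
print(f"St = {St:.2f}")  # prints "St = 0.21"
```

In simulations the shedding frequency is typically extracted from the dominant peak of the lift-coefficient spectrum, and the computed St is then compared against experimental correlations.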
Abstract:
This study examines the rise of the Middle Ages as an important period of history in eighteenth-century England, from the perspective of the writings of Thomas Warton (1726-1790). Warton was a learned antiquarian at the University of Oxford. Despite its name, Warton's main work, the History of English Poetry (1774-1781), was not a modern literary history but a broad treatment, grounded in vernacular literature, of the written culture of the eleventh to sixteenth centuries. Warton and the scholars in his circle offer a particular opportunity to examine how the conception of the Middle Ages as a period in its own right was formed in the late eighteenth century. I study the assessments that Warton and his contemporaries wrote of the beginning of the second millennium through Michel de Certeau's concept of the historiographical operation. It consists of three phases: the place defines the social dependencies and motives that guide research; practice refers to how the historian selects material and shapes it into a whole perceived as historiography; finally, writing, as a concrete and physical phenomenon, creates an illusion of finality and of the coherence of ill-fitting parts. De Certeau's theory is better suited to examining older historiography and scholarship than works analysing the narratives of historical research, because it contextualises more broadly the phenomena that influence historiography. Thomas Warton and other mid-eighteenth-century scholars defined the Middle Ages through fictional texts. Warton acquainted himself closely with romances and chronicles. Of particular importance for the conception of the Middle Ages was Geoffrey of Monmouth's chronicle Historia regum britanniae, which presented England's mythical early history by combining Rome with the national tradition. Through Geoffrey's chronicle Warton noticed the broad influence of medieval stories; he wrote in particular of the significance of the tales associated with King Arthur, which continued into the sixteenth century.
In this way Warton found, in medieval culture, a challenger to the classical tradition. Warton's way of presenting the Middle Ages was based partly on the fluently written universal histories of the Enlightenment, partly on a learned, catalogue-like mode of presentation. In practice, however, Warton's long introductory essays guide the reader to the central themes: the decline of imagination in the most recent literature on the one hand, and the growth of refinement and knowledge on the other. Warton never says whether these themes are connected, but in my interpretation they were. Warton thought that literature had lost its essential imagination at the same time as society had developed. This helps to make sense of the history of literature as a whole: Warton sought original imagination in ancient Greece, the Orient and ancient Scandinavia alike. The History of English Poetry did not only reflect on the relationship between literature and society, for Warton thought he could study medieval society through chronicles and romances. His conceptions of feudalism, court life and medieval customs were based on them. Warton did not notice that his sources were conscious literary constructions; he regarded them as truthful descriptions. On the other hand, eighteenth-century conceptions of society were also reflected in Warton's interpretation. The descriptions in the medieval sources and the ideals of the eighteenth century ultimately shaped the popular image of the Middle Ages.
Abstract:
The purpose of this thesis was to comprehensively analyse and develop the spare part business in Company Oy's five biggest product groups by searching for development issues related to the supply chains of individual spare parts as well as to the spare part business process, making implementation plans for them, and implementing the plans where possible. The items were classified based on the special characteristics of spare parts and on their actual sales volumes. The created item classes were examined for improvement possibilities, and management strategies for the classified items were suggested. Vendors and customers were analysed to support the comprehensive supply network development work. The effectiveness of the current spare part business process was analysed in co-operation with the spare part teams in three business unit locations. Several items were removed from inventories as uselessly stocked items. It was suggested that the price list for core items with one of the main product group's core item manufacturers be expanded in Town A. Developing the supply chain management of refinement equipment seal items was considered important in Town B. A new internal business process model was created to minimise and enhance the internal business between the Company's business units. Changes or developments to SAP inventory reports and several other features were suggested, and the continuous development of SAP material data management was also considered very important. Many other development issues related to spare part supply chains and to the work done in the business process were found. The need to investigate the development possibilities more deeply became very clear during the project.