892 results for "analytic"
Abstract:
Wind power is a low-carbon form of energy production that reduces society's dependence on fossil fuels. Finland has adopted wind energy production into its climate change mitigation policy, which has led to changes in legislation and guidelines, the allocation of regional wind power areas, and the establishment of a feed-in tariff. Wind power production has indeed accelerated in Finland after two decades of relatively slow growth; for instance, from 2010 to 2011 wind energy production increased by 64%, but there is still a long way to the national goal of 6 TWh by 2020. This thesis introduces a GIS-based decision-support methodology for the preliminary identification of suitable areas for wind energy production, including an estimation of their level of risk. The goal of this study was to define the least risky places for wind energy development within Kemiönsaari municipality in Southwest Finland. Spatial multicriteria decision analysis (SMCDA) has been used for searching for suitable wind power areas, along with many other location-allocation problems. SMCDA scrutinizes complex, ill-structured decision problems in a GIS environment using constraints and evaluation criteria, which are aggregated using weighted linear combination (WLC). Weights for the evaluation criteria were acquired using the analytic hierarchy process (AHP) with nine expert interviews. Subsequently, the feasible alternatives were ranked in order to provide a recommendation, and finally a sensitivity analysis was conducted to determine the robustness of the recommendation. The first study aim was to scrutinize the suitability and necessity of existing data for this SMCDA study. Most of the available data sets were of sufficient resolution and quality. Input data necessity was evaluated qualitatively for each data set based on, e.g., constraint coverage and attribute weights.
Attribute quality was estimated mainly qualitatively in terms of attribute comprehensiveness, operationality, measurability, completeness, decomposability, minimality and redundancy. The most significant quality issue was redundancy, as interdependencies are not tolerated by WLC and AHP includes no measures to detect them. The third aim was to define the least risky areas for wind power development within the study area. The two highest-ranking areas were Nordanå-Lövböle and Påvalsby, followed by Helgeboda, Degerdal, Pungböle, Björkboda, and Östanå-Labböle. The fourth aim was to assess the reliability of the recommendation: the two top-ranking areas proved robust, whereas the others were more sensitive.
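The AHP-to-WLC pipeline described above (criterion weights from a pairwise comparison matrix, then a weighted linear combination of standardized criterion scores) can be sketched as follows. The comparison matrix, criteria and site scores are hypothetical illustrations, not the thesis's data.

```python
import numpy as np

# Hypothetical 3x3 AHP pairwise comparison matrix (Saaty's 1-9 scale)
# for three illustrative criteria: wind resource, grid distance, settlement distance.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# AHP priority weights = normalized principal eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency ratio (random index RI = 0.58 for n = 3);
# CR < 0.1 is the conventional acceptability threshold.
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
cr = ci / 0.58

# Weighted linear combination for two candidate sites,
# with criterion scores already standardized to [0, 1].
sites = np.array([
    [0.9, 0.6, 0.4],   # site 1
    [0.5, 0.8, 0.9],   # site 2
])
scores = sites @ w     # higher score = less risky alternative
```

Ranking the `scores` vector then yields the recommendation, and a sensitivity analysis would perturb `w` to test the robustness of that ranking.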
Abstract:
Vernacular Expressions and Analytic Categories – The 3rd International Conference of Young Folklorists, in Tartu, 14–16 May 2013.
Abstract:
There are both similarities and differences between everyday Finnish and German conversations. This study, in the field of German philology, examines one central activity of everyday conversation, the closing of a telephone conversation, as produced by speakers of Finnish and German. The data consist of natural personal telephone conversations recorded for this study by native speakers of Finnish and German; twelve Finnish and twelve German calls were selected. Appropriate permission for the use of the recordings was obtained from all parties. The calls were transcribed according to the GAT transcription system established in the German-speaking area. The theoretical and methodological framework draws on two fields of research: interactional linguistics and cross-linguistic comparison. The interactional-linguistic analysis focuses on observations of the structure of turns and sequences of talk, and prosodic cues are used systematically in interpreting the meanings of turns. The result is a close conversation-analytic description of the individual closings, on the basis of which the sequential structure of each closing is defined. All closings were alike in that each contained at least an opening sequence, a sequence referring to a future meeting, and a sequence leading to the final greetings. Based on variations in sequential structure, however, the closings in the data can be divided into groups. Three types of closings were identified in both the Finnish and the German data: compact, complex, and interrupted closings. This three-way grouping aids the next stage of description, in which the Finnish and German closings are compared with each other. While the study sheds light on the points at which the two data sets converge and diverge, it also shows which levels of interaction are suitable objects for cross-linguistic comparison; so far, relatively little has been said about which levels of interaction can be included in such comparisons. The study thus builds a bridge between interactional linguistics and contrastive linguistics.
Abstract:
In this thesis, we study various aspects of ring dark solitons (RDSs) in quasi-two-dimensional, toroidally trapped Bose-Einstein condensates, focusing on atomic realisations thereof. Unlike for the well-known planar dark solitons, exact analytic expressions for RDSs are not known. We address this problem by presenting exact localized soliton-like solutions to the radial Gross-Pitaevskii equation. To date, RDSs have not been experimentally observed in cold atomic gases either; to this end, we propose two protocols for their creation in experiments. It is also well known that in dimensions higher than one, (ring) dark solitons are in general susceptible to an irreversible decay into vortex-antivortex pairs through the snake instability. We show that the snake instability is caused by an unbalanced quantum pressure across the soliton's notch, linking the instability to the Bogoliubov-de Gennes spectrum. In particular, if the angular symmetry is maintained (or the toroidal trapping is restrictive enough), we show that the RDS is stable (long-lived, with a lifetime of the order of seconds) in two dimensions. Furthermore, when the decay does take place, we show that the snake instability can in fact be reversible, and predict a previously unknown revival phenomenon for the original (many-)RDS system: the soliton structure is recovered and all the point-phase singularities (i.e. vortices) disappear. Eventually, however, the decay leads to an example of quantum turbulence: a quantum analogue of the laminar-to-turbulent transition.
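For orientation, the mean-field equation underlying this kind of analysis is the Gross-Pitaevskii equation; in standard notation (trap potential V and interaction coupling g left generic, as the thesis's specific quasi-2D reduction is not reproduced here):

```latex
\[
i\hbar\,\partial_t \psi
  = \left[-\frac{\hbar^{2}}{2m}\,\nabla^{2} + V(r) + g\,|\psi|^{2}\right]\psi,
\qquad
\nabla^{2} = \partial_r^{2} + \frac{1}{r}\,\partial_r + \frac{1}{r^{2}}\,\partial_\theta^{2}.
\]
```

For a radially symmetric (ring-shaped) state \(\psi=\psi(r,t)\) the angular term drops out, which yields the radial Gross-Pitaevskii equation to which the exact soliton-like solutions mentioned above apply.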
Abstract:
Objective of the study The aim of this study is to understand, in a spatial context, the institutional implications of Abenomics, the contemporary economic reform taking place in Japan that is intended to finally end over two decades of economic malaise. As its theoretical perspective, this study explores a synthesis with institutionalism as the main approach, complemented by economies of agglomeration in spatial economics, or New Economic Geography (NEG). The outcomes include a narrative with implications for future research, as well as possible future implications for the economy of Japan itself. The narrative seeks to depict the dialogue between public discourse and governmental communication in order to create a picture of how this phenomenon is being socially constructed. This is done by studying the official communications of the Cabinet along with public media commentary on the respective topics. The reform is studied with reference to the historical socio-cultural and economic evolution of Japan, which in turn is explored through a literature review; this serves to assess the unique institutional characteristics of Japan pertinent to the reform. Research method This is a social and exploratory qualitative study: an institutional narrative case study. The methodological approach was kept practical: in addition to the literature review, a narrative, thematic content analysis with structural emphasis was used to construct the contemporary narrative based on the Cabinet communication. This was combined with practical analytic tools borrowed from critical discourse analysis, which were used to assess the implicit intertextual agenda within the sources. Findings The discourse appears to be characterized by a status quo bias that takes multiple forms. The bias is also coded into the institutions surrounding the reform, wherein stakeholders have vested interests in protecting the current state of affairs. This correlates with the uncertainty avoidance characteristic of Japan.
Japan heeds the international criticism to deregulate on a rhetorical level, but, consistent with history, the Cabinet's solutions appear increasingly bureaucratic. Hence, the imposed Western information-age paradigm of liberal cluster agglomeration seems ill-suited to Japan, which lacks risk takers and a felicitous entrepreneurial culture. The Japanese nonetheless possess vast innovative potential, fostered by some institutional practices and traits but restrained by others. The conclusion drawn is to study successful intrapreneur cases in the Japanese institutional setting as a potential benchmark for Japan-specific cluster agglomeration and as a solution to the structural problems impeding growth.
Abstract:
Abstract This doctoral thesis concerns the active galactic nucleus (AGN) most often referred to by the catalogue number OJ287. The publications in the thesis present new discoveries about the system in the context of a supermassive binary black hole model. In addition, the introduction discusses general characteristics of the OJ287 system and the physical fundamentals behind these characteristics. The place of OJ287 in the hierarchy of known types of AGN is also discussed. The introduction presents a large selection of fundamental physics required for a basic understanding of active galactic nuclei, binary black holes, relativistic jets and accretion disks. In particular, the general relativistic nature of the orbits of close supermassive black hole binaries is explored in some detail. Analytic estimates of some of the general relativistic effects in such a binary are presented, as well as numerical methods to calculate the effects more precisely. It is also shown how these results can be applied to the OJ287 system. The binary orbit model forms the basis for models of the recurring optical outbursts in the OJ287 system. In the introduction, two physical outburst models are presented in some detail and compared. The radiation hydrodynamics of the outbursts are discussed and optical light curve predictions are derived. The precursor outbursts studied in Paper III are also presented, and tied into the model of OJ287. To complete the discussion of the observable features of OJ287, the nature of the relativistic jets in the system, and in active galactic nuclei in general, is discussed. The basic physics of relativistic jets is presented, with additional detail added in the form of helical jet models. The results of Papers II, IV and V concerning the jet of OJ287 are presented, and their relation to other facets of the binary black hole model is discussed.
As a whole, the introduction serves as a guide, though terse, for the physics and numerical methods required to successfully understand and simulate a close binary of supermassive black holes. For this purpose, the introduction necessarily combines a large number of both fundamental and specific results from broad disciplines like general relativity and radiation hydrodynamics. With the material included in the introduction, the publications of the thesis, which present new results with a much narrower focus, can be readily understood. Of the publications, Paper I presents newly discovered optical data points for OJ287, detected on archival astronomical plates from the Harvard College Observatory. These data points show the 1900 outburst of OJ287 for the first time. In addition, new data points covering the 1913 outburst allowed the determination of the start of the outburst with more precision than was possible before. These outbursts were then successfully numerically modelled with an N-body simulation of the OJ287 binary and accretion disc. In Paper II, mechanisms for the spin-up of the secondary black hole in OJ287 via interaction with the primary accretion disc and the magnetic fields in the system are discussed. Timescales for spin-up and alignment via both processes are estimated. It is found that the secondary black hole likely has a high spin. Paper III reports a new outburst of OJ287 in March 2013. The outburst was found to be rather similar to the ones reported in 1993 and 2004. All these outbursts happened just before the main outburst season, and are called precursor outbursts. In this paper, a mechanism was proposed for the precursor outbursts, where the secondary black hole collides with a gas cloud in the primary accretion disc corona. From this, estimates of brightness and timescales for the precursor were derived, as well as a prediction of the timing of the next precursor outburst. 
In Paper IV, observations from the 2004–2006 OJ287 observing program are used to investigate the existence of short periodicities in OJ287. The existence of a ~50-day quasiperiodic component is confirmed. In addition, statistically significant 250-day and 3.5-day periods are found. Primary black hole accretion of a spiral density wave in the accretion disc is proposed as the source of the 50-day period, with numerical simulations supporting these results. Lorentz-contracted jet re-emission is then proposed as the reason for the 3.5-day timescale. Paper V fits optical observations and mm and cm radio observations of OJ287 with a helical jet model. The jet is found to have a spine–sheath structure, with the sheath having a much lower Lorentz gamma factor than the spine. The sheath opening angle and Lorentz factor, as well as the helical wavelength of the jet, are reported for the first time.
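As a concrete instance of the analytic relativistic estimates mentioned above, the leading-order (textbook) periastron advance per orbit of a binary with total mass M, semi-major axis a and eccentricity e is

```latex
\[
\Delta\phi \simeq \frac{6\pi G M}{c^{2}\, a \left(1 - e^{2}\right)},
\]
```

which for a close supermassive binary such as OJ287 is of the order of tens of degrees per orbit; higher-order and spin-dependent corrections require the numerical methods discussed in the introduction.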
Abstract:
Service provider selection has been said to be a critical factor in the formation of supply chains. Through successful selection, companies can attain competitive advantage, cost savings and more flexible operations. Service provider management is the next crucial step in the outsourcing process after the selection has been made. Without proper management, companies cannot be sure about the level of service they have bought, and they may suffer from the service provider's opportunistic behavior. In the worst-case scenario, the buyer company may end up in a locked-in situation in which it is totally dependent on the service provider. This thesis studies how the case company conducts its carrier selection process, along with the criteria related to it. A model for the final selection is also provided. In addition, the case company's carrier management procedures are reflected against recommendations from previous research. The research was conducted as a qualitative case study of the principal company, Neste Oil Retail. A literature review was made on outsourcing, service provider selection and service provider management. On the basis of the literature review, this thesis recommends the analytic hierarchy process (AHP) as the preferred model for carrier selection. Furthermore, agency theory was seen as a functional framework for carrier management in this study. The empirical part of the thesis was conducted in the case company by interviewing the key persons in the selection process, making observations and going through documentation related to the subject. According to the results, both the carrier selection process and carrier management were closely in line with the suggestions from the literature review. The AHP results revealed that the case company considers service quality the most important criterion, with financial situation and price of service following behind with almost identical weights.
Equipment and personnel was seen as the least important selection criterion. Regarding carrier management, the study concluded that the company should consider engaging more in carrier development and working towards beneficial and effective relationships. Otherwise, no major changes were recommended to the case company's processes.
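To illustrate how AHP turns pairwise judgements into criterion weights, here is a minimal sketch using the geometric-mean (logarithmic least squares) approximation, a common alternative to the eigenvector method. The comparison values are hypothetical, chosen only to mirror the ordering reported above: quality first, financial situation and price nearly tied, equipment and personnel last.

```python
import numpy as np

# Hypothetical pairwise comparisons for four carrier-selection criteria, in order:
# service quality, financial situation, price of service, equipment & personnel.
A = np.array([
    [1,   3,   3,   5  ],
    [1/3, 1,   1,   3  ],
    [1/3, 1,   1,   3  ],
    [1/5, 1/3, 1/3, 1  ],
])

# Geometric mean of each row, normalized to sum to one, gives the weights.
gm = A.prod(axis=1) ** (1 / A.shape[0])
w = gm / gm.sum()
```

With these invented judgements the weights come out roughly as 0.52, 0.20, 0.20 and 0.08, reproducing the qualitative pattern described in the abstract.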
Abstract:
Collective Action in Commons: Its Diverse Ends and Consequences explores new ways in which collective action theories can contribute to our understanding of natural resource management, especially the management of common-pool resources. Combining classical collective action theories with lessons from earlier empirical work, the study shows that cooperation among resource users is not only a possible solution to "the tragedy of the commons" but can also be part of the problem. That is, successful cooperation may increase the likelihood of resource depletion, for example through more effective resource utilization or collusion against sanctioning and monitoring systems. The study also explores how analytic narratives can be used to tell the story behind problems of resource use and their solutions, including the diverse roles of cooperation.
Abstract:
Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as other fields of science including linguistics and behavioral sciences, is also necessary to build appropriate mathematical models. This topic has received considerable attention as it provides tools for mathematical representation of the most common means of human communication - natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model - one that is sufficiently easy to use and understand, but yet conveys all the information necessary to avoid misinterpretations. It is, however, not a trivial task and the link between the linguistic and computational level of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of meaning of linguistic expressions, their transformation into the language of mathematics and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of the linguistic modelling for decision support is presented and the main guidelines for building linguistic models for real-life decision support that are the basis of our modeling methodology are outlined. From the theoretical point of view, the issues of representation of meaning of linguistic terms, computations with these representations and the retranslation process back into the linguistic level (linguistic approximation) are studied in this part of the thesis. 
We focus on the reasonableness of operations with the meanings of linguistic terms, the correspondence between the linguistic and mathematical levels of the models, and the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support - particularly the loss of meaning due to the transformation of mathematical outputs into natural language and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide background and the necessary context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. Uncertainty of outputs, expert knowledge concerning disaster response practice, and the necessity of obtaining outputs that are easy to interpret (and available in very short time) are reflected in the design of the model. Saaty's analytic hierarchy process (AHP) is considered in two case studies - first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes - particularly the integration of peer review into the evaluation of R&D outputs. In the context of HR management, we present a fuzzy rule-based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference.
Finally, the last case study is from the humanities: psychological diagnostics is considered, and a linguistic fuzzy model for the interpretation of the outputs of multidimensional questionnaires is suggested. The issue of data quality in mathematical classification models is also studied here. A modification of the receiver operating characteristic (ROC) method is presented to reflect the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications on which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide closer insight into the practical applications considered in the second part of the thesis.
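A toy illustration of the linguistic level discussed above: triangular membership functions for a small term set, and a naive linguistic-approximation step that maps a crisp model output back to the best-fitting label. The term set and parameters are invented for illustration and are not the thesis's models.

```python
def triangular(a, b, c):
    """Membership function of a triangular fuzzy number with a < b < c."""
    def mu(x):
        return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)
    return mu

# Hypothetical term set for an evaluation scale on [0, 1]
terms = {
    "poor":    triangular(0.00, 0.25, 0.50),
    "average": triangular(0.25, 0.50, 0.75),
    "good":    triangular(0.50, 0.75, 1.00),
}

def linguistic_approximation(x):
    """Return the linguistic label with the highest membership degree at x."""
    return max(terms, key=lambda t: terms[t](x))
```

Real linguistic approximation compares whole fuzzy outputs with term meanings (e.g. by distance or similarity measures) rather than a single crisp point; this sketch only shows the direction of the retranslation step.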
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography with an assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse.
It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows prediction of the feasible range of operating parameters that lead to the desired product purities. It can be applied for the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is utilized to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable to high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects. The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach performs better the higher the column efficiency and the lower the purity constraints are.
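The competitive Langmuir isotherm referred to above has the standard closed form q_i = a_i c_i / (1 + Σ_j b_j c_j). A minimal sketch, with purely illustrative parameter values, shows the competitive effect: adding a second component depresses the loading of the first.

```python
def competitive_langmuir(c, a, b):
    """Solid-phase loadings q_i = a_i*c_i / (1 + sum_j b_j*c_j)
    for a multicomponent competitive Langmuir adsorption isotherm."""
    denom = 1.0 + sum(bj * cj for bj, cj in zip(b, c))
    return [ai * ci / denom for ai, ci in zip(a, c)]

# Illustrative binary system: component 1 adsorbs more strongly.
a = [3.0, 2.0]      # Henry coefficients a_i
b = [0.10, 0.05]    # equilibrium constants b_i (per unit concentration)

q_mix = competitive_langmuir([5.0, 5.0], a, b)     # both components present
q_single = competitive_langmuir([5.0, 0.0], a, b)  # component 1 alone
```

In the equilibrium-theory design, these isotherm parameters (together with the feed composition) determine the shock and wave pattern of the chromatographic cycle, and hence the dimensionless operating-parameter regions that meet the purity constraints.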
Abstract:
The cloud computing paradigm is continually evolving, and with it, the size and complexity of its infrastructure. Assessing the performance of a cloud environment is an essential but strenuous task, and modeling and simulation tools have proved their usefulness and power in dealing with this issue. This master's thesis contributes to the development of the widely used cloud simulator CloudSim and proposes CloudSimDisk, a module for the modeling and simulation of energy-aware storage in CloudSim. As a starting point, a review of cloud simulators was conducted and hard disk drive technology was studied in detail. CloudSim was identified as the most popular and sophisticated discrete-event cloud simulator, and the CloudSimDisk module was therefore developed as an extension of CloudSim v3.0.3. The source code has been published for the research community. The simulation results proved to be in accordance with the analytic models, and the scalability of the module has been demonstrated for further development.
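The core idea of energy-aware storage simulation can be sketched as a simple power-state model: a request keeps the disk in active mode for seek + rotational latency + transfer time, and the disk idles for the rest of the observation interval. CloudSimDisk itself is Java (an extension of CloudSim); the sketch below uses Python for consistency with the other examples here, and all class names and parameter values are illustrative assumptions, not CloudSimDisk's actual API.

```python
class HddPowerModel:
    """Toy hard-disk energy model: active power during a transaction,
    idle power for the remainder of the observation interval."""

    def __init__(self, seek_s, rot_latency_s, transfer_mbps,
                 p_active_w, p_idle_w):
        self.seek_s = seek_s                # average seek time [s]
        self.rot_latency_s = rot_latency_s  # average rotational latency [s]
        self.transfer_mbps = transfer_mbps  # sustained transfer rate [MB/s]
        self.p_active_w = p_active_w        # active-mode power [W]
        self.p_idle_w = p_idle_w            # idle-mode power [W]

    def transaction_time(self, size_mb):
        """Time [s] the disk spends in active mode for one request."""
        return self.seek_s + self.rot_latency_s + size_mb / self.transfer_mbps

    def energy(self, size_mb, interval_s):
        """Energy [J] consumed over interval_s while serving one request."""
        t_active = self.transaction_time(size_mb)
        t_idle = max(interval_s - t_active, 0.0)
        return self.p_active_w * t_active + self.p_idle_w * t_idle

# Parameters loosely resembling a commodity 7200 rpm drive (assumed values)
hdd = HddPowerModel(seek_s=0.0086, rot_latency_s=0.0042,
                    transfer_mbps=156.0, p_active_w=9.5, p_idle_w=5.0)
e = hdd.energy(size_mb=10.0, interval_s=1.0)
```

A discrete-event simulator such as CloudSim schedules the request arrivals and completions as events; the energy model above is the kind of per-request accounting that an energy-aware storage module attaches to those events.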
Improving oral healthcare in Scotland with special reference to sustainability and caries prevention
Abstract:
Brett Duane. Improving oral healthcare in Scotland with special reference to sustainability and caries prevention. University of Turku, Faculty of Medicine, Institute of Dentistry, Community Dentistry, Finnish Doctoral Program in Oral Sciences (FINDOS-Turku), Turku, Finland. Annales Universitatis Turkuensis, Sarja - Ser. D, Medica-Odontologica. Painosalama Oy, Turku, Finland, 2015. Dentistry must provide sustainable, evidence-based, and prevention-focused care. In Scotland, oral health prevention is delivered through the Childsmile programme, with an increasing use of high-concentration fluoride toothpaste (HCFT). Compared with other countries, there is little knowledge of xylitol-based prevention. The UK government has set strict carbon emission limits with which all national health services (NHS) must comply. The purpose of these studies was, firstly, to describe the Scottish national oral health prevention programme Childsmile (CS) and to determine whether the additional maternal use of xylitol (CS+X) was more effective at reducing the early colonisation of mutans streptococci (MS) than this programme alone; secondly, to analyse trends in the prescribing and management of HCFT by dentists; and thirdly, to analyse data from a dental service in order to improve its sustainability. In all, 182 mother/child pairs were selected on the basis of high maternal MS levels. Mothers were randomly allocated to a CS or CS+X group, with both groups receiving Childsmile. The intervention group consumed xylitol three times a day from when the child was 3 months old until 24 months. Children were examined at age two to assess MS levels. In order to understand patterns of HCFT prescribing, a retrospective secondary analysis of routine prescribing data for the years 2006-2012 was performed.
To understand the sustainability of dental services, carbon accounting combined a top-down approach with a process analysis approach, followed by the use of Pollard's decision model (used in other healthcare areas) to analyse and support sustainable service reconfiguration. Of the CS children, 17% were colonised with MS, compared with 5% of the CS+X group. This difference was not statistically significant (P=0.1744). The cost of HCFT prescribing increased fourteen-fold over five years, with 4% of dentists prescribing 70% of the total product. Travel (45%), procurement (36%) and building energy (18%) all contributed to the 1800 tonnes of carbon emissions produced by the service, around 4% of total NHS emissions. Using the analytical model, clinic utilisation rates improved by 56% and patient travel halved, significantly reducing carbon emissions. It can be concluded that the Childsmile programme was effective in reducing the risk of MS transmission. HCFT prescribing is increasing in Scotland and needs to be managed. Dentistry's carbon emissions are proportionally similar to those of the overall NHS, and an analytic tool can be useful in helping to identify these emissions. Key words: sustainability, carbon emissions, xylitol, mutans streptococci, fluoride toothpaste, caries prevention.
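A group comparison like the 17% vs 5% colonisation result is typically tested on a 2x2 contingency table. The sketch below shows a generic Pearson chi-square test for such a table; the counts in the usage example are illustrative assumptions, not the study's actual cell counts, and the thesis's reported P=0.1744 presumably comes from its own test on the real data (small samples often call for corrections such as Fisher's exact test, which can give larger p-values).

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test for a 2x2 table [[a, b], [c, d]] with 1 df.
    Returns (statistic, two-sided p-value); the p-value uses the identity
    P(chi2_1 > x) = erfc(sqrt(x / 2))."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    chi2 = num / den
    return chi2, erfc(sqrt(chi2 / 2.0))

# Illustrative (hypothetical) counts: colonised vs not, per group.
stat, p = chi2_2x2(15, 73, 4, 81)
```

Equal proportions yield a statistic of 0 and a p-value of 1, which is a quick sanity check on the implementation.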
Resumo:
The aim of this dissertation was to examine the skills and knowledge that pre-service teachers and teachers have and need for working with multilingual and multicultural students from immigrant backgrounds. The specific goals were to identify pre-service teachers' and practising teachers' current knowledge and awareness of culturally and linguistically responsive teaching, to identify a profile of their strengths and needs, and to devise appropriate professional development support and ways to prepare teachers to become equitable, culturally responsive practitioners. To investigate these issues, the dissertation reports on six original empirical studies within two groups of teachers: international pre-service teacher education students from over 25 different countries, as well as pre-service and practising Finnish teachers. The international pre-service teacher sample consisted of n = 38 (Study I) and n = 45 (Studies II-IV), and the pre-service and practising Finnish teacher sample encompassed n = 89 (Study V) and n = 380 (Study VI). The data were multi-source, including both qualitative material (students' written work from the course, including journals, final reflections, pre- and post-definitions of key terms, as well as course evaluations and focus group transcripts) and quantitative material (multi-item questionnaires with open-ended options); this triangulation of data enhanced the credibility of the findings. Cluster-analytic procedures, multivariate analysis of variance (MANOVA), and qualitative analyses, mostly the Constant Comparative Approach, were used to understand pre-service teachers' and practising teachers' developing cultural understandings. The results revealed that the mainly white/mainstream teacher candidates in teacher education programmes bring limited background experiences, prior socialisation, and skills regarding diversity.
Taking a multicultural education course in which identity development was a focus positively influenced teacher candidates' knowledge of and attitudes toward diversity. The results revealed the approaches and strategies that matter most in preparing teachers for culturally responsive teaching, including, but not exclusively, small-group activities and discussions, critical reflection, and field immersion. This suggests that tools already exist to support teachers in successfully teaching a diverse pupil population and to provide in-service training for those already practising the teaching profession. The results provide insight into teachers' knowledge of both the linguistic and cultural needs of their students, as well as into what constitutes a repertoire of approaches and strategies to assure students' academic success. Teachers' knowledge of diversity can be categorised into sound awareness, average awareness, and low awareness. Knowledge of diversity was important for teachers' abilities to use students' language and culture to enhance the acquisition of academic content, work effectively with multilingual learners' parents/guardians, learn about the cultural backgrounds of multilingual learners, link multilingual learners' prior knowledge and experience to instruction, and modify classroom instruction for multilingual learners. These findings support the development of a competency-based model and can be used to frame the studies of pre-service teachers, as well as the professional development of practising teachers in increasingly diverse contexts. The present set of studies takes on new significance in the current context of increasing waves of migration to Europe in general and Finland in particular.
They suggest that teacher education programmes can equip teachers with the attitudes, skills, and knowledge necessary to enable them to work effectively with students from different ethnic and language backgrounds as they enter the teaching profession. The findings also help to refine the tools and approaches for measuring the competencies of teachers teaching in mainstream classrooms and of candidates in preparation.
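The three-way categorisation of awareness (sound, average, low) is the kind of grouping that cluster-analytic procedures produce from questionnaire scores. As a purely illustrative sketch (the thesis's actual data, scale, and clustering algorithm are not reproduced here), a minimal 1-D k-means over hypothetical awareness scores looks like this:

```python
def kmeans_1d(scores, k=3, iters=100):
    """Minimal 1-D k-means (k >= 2): initialise centroids at evenly
    spaced order statistics, then alternate nearest-centroid assignment
    and mean updates. Returns the k centroids in ascending order."""
    s = sorted(scores)
    centroids = [s[(len(s) - 1) * j // (k - 1)] for j in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in s:
            nearest = min(range(k), key=lambda j: abs(x - centroids[j]))
            clusters[nearest].append(x)
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids

# Hypothetical awareness scores on a 1-10 scale; the three centroids
# would correspond to low, average, and sound awareness groups.
centres = kmeans_1d([1, 1, 2, 5, 5, 6, 9, 9, 10])
```

Real studies would cluster on multi-item profiles rather than a single score, but the principle of partitioning respondents around group means is the same.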
Resumo:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to conventional belief, there exists an almighty lie detection method that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: what is the applicability of veracity assessment methods that are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. the probability of detecting deception successfully
o Ease of Use, i.e. how easy it is to apply the method correctly
o Time Required to apply the method reliably
o No Need for Special Equipment
o Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: what kinds of interviewing and interrogation techniques exist, and how could they be used in the intelligence interview context; what kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof; and what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search for and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis.
A Multi-Criteria Analysis utilizing the Analytic Hierarchy Process (AHP) was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to gain first-hand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence, and the Criteria-Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that even the most applicable methods are not entirely trouble-free. In addition, this study highlighted that three channels of information (Content, Discourse, and Nonverbal Communication) can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired. Since most current lie detection studies are concentrated on a scenario where roughly half of the assessed people are totally truthful and the other half are liars presenting a well-prepared cover story, it is proposed that future studies test lie detection and veracity assessment methods against partially truthful human sources.
This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones that are still under development.
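The AHP weighting step used in such a Multi-Criteria Analysis derives criterion weights as the principal eigenvector of a pairwise comparison matrix. The sketch below shows this via power iteration; the example matrix (comparing three of the criteria above) is purely illustrative and does not represent the actual expert judgments, and a full AHP application would also compute a consistency ratio to detect redundant or contradictory judgments.

```python
def ahp_weights(pairwise, iters=100):
    """Approximate AHP priority weights as the principal eigenvector of
    a reciprocal pairwise comparison matrix, via power iteration with
    renormalisation to sum to 1 on each step."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

# Illustrative judgments on Saaty's 1-9 scale (hypothetical, not the
# thesis's data): Accuracy vs Ease of Use vs Time Required.
pairwise = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]
weights = ahp_weights(pairwise)  # roughly [0.65, 0.23, 0.12]
```

The resulting weights then feed a weighted sum over the methods' criterion scores to produce the overall applicability ranking.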
Resumo:
The ecological and evolutionary economics of Georgescu-Roegen. The main argument of this paper is that Georgescu-Roegen's contributions represent a major break with economics' pre-analytic vision. He rejected both the closed, circular view of the economy and the mechanistic analogies that oriented economics in the past century. Even though his influence has been felt mainly in the field of ecological economics, his epistemological contributions represent a major challenge to equilibrium thinking. Nowadays, treating economic systems as complex, evolutionary systems is becoming not only acceptable but also a trend in the way political economy is done. We argue that Georgescu-Roegen's break represents a scientific revolution in economics, in the sense attributed to the term by Kuhn.