16 results for Domain of Variability

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Abstract:

The background and inspiration for the present study is earlier research into applications of boundary identification in the metal industry. Effective boundary identification permits smaller safety margins and longer service intervals for the equipment in industrial high-temperature processes, without increased risk of equipment failure. Ideally, a boundary identification method would be based on monitoring some indirect variable that can be measured routinely or at low cost. One such variable for smelting furnaces is the temperature at various positions in the wall, which can be used as the input signal to a boundary identification method for monitoring the furnace wall thickness. We give the background and motivation for choosing the geometrically one-dimensional dynamic model for boundary identification, discussed in the later part of the work, over a multi-dimensional geometric description. In the industrial applications in question, the dynamics and the advantages of a simple model structure are more important than an exact geometric description. Solution methods for the so-called sideways heat conduction equation have much in common with boundary identification. We therefore study properties of the solutions to this equation, the influence of measurement errors and what is usually called contamination by measurement noise, regularization, and more general consequences of the ill-posedness of the sideways heat conduction equation. We study a set of three different methods for boundary identification, of which the first two were developed from a strictly mathematical starting point and the third from a more applied one. The methods have different properties, with specific advantages and disadvantages. The purely mathematically based methods are characterized by good accuracy and low numerical cost, though at the price of low flexibility in the formulation of the model-describing partial differential equation. The third, more applied, method is characterized by poorer accuracy, caused by a higher degree of ill-posedness of the more flexible model.
For the latter method, an error estimate was also attempted, which was later observed to agree with practical computations using the method. The study can be regarded as a good starting point and mathematical basis for developing industrial applications of boundary identification, especially towards handling nonlinear and discontinuous material properties and sudden changes caused by wall material falling off. With the methods treated, it appears possible to achieve a robust, fast and sufficiently accurate boundary identification method of limited complexity.
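The ill-posedness discussed above can be illustrated with a small numerical sketch (not from the thesis): a smoothing operator stands in for the forward map from a boundary profile to interior temperature measurements, naive inversion amplifies measurement noise, and Tikhonov regularization recovers a stable estimate. All names and parameter values here are illustrative.

```python
import numpy as np

def forward_operator(n, sigma=0.15):
    # Discrete smoothing operator mimicking heat conduction: interior
    # measurements are blurred versions of the boundary profile.
    x = np.linspace(0.0, 1.0, n)
    A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * sigma ** 2))
    return A / A.sum(axis=1, keepdims=True)

def tikhonov_solve(A, y, alpha):
    # Regularized least squares: min ||A f - y||^2 + alpha ||f||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
n = 60
A = forward_operator(n)
f_true = np.sin(np.linspace(0, np.pi, n))       # unknown boundary profile
y = A @ f_true + 1e-3 * rng.standard_normal(n)  # noisy measurements

f_naive = np.linalg.solve(A, y)           # unregularized inversion
f_reg = tikhonov_solve(A, y, alpha=1e-4)  # regularized estimate

err_naive = np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true)
err_reg = np.linalg.norm(f_reg - f_true) / np.linalg.norm(f_true)
```

Because the forward operator is severely ill-conditioned, the tiny measurement noise dominates the naive solution, while the regularized estimate stays close to the true profile; this is the noise-amplification behaviour the abstract refers to.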

Relevance:

100.00%

Abstract:

Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is the most common hereditary small vessel disease (SVD) leading to vascular dementia. The disease is caused by mutations in the NOTCH3 gene, located at chromosome 19p13.1. The gene defect results in accumulation of granular osmiophilic material and of the extracellular domain of NOTCH3 at vascular smooth muscle cells (VSMCs), with subsequent degeneration of the VSMCs. This arteriopathy leads to white matter (WM) rarefaction and multiple lacunar infarctions in both WM and deep grey matter (GM), visible in magnetic resonance imaging. This thesis focuses on the quantitative morphometric analysis of stenosis and fibrosis in arterioles of the frontal cerebral WM, cortical GM and deep GM (the lenticular nucleus (LN), i.e. putamen and globus pallidus). The analysis was performed by assessing four indicators of arteriolar stenosis and fibrosis: (1) diameter of the arteriolar lumen, (2) thickness of the arteriolar wall, (3) external diameter of the arterioles and (4) sclerotic index. These parameters were assessed (a) in 5 elderly CADASIL patients with a mean age at onset of 47 years and at death of 63 years, (b) in a 32-year-old CADASIL patient with the first ischemic episode at the age of 29 years and (c) in a very old CADASIL patient aged 95 years, who suffered the first stroke at the age of 71 years. The measurements were compared with age-matched controls without stroke, dementia, hypertension or cerebral amyloid angiopathy. Morphometric analyses disclosed that, in all age groups of CADASIL patients compared with the corresponding controls, there was significant narrowing of the arteriolar lumen (stenosis) and fibrotic thickening of the walls (fibrosis) in the WM arterioles, although the significance of stenosis in the very old patient was marginal. In the LN arterioles there was only significant fibrosis, without stenosis.
These results suggest that the ischemic lesions and lacunar infarcts in the cerebral WM are mainly attributable to stenosis of the arterioles, whereas those in the LN are probably mainly due to hemodynamic changes in cerebral blood flow. In conclusion, the SVD of CADASIL is characterized by narrowing of the lumina and fibrotic thickening of the walls, predominantly in the cerebral WM arterioles. In the LN, on the other hand, the ischemic lesions and lacunar infarcts are most probably hemodynamic in origin, due to impaired autoregulation caused by the rigidity of the fibrotic arterioles. The pathological cerebral arteriolar alterations begin to develop at a relatively young age, but the onset may be delayed to a remarkably old age. This underlines the well-known great variability in the clinical picture of CADASIL. Very late onset of CADASIL may lead to underdiagnosis, because strokes are common in the elderly and are readily attributed to common risk factors.
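The four indicators reduce to simple arithmetic once an arteriole's lumen diameter and wall thickness have been measured. The sketch below uses one common convention for the sclerotic index (SI = 1 - lumen diameter / external diameter); the exact definition used in the thesis, and the example values, are assumptions.

```python
def vessel_morphometry(lumen_diameter_um, wall_thickness_um):
    """Derive the four arteriolar indicators from two measured quantities.

    External diameter = lumen + 2 * wall thickness. The sclerotic index
    here follows the common convention SI = 1 - lumen/external diameter
    (near 0 for a negligible wall, approaching 1 with severe fibrosis).
    """
    external = lumen_diameter_um + 2.0 * wall_thickness_um
    sclerotic_index = 1.0 - lumen_diameter_um / external
    return {
        "lumen_diameter_um": lumen_diameter_um,
        "wall_thickness_um": wall_thickness_um,
        "external_diameter_um": external,
        "sclerotic_index": round(sclerotic_index, 3),
    }

# Illustrative values only: a stenosed, fibrotic arteriole vs. a healthy one.
fibrotic = vessel_morphometry(lumen_diameter_um=20.0, wall_thickness_um=15.0)
healthy = vessel_morphometry(lumen_diameter_um=40.0, wall_thickness_um=5.0)
```

With these hypothetical numbers the two vessels have the same external diameter, so stenosis and fibrosis are captured entirely by the lumen, wall, and sclerotic-index values.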

Relevance:

90.00%

Abstract:

This thesis studies techniques for embedding a watermark into a spectral image, and methods for detecting and recognizing watermarks in spectral images. The spectral dimension of the original images was reduced using the PCA (Principal Component Analysis) algorithm. The watermark was embedded into the spectral image in the transform space. According to the proposed model, a component of the transform space was replaced with a linear combination of the watermark and another transform-space component. The set of parameters used in the embedding was studied. The quality of the watermarked images was measured and analyzed, and recommendations for watermark embedding were given. Several methods were used for watermark recognition, and the recognition results were analyzed. The robustness of the watermarks against various attacks was examined. A set of detection experiments was carried out, taking into account the parameters used in the embedding. The ICA (Independent Component Analysis) method is considered one possible alternative for watermark detection.
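The embedding scheme described above (replacing one PCA transform-space component with a linear combination of the watermark and another component) can be sketched roughly as follows; the function name, choice of components, and mixing parameter are illustrative, not those of the thesis.

```python
import numpy as np

def embed_watermark(cube, watermark, k=2, j=1, alpha=0.1):
    """Embed a watermark in the PCA transform space of a spectral image.

    cube: (bands, pixels) spectral image; watermark: (pixels,) pattern.
    Component k is replaced by a linear combination of the watermark and
    component j (parameter names here are illustrative).
    """
    mean = cube.mean(axis=1, keepdims=True)
    centered = cube - mean
    # Principal axes of the band covariance via SVD.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = U.T @ centered                     # transform-space components
    scores[k] = alpha * watermark + (1 - alpha) * scores[j]
    return U @ scores + mean                    # back to the image domain

rng = np.random.default_rng(1)
bands, pixels = 8, 400
cube = rng.random((bands, pixels))
watermark = np.sign(rng.standard_normal(pixels))  # binary +/-1 pattern
marked = embed_watermark(cube, watermark)

# Relative distortion introduced by the embedding.
distortion = np.linalg.norm(marked - cube) / np.linalg.norm(cube)
```

The distortion depends on which component is replaced and on the mixing parameter, which is exactly the parameter trade-off (image quality vs. watermark robustness) the abstract says was studied.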

Relevância:

90.00% 90.00%

Publicador:

Resumo:

Cardiac failure is one of the leading causes of mortality in developed countries, and as the life expectancies of their populations grow, the number of patients suffering from cardiac insufficiency also increases. Effective treatments, including the use of calcium sensitisers, are being sought. Calcium sensitisers exert a positive inodilatory effect on cardiomyocytes without the deleterious effects (arrhythmias) that result from increases in intracellular calcium concentration. Levosimendan is a novel calcium sensitiser that has been proved to be a well-tolerated and effective treatment for patients with severe decompensated heart failure. Cardiac troponin C (cTnC) is its target protein. However, there have been controversies about the interactions between levosimendan and cTnC, some of which are addressed in this dissertation. Furthermore, studies on the calcium-sensitising mechanism based on the interactions between levosimendan and cTnC, as followed by nuclear magnetic resonance (NMR), are presented and discussed. Levosimendan was found to interact with both domains of calcium-saturated cTnC in the absence of cardiac troponin I (cTnI). In the presence of cTnI, the C-domain binding site was blocked and levosimendan interacted only with the regulatory domain of cTnC. This interaction may cause the observed calcium-sensitising effect by priming the N-domain for cTnI binding, thereby extending the lifetime of that complex. It is suggested that this is achieved by shifting the equilibrium between the open and closed conformations.

Relevance:

90.00%

Abstract:

1. Introduction "The one that has compiled ... a database, the collection, securing the validity or presentation of which has required an essential investment, has the sole right to control the content over the whole work or over either a qualitatively or quantitatively substantial part of the work both by means of reproduction and by making them available to the public", Finnish Copyright Act, section 49. These are the laconic words that implemented the much-awaited and hotly debated European Community Directive on the legal protection of databases, the EDD, into Finnish copyright legislation in 1998. Now, in 2005, more than half a decade after the domestic implementation, the proper meaning and construction of the convoluted qualitative criteria that the current legislation employs as a prerequisite for database protection remain uncertain, both in Finland and within the European Union. Further, this opaque pan-European instrument has the potential to bring about a number of far-reaching economic and cultural ramifications, which have remained largely uncharted and unobserved. Thus the task of understanding this particular, and currently peculiarly European, new intellectual property regime is twofold: first, to understand the mechanics and functioning of the EDD, and second, to realise the potential and risks inherent in the new legislation in their economic, cultural and societal dimensions. 2.
Subject-matter of the study: basic issues. The first part of the task mentioned above is straightforward: questions such as what is meant by the key concepts triggering the functioning of the EDD, such as the presentation of independent information, what constitutes an essential investment in acquiring data, and when the reproduction of a given database reaches, either qualitatively or quantitatively, the threshold of substantiality before the right-holder of a database can avail himself of the remedies provided by the statutory framework, remain unclear and call for careful analysis. As for the second task, it is already obvious that the practical importance of the legal protection provided by the database right is increasing rapidly. The accelerating transformation of information into digital form is an existing fact, not merely a reflection of the shape of things to come. To take a simple example, the digitisation of a map, traditionally in paper format and protected by copyright, can provide the consumer markedly easier and faster access to the wanted material, and the price can be, depending on the current state of the marketplace, cheaper than that of the traditional form, or even free where public lending libraries provide access to the information online. This also makes it possible for authors and publishers to make available and sell their products to markedly larger, international markets, while production and distribution costs can be kept to a minimum thanks to new electronic production, marketing and distribution mechanisms, to mention a few. The troublesome side for authors and publishers is the vastly enhanced potential for illegal copying by electronic means, producing numerous virtually identical copies at speed.
The fear of illegal copying can lead to stark technical protection, which in turn can dampen the demand for information goods and services and, furthermore, efficiently hamper the right of access to materials lawfully available in electronic form, and thus weaken the possibility of access to information, education and the cultural heritage of a nation or nations, a condition precedent for a functioning democracy. 3. Particular issues in the digital economy and information networks. All that is said above applies a fortiori to databases. As a result of the ubiquity of the Internet and the pending breakthrough of the mobile Internet, peer-to-peer networks, and local and wide area networks, a rapidly increasing amount of information not protected by traditional copyright, such as various lists, catalogues and tables, previously protected in part by the old section 49 of the Finnish Copyright Act, is available free or for consideration on the Internet; by the same token, numerous databases are compiled in order to enable the marketing, tendering and selling of products and services in the above-mentioned networks. Databases and the information embedded therein constitute a pivotal element in virtually any commercial operation, including product and service development, scientific research and education. A poignant, but not immediately obvious, example of this is a database consisting of the physical coordinates of a selected group of customers, for marketing purposes through cellular phones, laptops and various handheld or vehicle-based devices connected online. These practical needs call for answers to the plethora of questions already outlined above: Has the collection and securing of the validity of this information required an essential investment? What qualifies as a quantitatively or qualitatively significant investment?
According to the Directive, a database comprises works, data and other independent materials which are arranged in a systematic or methodical way and are individually accessible by electronic or other means. Under what circumstances, then, are materials regarded as arranged in a systematic or methodical way? Only once the protected elements of a database are established does the question concerning the scope of protection become acute. In the digital context, the traditional notions of reproduction and of making available to the public seem to fit ill, or lead to interpretations that are at variance with the analogue domain as regards the lawful and unlawful uses of information. This may well interfere with, or rework, the way in which commercial and other operators have to establish themselves and function in the existing value networks of information products and services. 4. International sphere. After the expiry of the implementation period for the European Community Directive on the legal protection of databases, the goals of the Directive must have been consolidated into the domestic legislation of the current twenty-five Member States of the European Union. On the one hand, these fundamental questions readily imply that the problems related to the correct construction of the Directive underlying the domestic legislation transcend national boundaries. On the other hand, disputes arising from the implementation and interpretation of the Directive at the European level attract significance domestically. Consequently, guidelines on the correct interpretation of the Directive, importing practical, business-oriented solutions, may well find application at the European level. This underlines the exigency of a thorough analysis of the implications, meaning and potential scope of database protection in Finland and the European Union.
This position has to be contrasted with the larger, international sphere, which in early 2005 differs markedly from the European Union's stance, with a direct negative effect on international trade, particularly in digital content. A particular case in point is the USA, a database producer primus inter pares, which does not, at least not yet, have a Sui Generis database regime or its kin, while both the political and the academic discourse on the matter abound. 5. The objectives of the study. The background described above, with its several open issues, calls for a detailed study of the following questions: What is a database at law, and when is a database protected by intellectual property rights, particularly by the European database regime? What is the international situation? How is a database protected, and what is its relation to other intellectual property regimes, particularly in the digital context? What opportunities and threats does the current protection present to creators, users and society as a whole, including the commercial and cultural implications? And, finally, the difficult question of the relation between database protection and the protection of factual information as such. 6. Disposition. The study, in purporting to analyse and cast light on the questions above, is divided into three main parts. The first part introduces the political and rational background and the subsequent legislative evolution of European database protection, reflected against the international backdrop. An introduction to databases, originally a vehicle of modern computing and information and communication technology, is also incorporated. The second part sets out the chosen, existing two-tier model of database protection, reviewing both its copyright and its Sui Generis facets in detail, together with the emergent application of the machinery in real-life societal and particularly commercial contexts.
Furthermore, a general outline of copyright, relevant in the context of copyright-protected databases, is provided. For purposes of further comparison, a chapter on the precursor of the Sui Generis database right, the Nordic catalogue rule, also ensues. The third and final part analyses the positive and negative impacts of the database protection system and attempts to scrutinize its implications further into the future, with some caveats and tentative recommendations, in particular as regards the convoluted issue of the IPR protection of information per se, a new tenet in the domain of copyright and related rights.

Relevance:

90.00%

Abstract:

The increased maritime traffic in the Gulf of Finland has raised concerns about the level of maritime safety, and the growth of Russian oil exports in particular has increased the probability of an oil accident in the Gulf of Finland. Various international, regional and national policy instruments aim to reduce the risk of maritime accidents and other adverse effects of maritime traffic. This report deals with societal policy instruments for maritime safety: policy instruments in general, the most important regulators of maritime safety, maritime safety policy instruments and the future prospects of maritime safety policy, the effectiveness of policy instruments, and the weaknesses of the current maritime safety governance system. The report is a literature review of the structure and state of societal regulation of maritime safety, particularly from the perspective of maritime traffic in the Gulf of Finland. The report is part of the research project "SAFGOF - Suomenlahden meriliikenteen kasvunäkymät 2007 - 2015 ja kasvun vaikutukset ympäristölle ja kuljetusketjujen toimintaan" and its work package 6, "Keskeisimmät riskit ja yhteiskunnalliset vaikutuskeinot". Societal policy instruments can be grouped into administrative, economic and information-based instruments. All of these are used in promoting maritime safety, but administrative instruments play the most important role. Because of the international nature of shipping, maritime safety regulation takes place mainly at the international level, by the UN and especially the International Maritime Organization (IMO). In addition, the European Union has its own maritime safety regulation, and there are other regional bodies involved in promoting maritime safety, such as HELCOM. Some areas of maritime safety are also regulated at the national level. Administrative maritime safety instruments include instruments relating to ship structures and equipment, monitoring of the condition of ships, seafarers and maritime labour, and navigation.
Economic policy instruments include, for example, fairway and port dues, marine insurance, P&I clubs, liability and compensation issues, and economic incentives. The use of economic instruments to promote maritime safety is rather limited compared with the use of administrative instruments, but they could certainly be used more. A problem with economic instruments is that they largely fall within the scope of national regulation, so promoting regional or international interests with them can be difficult. Information guidance relies on the voluntary action of the actors involved; besides general dissemination of information, it includes, for example, voluntary training, certification, and awards for promoting maritime safety. At the political level, the safety risks posed by maritime traffic in the Gulf of Finland have been taken seriously, and much work is being done by various parties to minimize the risks. New regulation is to be expected especially on the environmental impacts of shipping and on traffic management, such as electronic traffic monitoring systems. Increasing attention has also been paid to the role of the human factor in improving maritime safety, but developing effective policy instruments for the human factor appears to be challenging; the most commonly proposed remedy is better training. According to criteria presented in the literature, effective policy instruments should fulfil the following requirements: 1) appropriateness - the instrument must be suitable for achieving the objective set; 2) economic efficiency - the benefits and costs of the instrument should be in balance; 3) acceptability - the instrument must be acceptable to the parties concerned and to the wider society; 4) enforceability - implementation of the instrument must be feasible and compliance must be verifiable; 5) lateral effects - a good instrument has positive side effects beyond the achievement of its primary objectives; 6) incentives and innovation - a good instrument encourages trying out new solutions and developing operations. There is a great deal of maritime safety regulation, and in general the number of maritime accidents has been declining over recent decades. Much of the regulation has been effective and has improved the level of safety on the world's seas. Nevertheless, maritime accidents and other dangerous incidents still occur. The current regulatory system can be criticized on several counts. Achieving international regulation is not easy: the process is usually slow, and the result may be a compromise of compromises. International regulation is usually reactive, addressing problems only after an accident has occurred, rather than proactive, seeking to address problems before something happens. The work of the IMO is based on the participation of nation states, and regulation is enforced by flag states. In the IMO, nation states mainly pursue their own interests, and there are great differences between flag states in enforcement. The IMO's inability to address identified problems quickly and to take local conditions into account in its regulation has led, for example, to the European Union regulating maritime safety itself, and to regional special arrangements such as the PSSA (particularly sensitive sea area).
Many kinds of companies operate in the shipping industry: on the one hand, companies that strive to operate safely and to raise safety to an even higher level, and on the other hand, companies that operate as cheaply as possible, do not care about safety issues, often have complex and obscure ownership arrangements, and are difficult to hold liable when damage occurs. The problem is that in international shipping all companies have to operate in the same market. The operation of irresponsible companies is made possible by shippers and other actors in the industry who agree to cooperate with them. The indifferent attitude to safety also stems partly from the conservative safety culture of shipping. Comparing the maritime safety regulatory system as a whole against the criteria for effective policy instruments, it can be stated that with respect to many of the criteria the current system can be considered effective and successful. The greatest problems probably lie in the enforcement of regulation and in the cost-effectiveness of the instruments. The system based on flag-state enforcement does not work as desired, of which the existence of flags of convenience is the clearest sign. The cost-effectiveness of policy instruments, both of individual instruments and when comparing instruments with each other, is often difficult to assess; as a result, no reliable information is available on cost-effectiveness, and the outcome may be that an instrument in practice eliminates a small risk at high cost. Alternatives to the current system have also been proposed for international-level maritime safety (and shipping) policy, for example multi-level or polycentric governance. Multi-level governance refers to a system in which central government is decentralized both vertically, to regional levels, and horizontally, to non-state actors.
Polycentric governance goes a step further: it is a mode of governance in which actors of all kinds, both private and public, can participate, including governments, interest groups, commercial companies and so on. International legislation defines the general levels, but concrete measures can be decided at the local level in cooperation between the various actors. In governance systems of this type, the real operating environment of the shipping industry, international yet at the same time local, would be taken into account better than in a system based on regulation drawn up by nation states in cooperation with each other. Such a change in maritime safety governance would, however, require a major change of direction in principle, the realization of which cannot be considered very likely, at least in the short term.

Relevance:

90.00%

Abstract:

Raw measurement data does not always immediately convey useful information, but applying statistical analysis tools to the data can improve the situation. Data analysis can offer benefits such as extracting meaningful insight from a dataset, basing critical decisions on the findings, and ruling out human bias through proper statistical treatment. In this thesis we analyze data from an industrial mineral processing plant, with the aim of studying the possibility of forecasting the quality of the final product, given by one variable, with a model based on the other variables. For the study, tools such as Qlucore Omics Explorer (QOE) and Sparse Bayesian regression (SB) are used. Subsequently, linear regression is used to build a model based on a subset of the variables that have the most significant weights in the SB model. The results obtained from QOE show that the variable representing the desired final product does not correlate with the other variables. For SB and linear regression, the results show that both models built on 1-day averaged data seriously underestimate the variance of the true data, whereas the two models built on 1-month averaged data are reliable and able to explain a larger proportion of the variability in the available data, making them suitable for prediction purposes. However, it is concluded that no single model fits the whole available dataset well, and it is therefore proposed as future work either to build piecewise nonlinear regression models if the same dataset is used, or for the plant to provide another dataset, collected in a more systematic fashion than the present data, for further analysis.
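As a rough sketch of the final modelling step described above (ordinary least squares on the variables that carry the largest SB weights), with synthetic data standing in for the plant measurements and all names being illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for the plant data: 200 observations, 5 process
# variables, and a quality variable driven by two of them plus noise.
n = 200
X = rng.standard_normal((n, 5))
quality = 1.5 * X[:, 0] - 0.8 * X[:, 3] + 0.5 * rng.standard_normal(n)

def fit_linear_model(X, y, selected):
    """Ordinary least squares on the columns `selected`
    (e.g. the variables with the largest Sparse Bayesian weights)."""
    A = np.column_stack([X[:, selected], np.ones(len(y))])  # add intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

selected = [0, 3]
coef = fit_linear_model(X, quality, selected)
pred = np.column_stack([X[:, selected], np.ones(n)]) @ coef

# Compare residual vs. total variance: an R^2-style diagnostic, and the
# kind of check that exposes a model underestimating the true variance.
residual_var = np.var(quality - pred)
r2 = 1.0 - residual_var / np.var(quality)
```

Comparing the variance of the model's predictions against the variance of the observed quality variable is precisely the check that revealed the weakness of the 1-day averaged models in the thesis.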

Relevance:

90.00%

Abstract:

This dissertation considers the segmental durations of speech from the viewpoint of speech technology, especially speech synthesis. The idea is that better models of segmental durations lead to higher naturalness and better intelligibility, key factors for the usability and generality of synthesized speech. Even though the studies are based on a Finnish corpus, the approaches apply to other languages as well, presumably because most of the studies included in this dissertation concern universal effects taking place at utterance boundaries. The methods developed and used here are likewise suitable for studies of other languages. This study is based on two corpora, of news reading speech and of sentences read aloud. One corpus is read by a 39-year-old male, while the other consists of several speakers in various situations. The use of two corpora serves a twofold purpose: it allows a comparison of the corpora and gives a broader view of the matters of interest. The dissertation begins with an overview of the phonemes and the quantity system of the Finnish language. In particular, we cover the intrinsic durations of phonemes and phoneme categories, as well as the difference in duration between short and long phonemes. The phoneme categories are introduced to manage the variability of speech segments. In this dissertation we cover the boundary-adjacent effects on segmental durations. In initial positions of utterances we find that there seems to be initial shortening in Finnish, but the result depends on the level of detail and on the individual phoneme. On the phoneme level we find that the shortening or lengthening only affects the very first phonemes at the beginning of an utterance; on the word level, however, the effect seems on average to shorten the whole first word. We establish the effect of final lengthening in Finnish.
The effect in Finnish had long been an open question, Finnish being the last missing piece for final lengthening to be considered a universal phenomenon. Final lengthening is studied from various angles, and it is shown that it is not a mere effect of prominence, nor an artefact of a speech corpus with high inter- and intra-speaker variation. The effect of final lengthening seems to extend from the final word to the penultimate word, and on the phoneme level it reaches a much wider area than the initial effect. We also present a normalization method suitable for corpus studies of segmental durations. The method uses an utterance-level normalization approach to capture the pattern of segmental durations within each utterance, which prevents various problematic sources of variation within the corpora from affecting the results. The normalization is used in a study of final lengthening to show that the results are not caused by variation in the material. The dissertation also demonstrates an implementation of speech synthesis on a mobile platform and assesses its performance. We find that the rule-based method of speech synthesis is a real-time software solution, but that the signal generation process slows the system down beyond real time. Future aspects of speech synthesis on limited platforms are discussed. Finally, the dissertation considers ethical issues in the development of speech technology. The main focus is on the development of speech synthesis with high naturalness, but the problems and solutions are applicable to other speech technology approaches as well.
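A minimal form of such utterance-level normalization is to z-score each segment's duration against the statistics of its own utterance, so that global tempo differences between utterances cancel out and boundary effects remain comparable. The sketch below uses invented durations and is only an approximation of the method described.

```python
import statistics

def normalize_utterance(durations_ms):
    """Z-score segment durations within one utterance, so that
    cross-utterance differences in speaking rate do not masquerade
    as boundary effects (a simple stand-in for the thesis method)."""
    mean = statistics.fmean(durations_ms)
    sd = statistics.pstdev(durations_ms)
    return [(d - mean) / sd for d in durations_ms]

# Two hypothetical utterances: the second is spoken more slowly overall,
# but both show lengthening of the final segment.
fast = [60, 70, 65, 75, 110]
slow = [90, 105, 98, 112, 165]

norm_fast = normalize_utterance(fast)
norm_slow = normalize_utterance(slow)
```

After normalization the final-segment lengthening stands out in both utterances on the same scale, even though the raw durations differ substantially.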

Relevance:

90.00%

Abstract:

During the past decades, testing has matured from an ad-hoc activity into an integral part of the development process. The benefits of testing are obvious for modern communication systems, which operate in heterogeneous environments amongst devices from various manufacturers. The increased demand for testing also creates demand for tools and technologies that support and automate testing activities. This thesis discusses the applicability of visualization techniques in the result analysis part of the testing process. The primary focus of this work is visualization of test execution logs produced by a TTCN-3 test system. TTCN-3 is an internationally standardized test specification and implementation language. The TTCN-3 standard suite includes a specification of a test logging interface and a graphical presentation format, but no immediate relationship between them. This thesis presents a technique for mapping the log events to the graphical presentation format, along with a concrete implementation integrated with the Eclipse Platform and the OpenTTCN Tester toolchain. The results of this work indicate that for the majority of the log events a visual representation may be derived from the TTCN-3 standard suite. The remaining events were analysed, and three categories relevant to either log analysis or the implementation of the visualization tool were identified: events indicating insertion of something into the incoming queue of a port, events indicating a mismatch, and events describing the control flow during the execution. The applicability of the results is limited to the domain of TTCN-3, but the developed mapping and the implementation may be utilized with any TTCN-3 tool able to produce the execution log in the standardized XML format.
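The categorisation step can be sketched as a rule table mapping log event names to the three categories identified above, with everything else treated as directly drawable. The event names and matching rules here are hypothetical, not those of the TTCN-3 standard logging interface.

```python
# Hypothetical event names and rules; the real TTCN-3 logging interface
# defines its own event types, this only illustrates the categorisation.
CATEGORY_RULES = {
    "enqueue": lambda e: e.endswith("_enqueued"),
    "mismatch": lambda e: "mismatch" in e,
    "control_flow": lambda e: e.startswith(("altstep_", "function_", "testcase_")),
}

def categorise(event_name):
    # First matching rule wins; unmatched events are assumed to have a
    # direct graphical representation.
    for category, rule in CATEGORY_RULES.items():
        if rule(event_name):
            return category
    return "drawable"

log = ["testcase_started", "msg_enqueued", "template_mismatch",
       "verdict_set", "function_entered"]
categories = [categorise(e) for e in log]
```

A real implementation would consume the standardized XML log rather than bare strings, but the dispatch structure would be similar.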

Relevance:

90.00%

Publisher:

Abstract:

Integrins are a family of transmembrane glycoproteins composed of two different subunits (alpha and beta). Altered expression of integrins in tumor cells contributes to the tendency to metastasize by influencing the cells' attachment to adjacent cells and their migration. Viral pathogens, including certain enteroviruses, use integrins as receptors. Enteroviruses have also been suggested to be involved in the etiopathogenesis of type 1 diabetes. This study focuses on the role of integrins in the pathogenesis of metastasis to cortical bone and in type 1 diabetes (T1D) and echovirus 1 infection. In the first part of the thesis, the role of different integrins in the initial attachment of MDA-MB-231 breast cancer cells to bovine cortical bone disks was studied. A close correlation between alpha2beta1 and alpha3beta1 integrin receptor expression and the capability of the tumor cells to attach to bone was observed. In the second part, a possible correlation between susceptibility to enterovirus infections in diabetic children and differences in enterovirus receptor genes, including certain integrins, was investigated. In parallel, virus-specific neutralizing antibodies and diabetic risk alleles were studied. In the diabetic group, an amino acid change was detected in the poliovirus receptor, and the neutralizing antibody titers against echovirus 30 were lower. However, to obtain statistically robust results, a larger number of individuals should be analyzed. Echovirus 1 (EV1) enters cells by attaching to the alpha2I domain of the alpha2beta1 integrin. In the third part, EV1 was shown to attach to a chimeric receptor construct of the transferrin receptor and the alpha2I domain and to enter cells through clathrin-mediated endocytosis, a route not normally used by the virus. The chimeric receptor was recycled to the plasma membrane, whereas the virus remained in intracellular vesicles. The virus replication cycle was initiated in these cells, suggesting that evolutionary pressure could cause the virus to evolve to use a different entry mechanism. Moreover, a cDNA microarray analysis of host gene expression during EV1 replication showed that 0.53% of the total genes, including several immediate early genes, were differentially expressed.

Relevance:

90.00%

Publisher:

Abstract:

The focus of the present work was on 10- to 12-year-old elementary school students' conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine whether it would be more beneficial to combine simulation and laboratory activities in science teaching than to contrast them. It was argued that the status quo, in which laboratories and simulations are seen as alternative or competing methods in science teaching, is hardly an optimal solution for promoting students' learning and understanding in various science domains. It was hypothesized that it would make more sense, and be more productive, to combine laboratories and simulations, and several explanations and examples were provided to support this hypothesis. In order to test whether learning with a combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and a simulation-laboratory combination (real and virtual circuits used simultaneously). In order to measure and compare how these environments affected students' conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments were presented in four empirical studies: three focused on learning outcomes between the conditions and one on learning processes. Study I analyzed the learning outcomes from Experiment I. The aim of the study was to investigate whether it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity.
Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students' conceptual understanding more than studying circuits in the simulation or laboratory environment alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students' conceptual understanding remained at an elementary level after the intervention. Study II analyzed the learning outcomes from Experiment II. The aim of the study was to investigate whether and how learning outcomes in the simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions. The results showed that when the students were working with the simulation alone, they gained a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE).
A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment. In the combination environment, explicit instruction (CE) did not seem to elicit much additional gain in students' understanding of electric circuits compared to implicit instruction (CI); instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III analyzed, from video data, the learning processes of the 50 students who participated in Experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find possible explanations for the success of the combination condition in Experiments I and II. The video data provided clear evidence of the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other, that is, one representation helped students to interpret and understand the outcomes they obtained from the other representation. However, there were also instances in which analogical encoding took place, that is, situations in which slightly discrepant results between the representations 'forced' students to focus on the features that could be generalised across the two representations. No statistical differences were found in the number of experienced cognitive conflicts and self-explanations between the simulation and combination conditions, though for self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the number of self-explanations. Overall, the number of cognitive conflicts and self-explanations was very low.
The aim of Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of Experiments I and II, and the secondary aim was to explore the relationship between the learning environments and students' prior domain knowledge (low and high) in the experiments. The aggregated results of Experiments I and II showed that, on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them also scored above the average of the simulation environment; seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that, overall, students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge; that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly larger among the students with low prior knowledge. However, a more detailed inspection of the results showed considerable differences between the experiments in how students with low and high prior knowledge benefitted from the combination: in Experiment I, especially students with low prior knowledge benefitted from the combination compared to the students who used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between the simulation and laboratory groups, the benefits of using a simulation seemed to be slightly greater among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulations along with laboratory activities to promote students' conceptual understanding of electricity.
It can be concluded that, when teaching students about electricity, students gain a better understanding when they have an opportunity to use the simulation and the real circuits in parallel than when they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with explicit instruction. The outcomes of the empirical studies can be considered the first unambiguous evidence of the (additional) benefits of combining laboratory and simulation activities in science education, as compared to learning with laboratories or simulations alone.
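The matched-group assignment used in both experiments can be sketched as follows: rank students by pre-test score, form consecutive trios, and randomly distribute each trio's members over the three conditions, so the groups start with comparable prior knowledge. This is a sketch of the general technique; the dissertation does not specify the procedure at this level of detail, and the scores below are invented:

```python
import random

def matched_trios(scores, seed=0):
    """Assign students (by index) to three conditions using matched trios.

    Students are ranked by pre-test score, consecutive trios are formed,
    and within each trio members are shuffled over the three conditions.
    (Illustrative sketch; not the study's documented procedure.)
    """
    rng = random.Random(seed)
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    groups = {"laboratory": [], "simulation": [], "combination": []}
    names = list(groups)
    for start in range(0, len(ranked), 3):
        trio = ranked[start:start + 3]
        rng.shuffle(trio)
        for name, student in zip(names, trio):
            groups[name].append(student)
    return groups

scores = [12, 7, 9, 15, 8, 11]  # hypothetical pre-test scores
g = matched_trios(scores)
print(sorted(len(v) for v in g.values()))  # → [2, 2, 2]
```

Because each trio contributes one student per condition, the conditions end up balanced on pre-test performance by construction.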

Relevance:

90.00%

Publisher:

Abstract:

Biofilms are surface-attached multispecies microbial communities embedded in their self-produced extracellular polymeric substances. This lifestyle enhances the survival of the bacteria and plays a major role in many chronic bacterial infections. For instance, periodontitis is initiated by multispecies biofilms. Phases of active periodontal tissue destruction and notably increased levels of proinflammatory mediators, such as the key inflammatory mediator interleukin (IL)-1β, are typical of the disease. The opportunistic periodontal pathogen Aggregatibacter actinomycetemcomitans is usually abundant at sites of aggressive periodontitis. Despite potent host immune system responses to subgingival invaders, A. actinomycetemcomitans is able to resist clearance attempts. Moreover, some strains of A. actinomycetemcomitans can generate genetic diversity through natural transformation, which may improve the species' adjustment to the subgingival environment in the long term. Some biofilm-forming species are known to bind and sense human cytokines. In response to cytokines, bacteria may increase biofilm formation and alter the expression of their virulence genes. Specific outer membrane receptors for interferon-γ or IL-1β have been characterised in two Gram-negative pathogens. Because little is known about periodontal pathogens' ability to sense cytokines, we used A. actinomycetemcomitans as a model organism to investigate how the species responds to IL-1β. The main aims of this thesis were to explore cytokine binding in single-species A. actinomycetemcomitans biofilms and to determine the effects of cytokines on the biofilm formation and metabolic activity of the species. Additionally, the cytokine's putative internalisation and interaction with A. actinomycetemcomitans proteins were studied.
The possible impact of biofilm IL-1β sequestration on the proliferation and apoptosis of gingival keratinocyte cells was evaluated in an organotypic mucosa co-culture model. Finally, the role of the extramembranous domain of the outer membrane protein HofQ (emHofQ) in DNA binding linked to DNA uptake in A. actinomycetemcomitans was examined. Our main finding was that viable A. actinomycetemcomitans biofilms can bind and take up the IL-1β produced by gingival cells. At the sites of pathogen-host interaction, the proliferation and apoptosis of gingival keratinocytes decreased slightly. Notably, exposure of the biofilms to IL-1β caused their metabolic activity to drop, which may be linked to the observed interaction of IL-1β with the conserved intracellular proteins DNA-binding protein HU and the trimeric form of ATP synthase subunit beta. A Pasteurellaceae-specific lipoprotein with no previously determined function was characterized as an IL-1β-interacting membrane protein that was expressed in the biofilm cultures of all tested A. actinomycetemcomitans strains. The use of a subcellular localisation tool combined with experimental analyses suggested that the identified lipoprotein, bacterial interleukin receptor I (BilRI), may be associated with the outer membrane, with a portion of the protein oriented towards the external milieu. The results of the emHofQ study indicated that emHofQ has both the structural and the functional capability to bind DNA, implying that emHofQ plays a role in DNA assimilation. The results of the current study also demonstrate that this Gram-negative oral species appears to sense the central proinflammatory mediator IL-1β.

Relevance:

90.00%

Publisher:

Abstract:

Professions are a special category of occupations that possess exclusive rights over their domain of expertise. Professions apply expert knowledge in their work, using professional discretion and judgment to solve their clients' problems. With control over their expert knowledge base, professions are able to control the supply of practitioners in their field and regulate practice in their market. Professionalization is the process during which occupations attempt to gain the status of a profession. The benefits of becoming a profession are extensive: professional autonomy, social and financial rewards, prestige, status, and an exclusive community are only a few of the privileges that established professions possess. Many aspiring occupations have tried and failed to gain the status of a profession; one of these groups is the occupation of controllers in Finland. The objective of this study is to uncover why controllers have not professionalized, which properties of the occupation correspond to the elements generally regarded as pertaining to professions, and which aspects of the occupational group may hinder the professionalization project. The professionalization project of controllers is analyzed using a multi-actor model of professionalization, in which practitioners, clients, the state, training institutions, and employing organizations are considered to affect the project. The properties of the occupation of controllers are compared to features generally associated with professions. The research methodology of this thesis is qualitative, and the study is conducted as exploratory research. The data were primarily gathered through semi-structured interviews, conducted between March and May 2013 and lasting from 40 minutes to an hour. In total, four controllers were interviewed, who worked for different companies operating in different industries and whose experience of working as a controller ranged from a few years to nearly 15 years.
The data in this study indicate that although controllers possess qualities that distinguish professions from other occupational groups, the professionalization of controllers does not appear plausible. Controllers enjoy considerable autonomy in organizations, and they possess a strong orientation towards serving their clients. The more profound problem of the occupation is its non-exclusive, indistinct knowledge base. Controllers' expertise is relatively organization-specific and built on several different fields of knowledge rather than solely on management accounting, which could be considered their primary knowledge base. In addition, controllers have not organized themselves, which is an essential, but by no means sufficient, prerequisite for professionalization.

Relevance:

90.00%

Publisher:

Abstract:

The dissertation proposes two control strategies, covering trajectory planning and vibration suppression, for a kinematically redundant serial-parallel robot machine, with the aim of attaining satisfactory machining performance. For a given prescribed trajectory of the robot's end-effector in Cartesian space, a set of trajectories in the robot's joint space is generated based on the best stiffness performance of the robot along the prescribed trajectory. To construct the required system-wide analytical stiffness model for the serial-parallel robot machine, a variant of the virtual joint method (VJM) is proposed in the dissertation. The modified method is an evolution of Gosselin's lumped model that can account for the deformations of a flexible link in more directions. The effectiveness of this VJM variant is validated by comparing the computed stiffness results for a flexible link with those of a matrix structural analysis (MSA) method. The comparison shows that the numerical results from both methods for an individual flexible beam are almost identical, which, in some sense, provides mutual validation. The most prominent advantage of the presented VJM variant over the MSA method is that it can be applied to a flexible structure system with complicated kinematics formed from flexible serial links and joints. Moreover, by combining the VJM variant with the virtual work principle, a system-wide analytical stiffness model can easily be obtained for mechanisms with both serial and parallel kinematics. In the dissertation, a system-wide stiffness model of a kinematically redundant serial-parallel robot machine is constructed by integrating the VJM variant and the virtual work principle, and numerical results of its stiffness performance are reported.
For a kinematically redundant robot, to generate a set of feasible joint trajectories for a prescribed trajectory of its end-effector, the robot's system-wide stiffness performance is taken as the constraint in the joint trajectory planning. For a prescribed location of the end-effector, the robot permits an infinite number of inverse solutions, which consequently yield infinitely many stiffness performances. Therefore, a differential evolution (DE) algorithm, in which the positions of the redundant joints in the kinematics are taken as input variables, was employed to search for the best stiffness performance of the robot. Numerical results for the generated joint trajectories are given for a kinematically redundant serial-parallel robot machine, the IWR (Intersector Welding/Cutting Robot), for which a particular end-effector trajectory was prescribed. The numerical results show that the joint trajectories generated based on the stiffness optimization are feasible for realization in the control system, since they are acceptably smooth. The results imply that the stiffness performance of the robot machine varies smoothly with respect to the kinematic configuration in the neighbourhood of its best stiffness performance. To suppress the vibration of the robot machine caused by the varying cutting force during the machining process, the dissertation proposes a feedforward control strategy constructed on the basis of the derived inverse dynamics model of the target system. The effectiveness of applying such feedforward control to vibration suppression has been validated on a parallel manipulator in a software environment, and an experimental study of the feedforward control is also included in the dissertation. The difficulty of modelling the actual system, due to the unknown components in its dynamics, is noted.
As a solution, a back-propagation (BP) neural network is proposed for the identification of the unknown components of the dynamics model of the target system. To train such a BP neural network, a modified Levenberg-Marquardt algorithm that can utilize an experimental input-output data set of the entire dynamic system is introduced in the dissertation. The BP neural network and the modified Levenberg-Marquardt algorithm are validated, respectively, on a sinusoidal output approximation, a second-order system parameter estimation, and a friction model estimation for a parallel manipulator, which represent three different application aspects of this method.
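The DE search over redundant-joint positions can be sketched with the classic DE/rand/1/bin scheme. The cost function below is a simple surrogate with a known optimum, standing in for the dissertation's system-wide stiffness model; the parameter values are illustrative assumptions:

```python
import random

def differential_evolution(cost, bounds, pop_size=20, gens=100,
                           f=0.8, cr=0.9, seed=1):
    """Minimal DE/rand/1/bin sketch.

    In the dissertation's setting, `cost` would evaluate the robot's
    stiffness performance for candidate redundant-joint positions; here
    a quadratic surrogate replaces the real stiffness model.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutate three distinct individuals, then crossover per dimension
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [
                min(max(a[d] + f * (b[d] - c[d]), bounds[d][0]), bounds[d][1])
                if rng.random() < cr else pop[i][d]
                for d in range(dim)
            ]
            if cost(trial) < cost(pop[i]):  # greedy selection
                pop[i] = trial
    return min(pop, key=cost)

# Surrogate "compliance" (to minimize) with its optimum at (1.0, -0.5)
compliance = lambda q: (q[0] - 1.0) ** 2 + (q[1] + 0.5) ** 2
best = differential_evolution(compliance, [(-2, 2), (-2, 2)])
print(best)
```

DE suits this problem because the mapping from redundant-joint positions to stiffness is evaluated numerically, with no gradient available.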

Relevance:

90.00%

Publisher:

Abstract:

The implementation of smart homes in residential buildings promises to optimize energy usage and save a significant amount of energy simply through a better understanding of users' energy usage profiles. Apart from its energy optimisation prospects, this technology also aims to guarantee occupants a significant degree of comfort and remote control over home appliances, both at home and at remote locations. However, a smart home investment, just like any other kind of investment, requires an adequate measurement and justification of the economic gains it could offer before its realization. These economic gains can differ between occupants because of their inherent behaviours and tendencies. It is therefore pertinent to investigate the various behaviours and tendencies of occupants in different domains of interest, and to measure the value of the energy savings accrued by smart home implementations in these domains, in order to justify such economic gains. This thesis investigates two domains of interest (the rented apartment and the owned apartment) for two behavioural profiles (Finnish and German), obtained from observation and corroborated by interviews, to measure the payback time and return on investment (ROI) of their smart home implementations. Similar measures are also obtained for an identified Australian use case. The research findings reveal that building automation under the Finnish behavioural profile appears to offer a better ROI and payback time for smart home implementations.
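The two headline measures can be computed with the standard undiscounted definitions: payback time is the investment divided by the annual savings, and ROI is the net gain over a horizon divided by the investment. The figures below are hypothetical, not the thesis's measured data:

```python
def payback_and_roi(investment, annual_savings, horizon_years):
    """Undiscounted payback time (years) and ROI for a smart home investment.

    Standard textbook definitions; the thesis may apply discounting or
    behaviour-specific savings profiles on top of these basics.
    """
    payback = investment / annual_savings            # years to recoup cost
    total_savings = annual_savings * horizon_years   # savings over horizon
    roi = (total_savings - investment) / investment  # fractional net return
    return payback, roi

# Hypothetical example: a 4000 EUR system saving 500 EUR/year over 15 years
payback, roi = payback_and_roi(4000, 500, 15)
print(payback)          # → 8.0 (years)
print(round(roi, 3))    # → 0.875 (87.5% return over the horizon)
```

Because the savings term depends on the occupant's usage profile, the same installation yields different payback times and ROIs for, say, the Finnish and German behavioural profiles studied in the thesis.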