59 results for Particle-based Model
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The objective of this master’s thesis was to develop a model for mobile subscription acquisition cost (SAC) and mobile subscription retention cost (SRC) by applying activity-based costing principles. The thesis was conducted as a case study for a telecommunication company operating on the Finnish telecommunication market. In addition to activity-based costing, other theories were studied and applied in order to establish a theoretical framework for the thesis. The concepts of acquisition and retention were explored in the broader context of customer satisfaction, loyalty, profitability and, eventually, customer relationship management in order to understand the background and significance of the topic. The utilization of SAC and SRC information is discussed through the theories of decision making and activity-based management. The present state and future needs of SAC and SRC information usage at the case company, as well as the functions of the company, were also examined by interviewing members of the company personnel. With the help of these theories and methods, the aim was to identify both the theory-based and the practical factors that affect the structure of the model. During the study it was confirmed that the existing SAC and SRC model of the case company should be used as the basis for developing the activity-based model. As a result, the indirect costs of the old model were transformed into activities, while the direct costs continued to be allocated directly to the acquisition of new subscriptions and the retention of old ones. The refined model enables better management of subscription acquisition, retention and the related costs through the activity information. The interviews revealed that SAC and SRC information is also used in performance measurement and in operational and strategic planning. SAC and SRC are not fully absorbed costs, and it was concluded that the model serves best as a source of indicative cost information. This thesis does not include cost calculations; instead, the refined model, together with the theory-based and interview findings concerning the utilization of the information it produces, will serve as a framework for possible future development aimed at completing the model.
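To illustrate the allocation principle such a refined model applies, the sketch below assigns indirect cost pools to cost objects in proportion to activity-driver consumption. It is a minimal, generic activity-based costing example in Python; the activities, drivers and figures are hypothetical and do not come from the case company.

# Indirect cost pools per activity (EUR); hypothetical figures.
activity_costs = {"order_handling": 120_000, "campaigns": 80_000}

# Driver volumes consumed by each cost object (acquisition vs. retention).
driver_volumes = {
    "order_handling": {"acquisition": 9_000, "retention": 3_000},  # orders
    "campaigns":      {"acquisition": 40,    "retention": 10},     # campaigns run
}

def allocate(activity_costs, driver_volumes):
    """Allocate each activity's cost in proportion to driver consumption."""
    allocated = {"acquisition": 0.0, "retention": 0.0}
    for activity, cost in activity_costs.items():
        volumes = driver_volumes[activity]
        total = sum(volumes.values())
        for cost_object, volume in volumes.items():
            allocated[cost_object] += cost * volume / total
    return allocated

print(allocate(activity_costs, driver_volumes))
# {'acquisition': 154000.0, 'retention': 46000.0}

Direct costs, as in the model described above, would simply be added to each cost object without passing through the activity pools.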
Abstract:
New luminometric particle-based methods were developed to quantify protein and to count cells. The developed methods rely on the interaction of the sample with nano- or microparticles and on different principles of detection. In the fluorescence quenching, time-resolved luminescence resonance energy transfer (TR-LRET), and two-photon excitation fluorescence (TPX) methods, the sample prevents the adsorption of labeled protein to the particles. Depending on the system, the addition of the analyte increases or decreases the luminescence. In the dissociation method, the adsorbed protein protects the Eu(III) chelate on the surface of the particles from dissociation at low pH. The experimental setups are user-friendly and rapid and require neither hazardous test compounds nor elevated temperatures. The sensitivity of the protein quantification (from 40 to 500 pg of bovine serum albumin in a sample) was 20-500-fold better than that of the most sensitive commercial methods. The quenching method exhibited low protein-to-protein variability, and the dissociation method was insensitive to the assay contaminants commonly found in biological samples. Fewer than ten eukaryotic cells were detected and quantified with all the developed methods under optimized assay conditions. Furthermore, two applications, a method for detecting protein aggregation and a cell viability test, were developed utilizing the TR-LRET method. Protein aggregation was detected at a concentration (30 μg/L) more than 10,000 times lower than with the known methods of UV240 absorbance and dynamic light scattering. The TR-LRET method was combined with a nucleic acid assay using a cell-impermeable dye to measure the percentage of dead cells in a single-tube test with cell counts below 1000 cells/tube.
Abstract:
The aim of this study is to test the accrual-based model suggested by Dechow et al. (1995) in order to detect and compare earnings management practices in Finnish and French companies. The impact of the 2008 financial crisis on earnings management behavior in these countries is also tested by dividing the overall period of 2003-2012 into two sub-periods: pre-crisis (2003-2008) and post-crisis (2009-2012). The results support the idea that companies in both countries engage in significant earnings management. During the post-crisis period, companies in Finland show income-inflating practices, while in France the opposite tendency (income deflating) is observed during the same period. Results concerning the assumption that managers in highly concentrated companies engage in income-enhancing practices differ between the two countries: while in Finland managers try to show better performance for bonuses or other contractual compensation motivations, in France they seek to avoid paying dividends or high taxes.
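The accrual-based model of Dechow et al. (1995), the modified Jones model, is conventionally estimated as follows (the standard formulation from the literature; the thesis's exact specification is not quoted in the abstract):

\frac{TA_{it}}{A_{i,t-1}} = \alpha_1 \frac{1}{A_{i,t-1}} + \alpha_2 \frac{\Delta REV_{it} - \Delta REC_{it}}{A_{i,t-1}} + \alpha_3 \frac{PPE_{it}}{A_{i,t-1}} + \varepsilon_{it}

where TA denotes total accruals, A total assets, \Delta REV the change in revenues, \Delta REC the change in receivables, and PPE gross property, plant and equipment; the residual \varepsilon_{it} serves as the proxy for discretionary accruals, i.e. earnings management.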
Abstract:
Building a computational model for complex biological systems is an iterative process. It starts from an abstraction of the process and then incorporates more details regarding the specific biochemical reactions, which changes the model fit. Meanwhile, the model’s numerical properties, such as its numerical fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. There is an alternative approach, known as quantitative model refinement, which ensures that the model fit is preserved without the need to refit the model after each refinement iteration. The aim of this thesis is to develop and implement a tool called ModelRef, which performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as a component of the Anduril framework. ModelRef performs data refinement of a model and generates the results in two well-known formats (SBML and CPS). The tool substantially reduces the time and resources needed, as well as the errors generated, by the traditional reiteration of the whole model to perform the fitting procedure.
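For context, SBML models of the kind ModelRef emits can be read and written with the widely used python-libsbml bindings. The sketch below is a minimal, hypothetical illustration of handling such a file; it does not show ModelRef's own (Java) API, which the abstract does not describe.

import libsbml

# Read an SBML model, inspect its species, and write a copy back out.
# File names are hypothetical.
doc = libsbml.readSBML("model.xml")
if doc.getNumErrors() > 0:
    doc.printErrors()
model = doc.getModel()
for i in range(model.getNumSpecies()):
    species = model.getSpecies(i)
    print(species.getId(), species.getInitialConcentration())
libsbml.writeSBMLToFile(doc, "model_refined.xml")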
Abstract:
In this work an agent-based model (ABM) was proposed using the main idea of the Jabłonska-Capasso-Morale (JCM) model and the concept of maximized greediness. Using a multi-agent simulator, the power of the ABM was assessed with the historical prices of silver from 01.03.2000 to 01.03.2013. The model results, analysed in two different situations, with and without maximized greediness, proved that the ABM is capable of explaining the silver price dynamics even in extreme events. The ABM without maximized greediness explained the prices with more irrational decisions, whereas the ABM with maximized greediness tracked the price movements with more rational decisions. In the comparison test, the model without maximized greediness proved best at capturing the silver market dynamics. The proposed ABM therefore supports the suggested reasons for financial crises or market failures: it indicates that an economic or financial collapse may be stimulated by both rational and irrational decisions, yet irrationality may dominate the market.
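As a rough illustration of the mechanism such a model uses (not the JCM specification, whose equations the abstract does not give), the sketch below lets agents' buy/sell decisions move a price through aggregate excess demand, with a hypothetical greediness parameter scaling how strongly agents chase the recent trend.

import random

def simulate(n_agents=100, n_steps=500, greediness=0.5, seed=1):
    rng = random.Random(seed)
    prices = [10.0, 10.0]              # seed with two prices so a trend exists
    for _ in range(n_steps):
        trend = prices[-1] - prices[-2]
        # Each agent buys (+1) or sells (-1); the buy probability is tilted
        # by the recent trend, scaled by the greediness parameter.
        demand = sum(1 if rng.random() < 0.5 + greediness * trend else -1
                     for _ in range(n_agents))
        # The price moves with aggregate excess demand plus small noise.
        prices.append(prices[-1] * (1 + 0.001 * demand / n_agents
                                    + rng.gauss(0, 0.005)))
    return prices

print(simulate(greediness=0.8)[-1])

Sweeping greediness between 0 and 1 and comparing the simulated path against historical prices mirrors, in spirit, the with/without comparison described in the abstract.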
Abstract:
In this Master’s thesis, agent-based modeling is used to analyze maintenance strategy related phenomena. The main research question was: what does the agent-based model built for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? The main outcome of this study is thus an analysis of how profitability can be increased in the industrial maintenance context. To answer the question, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was first conducted. This review provided the basis for building the agent-based model, which followed a standard simulation modeling procedure. The simulation results from the agent-based model then answered the research question. Specifically, the results of the modeling and this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine and, under certain conditions, also for the maintainer; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, if the owners of the machines share a substantial amount of their value with the maintainers; and (4) error in machine condition measurement is a critical parameter in optimizing maintenance strategy, and there is real systemic value in more accurate machine condition measurement systems.
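Finding (1) echoes the classic age-replacement trade-off; as a reference point (a textbook formulation, not the thesis's own model), the long-run cost rate of maintaining a machine preventively at age T is

C(T) = \frac{c_p\,R(T) + c_f\,[1 - R(T)]}{\int_0^T R(t)\,dt}

where R(t) is the machine's survival function, c_p the cost of preventive maintenance, and c_f the (higher) cost of a failure; the optimal maintenance point is the T that minimizes C(T), and measurement error in the machine's condition effectively blurs R(t), which is why finding (4) identifies it as critical.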
Abstract:
Traditionally, limestone has been used for flue gas desulfurization in fluidized bed combustion. Recently, several studies have examined the use of limestone in applications that enable the removal of carbon dioxide from the combustion gases, such as calcium looping technology and oxy-fuel combustion. In these processes interlinked limestone reactions occur, but the reaction mechanisms and kinetics are not yet fully understood. To examine these phenomena, analytical and numerical models have been created. In this work, the limestone reactions were studied with the aid of a one-dimensional numerical particle model. The model describes a single limestone particle in the process as a function of time: the progress of the reactions and the mass and energy transfer within the particle. The model-based results were compared with experimental laboratory-scale bubbling fluidized bed (BFB) results. It was observed that increasing the temperature from 850 °C to 950 °C enhanced the calcination, but the sulfate conversion was not improved further. A higher sulfur dioxide concentration accelerated the sulfation reaction, and based on the modeling, the sulfation is first order with respect to SO2. The reaction order of O2 appears to approach zero at high oxygen concentrations.
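The reported reaction orders can be summarized as a power-law rate expression (a compact restatement of the abstract's finding, not a formula quoted from the thesis):

r_{\mathrm{sulf}} = k\,C_{\mathrm{SO_2}}\,C_{\mathrm{O_2}}^{\,n}, \qquad n \to 0 \ \text{at high } C_{\mathrm{O_2}}

so that at high oxygen concentrations the sulfation rate depends effectively on the SO2 concentration alone.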
Abstract:
This study examines the structure of the Russian Reflexive Marker (-ся/-сь) and offers a usage-based model building on Construction Grammar and a probabilistic view of linguistic structure. Traditionally, reflexive verbs are accounted for relative to non-reflexive verbs. These accounts assume that linguistic structures emerge as pairs, and they assume a directionality whereby the semantics and structure of a reflexive verb can be derived from the non-reflexive verb. However, this directionality does not necessarily hold diachronically. Additionally, the semantics and the patterns associated with a particular reflexive verb are not always shared with the non-reflexive verb. Thus, a model is proposed that can accommodate the traditional pairs as well as the possible deviations without postulating different systems. A random sample of 2000 instances marked with the Reflexive Marker was extracted from the Russian National Corpus, and the sample used in this study contains 819 unique reflexive verbs. This study moves away from the traditional pair account and introduces the concept of the Neighbor Verb. A neighbor verb exists for a reflexive verb if the two share the same phonological form excluding the Reflexive Marker. It is claimed here that the Reflexive Marker constitutes a system in Russian and that the relation between reflexive and neighbor verbs is a cross-paradigmatic relation. Furthermore, the relation between the reflexive verb and the neighbor verb is argued to be one of symbolic connectivity rather than directionality. Effectively, the relation holding between particular instantiations can vary. The theoretical basis of the present study builds on this assumption. Several new variables are examined in order to systematically model the variability of this symbolic connectivity, specifically the degree and strength of connectivity between items. In usage-based models, the lexicon does not constitute an unstructured list of items. Instead, items are assumed to be interconnected in a network. This interconnectedness is defined as Neighborhood in this study. Additionally, each verb carves its own niche within the Neighborhood, and this interconnectedness is modeled through rhyme verbs, which constitute the degree of connectivity of a particular verb in the lexicon. The second component of the degree of connectivity concerns the status of a particular verb relative to its rhyme verbs. The connectivity within the neighborhood of a particular verb varies, and this variability is quantified using the Levenshtein distance. The second property of the lexical network is the strength of connectivity between items. Frequency of use has been one of the primary variables used in functional linguistics to probe this. In addition, a new variable called Constructional Entropy is introduced in this study, building on information theory. It is a quantification of the amount of information carried by a particular reflexive verb in one or more argument constructions. The results of the lexical connectivity indicate that the reflexive verbs have statistically greater neighborhood distances than the neighbor verbs. This distributional property can be used to motivate the traditional observation that reflexive verbs tend to have idiosyncratic properties. A set of argument constructions, generalizations over usage patterns, is proposed for the reflexive verbs in this study.
In addition to the variables associated with lexical connectivity, a number of variables proposed in the literature are explored and used as predictors in the model. The second part of this study introduces the use of a machine learning algorithm called Random Forests. The performance of the model indicates that it is capable, to a degree, of disambiguating the proposed argument construction types of the Russian Reflexive Marker. Additionally, a global ranking of the predictors used in the model is offered. Finally, most construction grammars assume that argument constructions form a network structure. A new method is proposed that establishes a generalization over the argument constructions, referred to as the Linking Construction. In sum, this study explores the structural properties of the Russian Reflexive Marker and sets forth a new model that can accommodate both the traditional pairs and potential deviations from them in a principled manner.
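The two lexical-connectivity measures lend themselves to a compact sketch. Below, the Levenshtein distance quantifies how far a verb lies from its neighbors, and Constructional Entropy is read as the Shannon entropy of a verb's distribution over argument constructions, which is one plausible interpretation of the abstract's description; the example verb and counts are hypothetical.

import math
from collections import Counter

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def constructional_entropy(construction_counts):
    """Shannon entropy (bits) of a verb's distribution over argument
    constructions; higher values mean usage is spread more evenly."""
    total = sum(construction_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in construction_counts.values() if c > 0)

print(levenshtein("мыться", "мыть"))   # 2: the Reflexive Marker -ся
print(constructional_entropy(Counter({"intransitive": 50,
                                      "oblique": 30,
                                      "passive": 20})))  # ~1.49 bits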
Abstract:
The three alpha2-adrenoceptor (alpha2-AR) subtypes belong to the G protein-coupled receptor superfamily and represent potential drug targets. These receptors have many vital physiological functions, but their actions are complex and often oppose each other. Current research is therefore driven towards discovering drugs that selectively interact with a specific subtype. Cell model systems can be used to evaluate a chemical compound's activity in complex biological systems. The aim of this thesis was to optimize and validate cell-based model systems and assays to investigate alpha2-ARs as drug targets. The use of immortalized cell lines as model systems is firmly established but poses several problems, since the protein of interest is expressed in a foreign environment, and thus essential components of receptor regulation or signaling cascades might be missing. Careful cell model validation is thus required; this was exemplified by three different approaches. In cells heterologously expressing alpha2A-ARs, it was noted that the transfection technique affected the test outcome: false negative adenylyl cyclase test results were produced unless a cell population expressing the receptors in a homogeneous fashion was used. Recombinant alpha2C-ARs in non-neuronal cells were retained inside the cells rather than being expressed at the cell membrane, complicating investigation of this receptor subtype. Receptor expression enhancing proteins (REEPs) were found to be neuron-specific adapter proteins that regulate the processing of the alpha2C-AR, resulting in an increased level of total receptor expression. Current trends call for the use of primary cells endogenously expressing the receptor of interest; therefore, primary human vascular smooth muscle cells (SMC) expressing alpha2-ARs were tested in a functional assay monitoring contractility with a myosin light chain phosphorylation assay. However, these cells were not compatible with this assay due to the loss of differentiation. A rat aortic SMC line transfected to express the human alpha2B-AR was adapted for the assay, and it was found that the alpha2-AR agonist dexmedetomidine evoked myosin light chain phosphorylation in this model.
Abstract:
The main objective of this Master’s thesis is to develop a cost allocation model for a leading food industry company in Finland. The goal is to develop an allocation method for the fixed overhead expenses produced in a specific production unit and to create a plausible tracking system for product costs. The second objective is to construct the allocation model and then modify it to suit other units as well. Costs, activities, drivers and appropriate allocation methods are studied. The thesis starts with a literature review of existing activity-based costing (ABC) theory and an inspection of cost information, followed by interviews with company officials to form a general view of the requirements for the model to be constructed. Familiarization with the company began with its existing cost accounting methods. The main proposals for the new allocation model emerged from the interviews and were used in setting targets for developing the new allocation method. As a result of this thesis, an Excel-based model was created on the basis of the theoretical and empirical data. The new system is able to handle overhead costs in more detail, improving cost awareness, the transparency of cost allocations, and the accuracy of products’ cost structures. The improved cost awareness is achieved by selecting the best possible cost drivers for this situation. Capacity changes are also taken into consideration: using practical or normal capacity instead of theoretical capacity is suggested. Finally, some recommendations for further development are made concerning capacity handling and cost collection.
Abstract:
The aim of this Master’s thesis was to develop the supply chain of a process industry company with respect to deliveries to its main customer. The problems were suboptimal grade changes in production and the inefficiency of the supply chain’s distribution operations due to the poor predictability of the customer’s order volumes. The work focused on production grade-change and distribution operations and on determining their costs by means of activity-based costing. The main objective was to model an Excel-based tool suitable for the rough-cut planning of production grade changes, transports and distribution warehousing, in order to match requirements with capacity and to determine the aforementioned costs. The tool can also be used for analysis by simulating costs under different operating alternatives and examining how the costs relate to one another. With the tool, the cost effects of production frequency, transport batch size and the number of production batches were calculated. A prerequisite for using the tool in planning and analysis is weekly demand forecasts, and to obtain these, closer cooperation with the customer is proposed. In the future, the tool can also be used to help define the input data of a certain information system.
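The batch-size trade-off such a tool quantifies is classically captured by the economic order quantity (a standard textbook result, stated here for reference rather than taken from the thesis):

Q^{*} = \sqrt{\frac{2DS}{H}}

where D is demand per period, S the fixed cost per changeover or shipment, and H the holding cost per unit per period; larger batches spread the fixed cost over more units but increase warehousing cost, which is the balance the tool simulates.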
Abstract:
The aim of this thesis is to describe the different areas of the banks’ capital adequacy reform. The detailed analysis is limited to the changes the reform brings to the capital requirements for credit risk and operational risk. The aim of the empirical part is to examine the effects of the capital adequacy reform at Nordea. A normative research approach was chosen as the methodology; in addition, the thesis contains descriptive and positivist parts. The source material consists of studies and documents published by the Basel Committee on Banking Supervision and the Bank of Finland, as well as articles published in the field. The aim of the capital adequacy reform is to increase the stability of financial markets. Through regulation, the adequacy of banks’ funds relative to their risk-taking is to be secured. The reform consists of three pillars: (1) minimum capital requirements, (2) strengthened banking supervision, and (3) the use of market discipline through increased disclosure requirements for credit institutions. There is international consensus on the harmonization of banking supervision, but several problems remain unsolved before Basel II can enter into force. The Basel capital adequacy framework is not the only reform testing the banking sector in the near future: the international accounting standards (International Accounting Standards, in particular IAS 39, and the International Financial Reporting Standards, IFRS) will significantly change banks’ financial reporting behavior. It is still unclear, however, whether the reforms support each other and to what extent banks’ earnings volatility is expected to increase. The thesis discusses the benefits of the capital adequacy reform for international competitive neutrality, since in the United States the reform applies only to the largest banks. It also addresses the reform’s possible procyclical effect on economic cycles and examines proposed improvements for containing procyclicality. One of the most important tasks of the reform is to create an incentive for banks to develop their own risk management models. The incentive problem has been addressed by means of the less constrained internal models approaches, but it has not been solved entirely, since for credit risk the structure of a bank’s loan portfolio determines whether the bank benefits from adopting the internal models approach. The thesis also includes Nordea’s assessment of the effects of the capital adequacy reform on the banking sector.
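Pillar 1's minimum requirement is conventionally written as follows (the standard Basel II formulation, stated here for reference rather than quoted from the thesis):

\frac{\text{Regulatory capital}}{\text{RWA}_{\text{credit}} + 12.5\,(K_{\text{market}} + K_{\text{operational}})} \geq 8\%

where RWA_credit denotes credit risk-weighted assets and K_market and K_operational the capital charges for market and operational risk; the factor 12.5, the reciprocal of 8%, converts the charges into risk-weighted asset equivalents.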
Abstract:
The topic of this Master’s thesis was to study the case company’s process development and process management, and its local area network (LAN) service chain from a process perspective. The main objectives of the study were to find out how the operation of the LAN service chain could be made more efficient and what one-time costs arise from a single average delivery. Further objectives were to determine the state of the company’s process development, which process management models the company uses, and how its main processes are managed. The study was a case study, examining the operations and processes of a single company. The theoretical basis for the study was created by covering the core definitions related to business processes, different process management models, and the measurement of business processes. The basic elements of data collection were literature and interviews; in addition, the study relied strongly on the researcher’s own observations in the company and its operating environment. In the first phase of the study, the current state of the LAN service chain was mapped, the processes within the service chain were analyzed, and the observed problem areas were recorded. Based on the information collected from the interviews, the one-time costs incurred by the case company from a single average LAN delivery were calculated. In addition, the company’s current level of process management and process development was analyzed and mirrored against its future process development plans. Finally, through the conclusions, two proposals were presented for improving the operation of the service chain. The first proposal was the more radical one: moving away from the service chain model entirely to a team-based model. The second proposal focused on fixing the problem areas of the service chain.
Abstract:
The purpose of this work was to create a comprehensive production simulation model of a plywood mill and to study the possibilities of cost accounting in connection with the model. The basic assumption is that if the production model behaves like its real-world counterpart, the cost information calculated with it can also be trusted. As an introduction, the theoretical sources underlying the work are reviewed. First, the plywood manufacturing process and the line and machine types used in it are presented. Second, the principles, laws and possibilities of simulation research are presented. In addition, cost accounting, its different principles and forms, and production and inventory control are examined. As material and methods, the collection and application of the baseline data needed to create the simulation model are presented. The components representing the different production lines that make up the mill model, as well as the model’s user interface, are then described. Finally, the adaptation of the theoretical mill model to correspond to a real plywood mill is described. Three different daily production runs were executed with the adapted model, and the cost information obtained from them was compared with a table describing the optimal situation.
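As an indication of how such a production model can be built, the sketch below uses the SimPy discrete-event library to run a two-stage line with a running cost tally. It is a generic, hypothetical illustration; the thesis's simulator, its line components and all figures here are not taken from the work.

import simpy

COSTS = {"peeling": 0.0, "pressing": 0.0}

def stage(env, name, proc_time, unit_cost, src, dst):
    # Pull an item, process it for proc_time, book the cost, pass it on.
    while True:
        item = yield src.get()
        yield env.timeout(proc_time)
        COSTS[name] += unit_cost
        yield dst.put(item)

env = simpy.Environment()
raw, wip, done = (simpy.Store(env) for _ in range(3))
for i in range(100):
    raw.put(f"sheet-{i}")               # 100 veneer sheets waiting
env.process(stage(env, "peeling", 2.0, 0.8, raw, wip))
env.process(stage(env, "pressing", 3.0, 1.2, wip, done))
env.run(until=8 * 60)                   # one 8-hour shift, in minutes
print(len(done.items), "sheets finished; costs:", COSTS)

Comparing such per-run cost tallies across alternative production runs corresponds, in miniature, to the comparison against the optimal-situation table described above.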
Abstract:
Modern sophisticated telecommunication devices require ever more comprehensive testing to ensure quality. The number of test cases needed to ensure sufficient testing coverage has increased rapidly, and this demand can no longer be met by manual testing alone. New agile development models also require the execution of all test cases with every iteration. This has led manufacturers to use test automation more than ever to achieve adequate testing coverage and quality. This thesis is divided into three parts. The first part begins with the evolution of cellular networks and then examines software testing, test automation, and the influence of the development model on testing. The second part describes the process that was used to implement a test automation scheme for functional testing of the LTE core network MME element; agile development models and the Robot Framework test automation tool were used in the implementation. The third part presents two alternative models for integrating this test automation scheme into a continuous integration process. As a result, the test automation scheme for functional testing was implemented, and almost all new functional-level test cases can now be automated with it. In addition, two models were introduced for integrating the scheme into a wider continuous integration pipeline. The shift in testing from a traditional waterfall model to a new agile development based model also proved successful.
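Robot Framework suites can be launched programmatically, which is one way to attach such a scheme to a continuous integration step. The sketch below uses Robot Framework's public Python entry point; the suite path, output directory and tag are hypothetical, and the thesis's actual pipeline configuration is not described in the abstract.

import sys
from robot import run

# Run all suites under a directory, keeping only cases tagged 'mme'.
rc = run(
    "tests/functional",        # directory of .robot suites (hypothetical)
    outputdir="results",       # where log.html, report.html, output.xml go
    include=["mme"],           # hypothetical tag selecting MME test cases
)
sys.exit(rc)                   # non-zero (= number of failures) fails the CI job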