17 results for attempt to obtain disclosure
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Silencing of tartrate-resistant acid phosphatase by RNAi: an unexpected effect in cells of the monocyte-macrophage lineage. RNA interference (RNAi), or RNA silencing, was first discovered in plants, and in the 2000s the RNAi method was adopted for mammalian cells as well. RNAi is a mechanism in which short double-stranded RNA molecules, siRNAs, bind to a protein complex and pair complementarily with the protein-coding messenger RNA, catalysing the degradation of that messenger RNA. As a result, the protein encoded by the mRNA is not produced in the cell. In this work, a new siRNA design algorithm, siRNA_profile, was developed to support the RNAi method; it searches the messenger RNA for target regions suitable for gene silencing. With an optimally designed siRNA molecule it may be possible to achieve long-lasting gene silencing and a specific reduction in the amount of the target protein in the cell. Various chemical modifications of the ribose ring of the siRNA molecule, such as the 2'-fluoro modification, increased the stability of the siRNA in blood plasma as well as its efficacy. These are important properties of siRNA molecules when the RNAi method is applied for medical purposes. Tartrate-resistant acid phosphatase (TRACP) is an enzyme found in bone-resorbing cells (osteoclasts), in antigen-presenting dendritic cells and in the macrophages, or phagocytes, of various tissues. The biological function of the TRACP enzyme has not been established, but it is assumed that the ability of TRACP to produce reactive oxygen species plays a role both in bone-resorbing osteoclasts and in antigen-presenting dendritic cells. Macrophages that overexpress TRACP also show increased intracellular production of reactive oxygen species and an enhanced capacity to kill bacteria. Contrary to expectations, specific DNA and siRNA molecules designed to silence the TRACP gene increased TRACP production in a monocyte-macrophage cell culture model. The effect of DNA and RNA molecules on the increase in TRACP production was also studied in monocyte-macrophage cells isolated from Toll-like receptor 9 (TLR9) knockout mice. The increase in TRACP production was found to be a sequence- and TLR9-independent response to extracellular DNA and RNA molecules. The observed increase in TRACP production suggests that the TRACP enzyme has a function in the cell's immune defence system.
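The abstract does not disclose the selection criteria used by siRNA_profile, so the sketch below is only an illustrative stand-in: a minimal Python scan of an mRNA sequence for fixed-length candidate target sites filtered by a placeholder GC-content rule. Real design tools apply many further rules (thermodynamic asymmetry, off-target screening) that are omitted here.

    # Hypothetical sketch only: scan an mRNA sequence for fixed-length candidate
    # siRNA target sites, filtered by a placeholder GC-content rule. The real
    # selection criteria of siRNA_profile are not described in the abstract.

    def gc_content(seq: str) -> float:
        return (seq.count("G") + seq.count("C")) / len(seq)

    def candidate_sites(mrna: str, length: int = 19,
                        gc_min: float = 0.30, gc_max: float = 0.60):
        """Yield (position, target site) pairs that pass the placeholder filters."""
        mrna = mrna.upper().replace("U", "T")  # work in the DNA alphabet
        for i in range(len(mrna) - length + 1):
            site = mrna[i:i + length]
            if gc_min <= gc_content(site) <= gc_max:
                yield i, site

    if __name__ == "__main__":
        example_mrna = "AUGGCUACGUUAGCCGAUACGGAUUACGCUAGCUAGGCUAA"  # made-up sequence
        for pos, site in candidate_sites(example_mrna):
            print(pos, site)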
Abstract:
The effect of aerosols on radiative transfer in the atmosphere is studied, among other reasons, because of their climate-relevant properties and because of various defence-related applications. This work focused on the attenuation of infrared radiation caused by aerosols along a horizontal path. Measured attenuation values were compared with modelled ones, and reasons for the differences between them were sought. The first part of the work reviews the typical properties of aerosols and the interaction between radiation and aerosols. After this, the instruments used in the measurements are presented, together with the software and data processing used in the modelling. The behaviour of the measured and modelled aerosol extinction coefficients was examined with respect to different weather parameters (visibility, relative humidity and temperature). The measured size distributions were also examined and compared with those used in the modelling. Owing to instrument uncertainties and the small number of usable measurement results, full confidence in the accuracy of the measurements was not achieved. Despite the measurement uncertainty, it became clear that the measured size distributions do not fully correspond to the size distributions used by the model. In the future, the measurements must be continued so that enough valid results are available for data analysis and the reliability of the instruments can be established.
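For reference, and not as the thesis' own formulation, the standard Beer-Lambert relation connects the aerosol extinction coefficient to the transmittance of radiation over a horizontal path of length L:

    \tau(L) = \exp\!\left(-\sigma_{\mathrm{ext}} L\right), \qquad \sigma_{\mathrm{ext}} = \sigma_{\mathrm{sca}} + \sigma_{\mathrm{abs}}

where sigma_ext is the extinction coefficient (per unit length) and sigma_sca and sigma_abs are its scattering and absorption contributions.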
Abstract:
Genetic susceptibility to type 1 diabetes in Finland - the role of the non-HLA susceptibility loci IDDM2 and IDDM9 in the inheritance of the disease. The HLA region, located on chromosome 6p21.3, accounts for about half of the genetic susceptibility to type 1 diabetes. Loci outside the HLA region have also been found to be associated with disease susceptibility. Of these, three loci have been confirmed as true susceptibility loci, and several other, as yet unconfirmed, loci have been reported to be associated with susceptibility. In this study, the linkage of 12 non-HLA susceptibility loci to type 1 diabetes was examined using 107 Finnish multiplex families. In a follow-up study, the linkage and association of the IDDM9 region with the disease were analysed in extended family materials, as well as the possible interaction of the IDDM2 region with the HLA region in the development of the disease. In addition, subtyping of the protective haplotypes of the IDDM2 region was performed in order to examine the usefulness of the different haplotypes for more precise prediction of disease risk. The first linkage study found no genome-wide significant or suggestive linkage at the non-HLA loci examined. The strongest linkage reaching nominal significance was seen with the IDDM9 region marker D3S3576 (MLS=1.05). The study could neither confirm nor exclude previous linkage findings at the loci examined, but the strong linkage (MLS=3.4) and significant association (TDT p=0.0002) observed in the follow-up study of the IDDM9 region strongly suggest that a true type 1 diabetes susceptibility gene lies in the 3q21 region, so a comprehensive association study of the region would be a well-founded next step. The predisposing IDDM2 region MspI-2221 genotype CC was nominally more common in diabetic patients with a low or moderate HLA risk than in patients with a high HLA risk (p=0.05). Comparison of the genotype distributions also showed a significant difference between the groups (p=0.01). The VNTR haplotype study showed that the protective effect of the IIIA/IIIA homozygote is significantly stronger than that of the other class III genotypes. These results point to an IDDM2-HLA interaction and to etiological heterogeneity among the IDDM2 region haplotypes. Consequently, more precise typing of the IDDM2 region haplotypes could improve risk assessment for type 1 diabetes.
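As background to the reported TDT p-value (the abstract itself gives only the result), the standard transmission disequilibrium test compares how often heterozygous parents transmit the tested allele to affected children versus how often they do not:

    \chi^{2}_{\mathrm{TDT}} = \frac{(b - c)^{2}}{b + c}

where b and c are the counts of transmissions and non-transmissions of the allele; the statistic is compared against a chi-squared distribution with one degree of freedom.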
Abstract:
The jet-to-wire speed ratio relates the velocity of the headbox slice jet to the velocity of the wire. It has a strong effect on the final properties of paper and board, such as formation and fibre orientation, and consequently on the strength properties of the paper. It is therefore particularly important to know the true jet-to-wire speed ratio in paper and board manufacturing. The traditional method for determining the jet-to-wire speed ratio is based on the total pressure of the headbox. With this method, however, the true slice jet velocity often remains unknown because of possible errors in the calibration of the pressure gauge and inaccuracies in the calculation equation. For this reason, several real-time methods for measuring the slice jet have been developed. Optimal settings of the headbox parameters can be determined and maintained by means of on-line determination of the slice jet velocity. Headbox parameters include, among others, the slice jet, the height profile of the slice opening, edge flows and the uniformity of the feed flow. On-line measurement of the slice jet velocity also reveals other headbox problem areas, such as mechanical faults, which have traditionally been investigated with time-consuming end-product analyses of paper and board.
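As an illustration of the pressure-based determination mentioned above (the thesis' exact calculation equation is not given in the abstract), a commonly used simplified estimate derives the jet velocity from the headbox total pressure via Bernoulli's equation; the hypothetical Python sketch below also forms the jet-to-wire ratio. The 30 kPa pressure and 7.5 m/s wire speed in the example are made-up values.

    # Illustrative sketch, not the thesis' own calculation equation: a commonly
    # used simplified estimate of slice jet velocity from headbox total (gauge)
    # pressure via Bernoulli's equation, and the resulting jet-to-wire speed ratio.
    import math

    WATER_DENSITY = 1000.0  # kg/m3, dilute stock approximated as water

    def jet_velocity(total_pressure_pa: float, density: float = WATER_DENSITY) -> float:
        """Approximate slice jet velocity [m/s] from headbox total pressure [Pa]."""
        return math.sqrt(2.0 * total_pressure_pa / density)

    def jet_to_wire_ratio(total_pressure_pa: float, wire_speed: float) -> float:
        return jet_velocity(total_pressure_pa) / wire_speed

    # Made-up example: 30 kPa headbox pressure and a wire speed of 7.5 m/s
    print(round(jet_to_wire_ratio(30e3, 7.5), 3))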
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines only are unable to discover and access a large amount of information from the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary and key object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can then expect that the findings of these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that the search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold true, mostly because of the large scale of the deep Web: indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web existing so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and not feasible in the case of complex queries, but such queries are essential for many web searches, especially in the area of e-commerce. In this way, the automation of querying and retrieving data behind search interfaces is desirable and essential for such tasks as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
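The sketch below is not the I-Crawler itself, whose architecture is only summarized above; it is a minimal illustration of the kind of heuristic a search-interface classifier can start from, namely flagging HTML forms that contain a free-text field and a submit control. Real classifiers, including the I-Crawler as described, must also handle JavaScript-rich and non-HTML forms.

    # Illustrative heuristic only (not the I-Crawler itself): flag an HTML page
    # as containing a searchable interface if a form holds at least one free-text
    # input and a submit control.
    from html.parser import HTMLParser

    class FormScanner(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_form = False
            self.text_inputs = 0
            self.has_submit = False

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "form":
                self.in_form = True
            elif self.in_form and tag == "input":
                input_type = (attrs.get("type") or "text").lower()
                if input_type in ("text", "search"):
                    self.text_inputs += 1
                elif input_type in ("submit", "image"):
                    self.has_submit = True

        def handle_endtag(self, tag):
            if tag == "form":
                self.in_form = False

    def looks_like_search_interface(html: str) -> bool:
        """Crude decision rule: one free-text field plus a submit control."""
        scanner = FormScanner()
        scanner.feed(html)
        return scanner.text_inputs >= 1 and scanner.has_submit

    print(looks_like_search_interface(
        '<form action="/find"><input type="text" name="q"><input type="submit"></form>'))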
Abstract:
Drug transporting membrane proteins are expressed in various human tissues and blood-tissue barriers, regulating the transfer of drugs, toxins and endogenous compounds into or out of the cells. Various in vitro and animal experiments suggest that P-glycoprotein (P-gp) forms a functional barrier between the maternal and fetal blood circulation in the placenta, thereby protecting the fetus from exposure to xenobiotics during pregnancy. The multidrug resistance-associated protein 1 (MRP1) is a relatively less studied transporter protein in the human placenta. The aim of this series of studies was to examine the role of the placental transporters, apical P-gp and basal MRP1, using saquinavir as a probe drug, and to study the transfer of quetiapine and the role of P-gp in its transfer in the dually perfused human placenta/cotyledon. Furthermore, two ABCB1 (encoding P-gp) polymorphisms (c.3435C>T, p.Ile1145Ile and c.2677G>T/A, p.Ala893Ser/Thr) were studied to determine their impact on the P-gp protein expression level and on the transfer of the study drugs. The influence of the P-gp protein expression level on the transfer of the study drugs was also addressed. Because P-gp and MRP1 are ATP-dependent drug-efflux pumps, it was studied whether exogenous ATP is needed for the function of ATP-dependent transporters in the present experimental model. The present results indicated that the addition of exogenous ATP was not necessary for transporter function in the perfused human placental cotyledon. Saquinavir and quetiapine were both found to cross the human placenta; the transplacental transfer (TPTAUC %) was <0.5% for saquinavir and 3.7% for quetiapine. Pharmacologic blocking of P-gp led to disruption of the blood-placental barrier (BPB) and increased the placental transfer of the P-gp substrate saquinavir into the fetal circulation by 6- to 8-fold. In reversed perfusions, P-gp, MRP1 and possibly OATP2B1 had a negligible role in the fetal-to-maternal transfer of saquinavir. The TPTAUC % of saquinavir was about 100-fold greater from the fetal side to the maternal side than in the maternal-to-fetal direction. P-gp activity is not likely to modify the placental transfer of quetiapine. Higher P-gp protein expression levels were associated with the variant allele 3435T, but no correlation was found between the TPTAUC % of saquinavir and the placental P-gp protein expression. The present results indicate that P-gp activity drastically affects fetal exposure to saquinavir, and suggest that pharmacological blockade of P-gp activity during pregnancy may pose an increased risk of adverse fetal outcomes. Blockade of P-gp activity could also be used to obtain a higher drug concentration on the fetal side, for example in prevention (to decrease virus transfer to the fetal side) or in treating a sick fetus.
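The abstract does not define how TPTAUC % is calculated; in dual perfusion studies a transfer percentage of this kind is typically an area-under-the-curve ratio between the receiving and donating circulations, along the lines of

    \mathrm{TPT_{AUC}\,\%} \approx 100 \times \frac{\mathrm{AUC_{fetal}}}{\mathrm{AUC_{maternal}}}

but the exact sampling sites and normalisation used in these studies should be taken from the thesis itself.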
Abstract:
The behavioural finance literature expects systematic and significant deviations from efficiency to persist in securities markets due to the behavioural and cognitive biases of investors. These behavioural models attempt to explain the coexistence of intermediate-term momentum and long-term reversals in stock returns on the basis of systematic violations of rational investor behaviour. The study investigates the anchoring bias of investors and the profitability of the 52-week momentum strategy (henceforth GH). The relatively highly volatile OMX Helsinki stock exchange is a suitable market for examining the momentum effect, since international investors tend to liquidate their positions in the most distant securities markets first in times of market turbulence. Empirical data are collected from Thomson Reuters Datastream and the OMX Nordic website. The objective of the study is to provide a thorough analysis by formulating a self-financing GH momentum portfolio. First, the seasonality of the strategy is examined by taking the January effect into account and studying long-term abnormal returns. The results indicate that the GH strategy suffers significantly negative returns in January, but the strategy is not prone to reversals in the long term. Then the predictive proxies of momentum returns are investigated, with acquisition prices and the 52-week high statistic considered as anchors. The results show that acquisition prices do not have explanatory power over the GH strategy's abnormal returns. Finally, the efficacy of the GH strategy is examined after taking transaction costs into account, and the robust abnormal returns remain statistically significant despite the transaction costs. In conclusion, the relative distance between a stock's current price and its 52-week high explains the profits of momentum investing to a high degree. The results indicate that intermediate-term momentum and long-term reversals are separate phenomena. This presents a challenge to current behavioural theories, which model these aspects of stock returns as subsequent components of how securities markets respond to relevant information.
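A minimal sketch of the portfolio-formation idea behind a 52-week-high (GH-style) momentum strategy: rank stocks by how close the current price is to the 52-week high, then go long the top group and short the bottom group so that the portfolio is self-financing. The tickers, prices and the 30% breakpoint below are made-up illustration values, not data or parameters from the study.

    # Illustrative sketch of a self-financing 52-week-high (GH-style) portfolio:
    # rank stocks by how close the latest price is to the 52-week high, go long
    # the top group and short the bottom group. All tickers, prices and the 30%
    # breakpoint are made-up illustration values.

    def nearness_to_52w_high(prices):
        """prices: about one year of closes, most recent last."""
        return prices[-1] / max(prices)

    def gh_portfolio(price_histories, top_frac=0.3):
        ranked = sorted(price_histories,
                        key=lambda kv: nearness_to_52w_high(kv[1]), reverse=True)
        cut = max(1, int(len(ranked) * top_frac))
        longs = [ticker for ticker, _ in ranked[:cut]]
        shorts = [ticker for ticker, _ in ranked[-cut:]]
        return longs, shorts  # equal long and short legs make the portfolio zero-cost

    data = {
        "AAA": [10, 12, 14, 13, 13.8],
        "BBB": [25, 24, 20, 18, 17.0],
        "CCC": [5, 6, 7, 7, 6.9],
        "DDD": [40, 41, 39, 30, 31.0],
    }
    print(gh_portfolio(list(data.items())))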
Abstract:
So far, scholars have discussed how the characteristics of consumer co-operatives (the co-operative principles, values and the dual role of members as users and owners) can potentially give them a competitive advantage over investor-owned firms (IOFs). In addition, concern for the community (as partly derived from locality and regionality) has been seen as a potential source of success for consumer co-operatives. On the other hand, the geographically bound purpose of consumer co-operation means that a consumer co-operative can be regarded as a challenging company form to manage. This is because, according to the purpose of consumer co-operation, co-operatives are obligated 1) to provide the owners with the services and goods that are needed, and to do so at more affordable prices than their competitors, and/or 2) to operate in areas in which competitors do not want to operate (for example, because of low profitability in a certain line of business or region). Thus, consumer co-operatives have to operate very efficiently in order to execute this geographically bound corporate purpose (e.g. they cannot withdraw from competition during the declining stages of a business). However, this efficiency cannot be achieved by just any means, as acceptance from the important regional stakeholders is the basic operational precondition and lifeline in the long run. The central question for the survival and success of consumer co-operatives is therefore: how should a consumer co-operative execute its corporate purpose so that it can be the best alternative for its members in the long run? This question has remained unanswered and lacks empirical evidence in previous studies on the strategic management of consumer co-operation. In more detail, scholars have not yet empirically investigated the question: how can consumer co-operatives use financial and social capital to achieve a sustained competitive advantage? It is this research gap that this doctoral dissertation aims to fill. The dissertation answers the above questions by combining and utilizing interview data from S Group co-operatives and the central organizations in S Group's network (overall, 33 interviews were gathered), archival material and 56 published media articles/reports. The study is based on a qualitative case study approach aimed at theory development, not theory verification (as theory in this field of study is considered nascent). Firstly, the findings of this study indicate that consumer co-operatives accumulate financial capital 1) by making a profit (in order to invest and grow) and 2) by utilizing a network-based organizational structure (local supply chain economies). As a result of financial capital accumulation, consumer co-operatives are able to achieve efficiency gains while remaining local. In addition, a strong financial capital base increases consumer co-operatives' independence, competitiveness and their ability to participate in regional development (which is in accordance with their geographically bound corporate purpose). Secondly, consumer co-operatives accumulate social capital through informal networking (with important regional stakeholders), corporate social responsibility (CSR) behaviour and CSR reporting, pursuing the common good, and interacting and sharing identity. As a result of social capital accumulation, consumer co-operatives are able to obtain the resources for managing 1) institutional dependencies and 2) customer relations. By accumulating both social and financial capital through the actions presented above, consumer co-operatives are able to achieve a sustained competitive advantage. Finally, this thesis provides useful ideas and new knowledge for co-operative managers concerning why and how consumer co-operatives should accumulate financial and social capital (to achieve a sustained competitive advantage) while staying aligned with their corporate purpose.
Abstract:
Ionic liquids (ILs) have recently been studied with accelerating interest as a deconstruction/fractionation, dissolution or pretreatment method for lignocellulosic biomass. ILs are usually used in combination with heat. With regard to the recalcitrance of lignocellulose towards fractionation, most of the studies concern IL use in the biomass fermentation process chain, prior to the enzymatic hydrolysis step. It has been demonstrated that IL pretreatment gives more efficient hydrolysis of the biomass polysaccharides than enzymatic hydrolysis alone. Both cellulose (cellulose in particular) and lignin are very resistant to fractionation and even to dissolution methods. As an example, softwood, hardwood and grass-type plant species have different types of lignin structures, with the result that softwood lignin (dominated by guaiacyl lignin) is the most difficult to solubilize or chemically disrupt. In addition to the known conventional biomass processing methods, several ILs have also been found to efficiently dissolve either cellulose and/or wood samples; different ILs are suitable for different purposes. An IL treatment of wood usually results in a non-fibrous pulp, in which lignin is not efficiently separated and wood components are selectively precipitated, as cellulose is not soluble or degradable in ionic liquids under mild conditions. Nevertheless, new ILs capable of rather good fractionation performance have recently emerged. The capability of an IL to dissolve or deconstruct wood or cellulose depends on several factors (e.g. sample origin, the particle size of the biomass, mechanical treatments such as pulverization, the initial biomass-to-IL ratio, the water content of the biomass, possible impurities of the IL, reaction conditions, temperature, etc.). The aim of this study was to obtain (fermentable) saccharides and other valuable chemicals from wood by a combined heat and IL treatment. Thermal treatment alone contributes to the degradation of polysaccharides (e.g. a temperature of 150 °C alone is said to cause polysaccharide degradation), so temperatures below that should be used if the research interest lies in the effectiveness of the IL. On the other hand, the efficiency of the IL treatment can also be enhanced by combining it with other treatment methods (e.g. microwave heating). Samples of spruce, pine and birch sawdust were treated with either 1-ethyl-3-methylimidazolium chloride (Emim Cl) or 1-ethyl-3-methylimidazolium acetate (Emim Ac), or with ionized water for comparison, at various temperatures (with the focus between 80 and 120 °C). Samples were withdrawn at fixed time intervals (the treatment times of main interest lay between 0 and 100 hours). Experiments were performed in duplicate. The selected mono- and disaccharides, as well as their known degradation products 5-hydroxymethylfurfural (5-HMF) and furfural, were analyzed with capillary electrophoresis (CE) and high-performance liquid chromatography (HPLC). Initially, GC and GC-MS were also utilized. Galactose, glucose, mannose and xylose were the main monosaccharides present in the wood samples exposed to ILs at elevated temperatures; in addition, furfural and 5-HMF were detected, and the amounts of the latter two naturally increased with heating time and with the IL:wood ratio.
Abstract:
Statins are indicated for preventing cardiovascular disease events. Patients with diabetes have a risk of major cardiovascular events double that of their peers without diabetes. Thus, clinical treatment guidelines recommend statins for the management of diabetic dyslipidemia. The evidence base for statin use in cardiovascular disease derives from randomized controlled statin trials designed to prove statin efficacy under ideal conditions, in a homogeneous study population meeting strict trial eligibility criteria. This thesis was implemented as four pharmacoepidemiological statin studies using register data on real-world statin users. The overall purpose was to evaluate the trends, patterns and effectiveness of statin use in everyday life. More specifically, nationwide secular trends in statin use in Finland were analysed, especially among patient groups that had been underrepresented in the statin trials. Furthermore, the benchmark statin trials in diabetes, the Heart Protection Study and the Collaborative Atorvastatin Diabetes Study, were evaluated for their representativeness of real-world diabetes care, with the emphasis placed on adherence to statin use. The association between good adherence and the incidence of major cardiovascular events in the real world was further investigated in diabetes. These studies demonstrate that statin initiations increased from 1995 to 2005 in Finland. The increase was most pronounced among those aged at least 75 years and was observed even before the publication of rigorous trial data on elderly subjects. Thus, statins seem to have been initiated in clinical practice even beyond the strict trial eligibility criteria. Nonetheless, low adherence to statin use among real-world patients with diabetes was found not only to limit the representativeness of the trials for clinical care but, in all likelihood, also to attenuate their benefits in the real world. In fact, good adherence to statin use was found to be associated with a decreased risk of major cardiovascular events in patients with diabetes. In conclusion, these studies highlight the importance of good adherence to statin use in clinical practice in order to obtain the full therapeutic value demonstrated in the statin trials. Simply increasing the number of statin users will not, on its own, suffice to share our common resources appropriately.
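The abstract does not state which adherence measure the thesis used; a common register-based choice in pharmacoepidemiology is the proportion of days covered (PDC), with 80% as a conventional threshold for "good adherence". The sketch below is only an illustration of that generic measure, with made-up refill dates.

    # Illustrative sketch only: a common register-based adherence measure is the
    # proportion of days covered (PDC); the 80 % "good adherence" cut-off is a
    # conventional choice in pharmacoepidemiology, not a value taken from the study.
    from datetime import date

    def proportion_of_days_covered(dispensations, start: date, end: date) -> float:
        """dispensations: list of (dispensing_date, days_supply) tuples."""
        observation_days = (end - start).days + 1
        covered = set()
        for dispensed, days_supply in dispensations:
            for offset in range(days_supply):
                day = dispensed.toordinal() + offset
                if start.toordinal() <= day <= end.toordinal():
                    covered.add(day)
        return len(covered) / observation_days

    refills = [(date(2023, 1, 1), 100), (date(2023, 5, 1), 100), (date(2023, 9, 15), 100)]
    pdc = proportion_of_days_covered(refills, date(2023, 1, 1), date(2023, 12, 31))
    print(f"PDC = {pdc:.2f}, good adherence: {pdc >= 0.80}")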
Abstract:
Carbon dioxide is nowadays regarded as the primary anthropogenic greenhouse gas driving global warming. Hence, chemical fixation of CO2 has attracted much attention as a possible way to manufacture useful chemicals. One of the most interesting CO2 transformations is the synthesis of organic carbonates. Since the conventional production technologies for these compounds involve poisonous phosgene and carbon monoxide, there is a need to develop novel synthetic methods that better match the principles of "Green Chemistry" for the protection of the environment and human health. Over the years, the synthesis of dimethyl carbonate has been under intensive investigation in academia and industry. This study was therefore directed entirely towards an equally important homologue of the carbonic ester family, namely diethyl carbonate (DEC). A novel synthesis method for DEC starting from ethanol and CO2 over heterogeneous catalysts based on ceria (CeO2) was studied in a batch reactor. A major drawback of the reaction, however, is its thermodynamic limitations. The calculated values revealed that the reaction is exothermic (ΔrH°298K = −16.6 kJ/mol) and does not occur spontaneously at room temperature (ΔrG°298K = 35.85 kJ/mol). Moreover, the co-produced water easily shifts the reaction equilibrium towards the reactants, preventing high yields of the carbonate. Therefore, in-situ dehydration was applied using butylene oxide as a chemical water trap. A 9-fold enhancement in the amount of DEC was observed upon introduction of butylene oxide into the reaction medium in comparison with the synthetic method without any water removal. This result confirms that the reaction equilibrium was shifted in favour of the desired product and that the thermodynamic limitations of the reaction were alleviated by using butylene oxide as a water scavenger. In order to obtain insight into the reaction network, kinetic experiments were performed over commercial cerium oxide. On the basis of the selectivity/conversion profile it could be concluded that the one-pot synthesis of diethyl carbonate from ethanol, CO2 and butylene oxide occurs via a consecutive route involving a cyclic carbonate as an intermediate. Since commercial cerium oxide suffers from deactivation problems already after the first reaction cycle, in-house CeO2 was prepared by applying a room-temperature precipitation technique. Variation of the synthesis parameters, such as synthesis time, calcination temperature and pH of the reaction solution, turned out to have a considerable influence on the physico-chemical and catalytic properties of CeO2. Increasing the synthesis time resulted in a high specific surface area of the cerium oxide, and the catalyst prepared over 50 h exhibited the highest amount of basic sites on its surface. Furthermore, synthesis at pH 11 yielded the cerium oxide with the highest specific surface area, 139 m²/g, among all the prepared catalysts. Moreover, the CeO2-pH11 catalyst demonstrated the best catalytic activity, and 2 mmol of DEC was produced at 180 °C and a final reaction pressure of 9 MPa. In addition, ceria supported on the high specific surface area silicas MCM-41, SBA-15 and silica gel was synthesized and tested for the first time as a catalyst in the synthesis of DEC. Deposition of cerium oxide on the MCM-41 and SiO2 supports resulted in a substantial increase in the alkalinity of the carrier materials. Hexagonal SBA-15 modified with 20 wt % of ceria exhibited the second highest basicity in the series of supported catalysts. Evaluation of the catalytic activity of the ceria-supported catalysts showed that the reaction carried out over 20 wt % CeO2-SBA-15 generated the highest amount of DEC.
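To make the thermodynamic limitation quoted above concrete, the reported ΔrG°298K can be converted into an equilibrium constant through the standard relation ΔrG° = −RT ln K:

    K_{298} = \exp\!\left(-\frac{\Delta_r G^{\circ}}{RT}\right)
            = \exp\!\left(-\frac{35\,850\ \mathrm{J\,mol^{-1}}}{8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 298\ \mathrm{K}}\right)
            \approx 5 \times 10^{-7}

Such a small equilibrium constant illustrates why removing the co-produced water (here with butylene oxide as an in-situ trap) is needed to pull the equilibrium towards the carbonate.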
Study of the advancement of innovations in the communications industry. Case study: Russian Post Company
Abstract:
This study attempted to propose a project, based on already tested and successful practices of foreign businesses, that can help keep the final price of innovation at the desired level. The research attempts to gather most of the available information related to the aforementioned definitions, thus completing the theoretical background. Next, the author explains the methodology used and the process of evidence collection. After that, the study presents the analysis of the collected data in order to obtain results, which are compared with the stated objectives in the final part. The conclusions of the research and proposed possibilities for further work are given in the last part. For this study, the author chose a qualitative approach because it performs well for the analysis of small-scale data. The case study method was used because it gave the author an opportunity to perform an in-depth analysis of the information collected about a particular organization, making it possible to analyse the system's details in comparison. The results are considered valid and applicable to other studies. As a result, the thesis proposes undertakings that reflect research aimed at solving problems in the provision of services and the development of communications. In addition, the thesis proposes the creation of a postal service database for Russian Post in which (on request) a customer has an account through which he or she can access postal services via a PC or an info terminal in the post office and order delivery of postal products, each of which is given a private identification code. The project's payback period has been calculated as well.
Abstract:
This thesis was done as part of the NEOCARBON project. The aim of the NEOCARBON project is to study a fully renewable energy system utilizing Power-to-Gas or Power-to-Liquid technology for energy storage. Power-to-Gas consists of two main operations: hydrogen production via electrolysis and methane production via methanation. Methanation requires carbon dioxide and hydrogen as raw materials. This thesis studies the potential carbon dioxide sources within Finland. The different sources are ranked by the cost and energy penalty of carbon capture, the biogenic origin of the carbon, and compatibility with Power-to-Gas. It can be concluded that Finland has enough CO2 point sources to provide a national PtG system with sufficient amounts of carbon. The pulp and paper industry is the single largest producer of biogenic CO2 in Finland. A single unit capable of grid-balancing operations and of energy conversion via Power-to-Gas and Gas-to-Power can be obtained by coupling biogas plants with biomethanation and CHP units.
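For reference, the methanation step referred to above is commonly carried out as the exothermic Sabatier reaction (the reaction enthalpy below is a widely quoted literature value, not one taken from the thesis):

    \mathrm{CO_2 + 4\,H_2 \;\rightarrow\; CH_4 + 2\,H_2O}, \qquad \Delta_r H^{\circ}_{298} \approx -165\ \mathrm{kJ/mol}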
Abstract:
Finnish design and consulting companies deliver robust and cost-efficient steel structure solutions to a large number of manufacturing companies worldwide. The recently introduced EN 1090-2 standard obliges these companies to specify the execution class of steel structures for their customers. This, however, requires clarifying, understanding and interpreting the sophisticated procedure of execution class assignment. The objective of this research is to provide a clear explanation of, and guidance through, the process of execution class assignment for a given steel structure, and to support the implementation of the EN 1090-2 standard at Rejlers Oy, a Finnish design and consulting company. This objective is accomplished by creating a guideline for designers that elaborates on the four-step process of execution class assignment for a steel structure or a part of it. Steps one to three define the consequence class (the projected consequences of structural failure), the service category (the hazards associated with the service use of the steel structure) and the production category (the peculiarities of the manufacturing process), drawing on the ductility class (the capacity of the structure to withstand deformations) and the behaviour factor (which corresponds to the seismic behaviour of the structure). The final step is the assignment of the execution class, taking into account the results of the previous steps. The main research method is an in-depth literature review of the European family of standards for steel structures. Another research approach is a series of interviews with Rejlers Oy representatives and its clients, the results of which have been used to evaluate the level of EN 1090-2 awareness. Rejlers Oy will use the developed standard implementation guideline to improve its services and to obtain greater customer satisfaction.
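The final assignment step described above is essentially a table lookup on the outcomes of the first three steps. The Python sketch below shows only that structure; the two table entries are illustrative placeholders, and the actual mapping must be taken from EN 1090-2 (Annex B), not from this sketch.

    # Structural sketch only: the execution class assignment reduces to a lookup
    # keyed by (consequence class, service category, production category). The
    # two entries below are illustrative placeholders; the complete mapping must
    # be taken from EN 1090-2 (Annex B), not from this sketch.

    EXECUTION_CLASS_TABLE = {
        ("CC1", "SC1", "PC1"): "EXC1",  # placeholder entry, verify against the standard
        ("CC3", "SC2", "PC2"): "EXC3",  # placeholder entry, verify against the standard
    }

    def assign_execution_class(consequence_class: str,
                               service_category: str,
                               production_category: str) -> str:
        key = (consequence_class, service_category, production_category)
        try:
            return EXECUTION_CLASS_TABLE[key]
        except KeyError:
            raise ValueError(f"No entry for {key}; complete the table from EN 1090-2")

    print(assign_execution_class("CC1", "SC1", "PC1"))  # -> EXC1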
Abstract:
The production of biodiesel through transesterification has created a surplus of glycerol on the international market. In a few years, glycerol has become an inexpensive and abundant raw material, subject to numerous plausible valorisation strategies. Glycerol hydrochlorination stands out as an economically attractive route to biobased epichlorohydrin, an important raw material for the manufacture of epoxy resins and plasticizers. Glycerol hydrochlorination using gaseous hydrogen chloride (HCl) was studied from a reaction engineering viewpoint. Firstly, a more general and rigorous kinetic model was derived based on a consistent reaction mechanism proposed in the literature. The model was validated with experimental data reported in the literature as well as with new data of our own. Semi-batch experiments were conducted in which the influence of the stirring speed, HCl partial pressure, catalyst concentration and temperature was thoroughly analysed and discussed. Acetic acid was used as a homogeneous catalyst in the experiments. For the first time, it was demonstrated that the liquid-phase volume undergoes a significant increase due to the accumulation of HCl in the liquid phase. Novel and relevant features concerning hydrochlorination kinetics, HCl solubility and mass transfer were investigated. An extended reaction mechanism was proposed and a new kinetic model was derived. The model was tested against the experimental data by means of regression analysis, in which the kinetic and mass transfer parameters were successfully estimated. A dimensionless number, called the Catalyst Modulus, was proposed as a tool for corroborating the kinetic model. Reactive flash distillation experiments were conducted to test the commonly accepted hypothesis that removal of water should enhance the glycerol hydrochlorination kinetics. The performance of the reactive flash distillation experiments was compared with the semi-batch data previously obtained. An unforeseen effect was observed once the water was allowed to be stripped out of the liquid phase, exposing a strong correlation between HCl uptake into the liquid and the presence of water in the system. Water was also revealed to play an important role in HCl dissociation: as water was removed, the dissociation of HCl was diminished, which had a retarding effect on the reaction kinetics. In order to obtain further insight into the influence of water on the hydrochlorination reaction, extra semi-batch experiments were conducted in which initial amounts of water and of the desired product were added. This study revealed the possibility of using the desired product as an ideal "solvent" for the glycerol hydrochlorination process. A co-current bubble column was used to investigate the glycerol hydrochlorination process under continuous operation. The influence of liquid flow rate, gas flow rate, temperature and catalyst concentration on the glycerol conversion and the product distribution was studied. The fluid dynamics of the system showed remarkable behaviour, which was carefully investigated and described. High-speed camera imaging and residence time distribution experiments were conducted to collect relevant information about the flow conditions inside the tube. A model based on the axial dispersion concept was proposed and confronted with the experimental data. The kinetic and solubility parameters estimated from the semi-batch experiments were successfully used in the description of the mass transfer and fluid dynamics of the bubble column reactor.
In the light of the results of the present work, the glycerol hydrochlorination reaction mechanism has finally been clarified. It has been demonstrated that reactive distillation technology may be detrimental to the glycerol hydrochlorination reaction rate under certain conditions. Furthermore, the continuous reactor technology showed a high selectivity towards monochlorohydrins, whilst the semi-batch technology was demonstrated to be more efficient for the production of dichlorohydrins. Based on the novel and revealing discoveries of the present work, many insightful suggestions are made towards improving the production of α,γ-dichlorohydrin on an industrial scale.
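The abstract names, but does not write out, the axial dispersion model; in its standard one-dimensional form for a species concentration C(z, t) along the column coordinate z it reads

    \frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial z} = D_{\mathrm{ax}}\,\frac{\partial^{2} C}{\partial z^{2}} + r

where u is the convective velocity, D_ax the axial dispersion coefficient and r the local net reaction rate; the thesis' own model may of course include additional terms (e.g. gas-liquid mass transfer).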