1000 results for Rudestam, Kjell Erik: Surviving your dissertation

Relevance: 20.00%

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, RVC-CAL, is a very expressive language that in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an as small as possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model for model checking: the model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined that is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
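The firing semantics described above (queues as the only communication, a node firing once its inputs suffice) can be sketched as a minimal simulation. This is an illustrative toy, not RVC-CAL syntax or the thesis' tooling; all names are invented.

```python
from collections import deque

class Actor:
    """A dataflow node: fires when every input queue holds enough tokens."""
    def __init__(self, name, inputs, outputs, consume, fn):
        self.name = name
        self.inputs = inputs      # input edges (queues), the only communication
        self.outputs = outputs    # output edges (queues)
        self.consume = consume    # tokens required per input edge to fire
        self.fn = fn              # the node's calculation

    def can_fire(self):
        # The firing rule: sufficient tokens available on every input edge.
        return all(len(q) >= self.consume for q in self.inputs)

    def fire(self):
        # Consume inputs, compute, produce outputs -- independently of
        # any other node in the graph.
        args = [q.popleft() for q in self.inputs for _ in range(self.consume)]
        token = self.fn(*args)
        for q in self.outputs:
            q.append(token)

# A tiny graph: an edge carrying initial tokens into a single "double" actor.
a_b = deque([1, 2, 3])
b_out = deque()
double = Actor("double", [a_b], [b_out], consume=1, fn=lambda x: 2 * x)

# A fully dynamic scheduler: evaluate the firing rule before every firing.
# Quasi-static scheduling would replace most of these checks with a
# pre-computed static firing sequence.
while double.can_fire():
    double.fire()

print(list(b_out))  # [2, 4, 6]
```

The explicit `can_fire` check is exactly the per-firing overhead the thesis seeks to eliminate: when the token rates are known in advance, the loop can be replaced by a static schedule with no run-time rule evaluation.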

Relevance: 20.00%

Abstract:

The natural properties of wood fibres limit their use in many applications. The properties of wood fibres can, however, be modified by binding new components with the desired properties to the fibre surface. In her doctoral thesis, M.Sc. (Tech.) Stina Grönqvist showed that new functional groups can be bound to lignin-containing wood fibres by activating the surface lignin with the laccase enzyme. The results can be exploited to improve the properties of traditional wood fibres and fibre products, and to find new applications for wood fibres. "If the wood fibres are modified to become water-repellent, for example, the modified fibres can be used instead of plastic in packaging," says Stina Grönqvist. The aim of the thesis was to investigate the effects of the laccase enzyme on TMP (thermomechanical pulp) and its fractions. In Finland, TMP is produced from spruce, and the pulp contains abundant lignin. When the surface lignin on the wood fibres is modified with oxidative enzymes such as laccase, reactive radicals are formed in the lignin on the fibre surfaces. The radicals formed can be exploited to bind components with new properties to the fibre surface. To exploit the full potential of the laccase-based modification method, more information is needed both on the factors that affect radical formation and on the mechanisms by which the new components are bound to the fibres.

Relevance: 20.00%

Abstract:

Heat shock factors (HSFs) are an evolutionarily well-conserved family of transcription factors that coordinate stress-induced gene expression and direct versatile physiological processes in eukaryotic organisms. The essentiality of HSFs for cellular homeostasis has been well demonstrated, mainly through HSF1-induced transcription of heat shock protein (HSP) genes. HSFs are important regulators of many fundamental processes such as gametogenesis, metabolic control and aging, and are involved in pathological conditions including cancer progression and neurodegenerative diseases. In each of the HSF-mediated processes, however, the detailed mechanisms of the HSF family members and their complete sets of target genes have remained unknown. Recently, rapid advances in chromatin studies have enabled genome-wide characterization of protein binding sites at high resolution and in an unbiased manner. In this PhD thesis, these novel methods, based on chromatin immunoprecipitation (ChIP), are utilized, and the genome-wide target loci of HSF1 and HSF2 are identified in cellular stress responses and in developmental processes. The thesis and its original publications characterize the individual and shared target genes of HSF1 and HSF2, describe HSF1 as a potent transactivator, and identify HSF2 as an epigenetic regulator that coordinates gene expression throughout cell cycle progression. In male gametogenesis, novel physiological functions for HSF1 and HSF2 are revealed, and the HSFs are demonstrated to control the expression of X- and Y-chromosomal multicopy genes in a silenced chromatin environment. In stressed human cells, HSF1 and HSF2 are shown to coordinate the expression of a wide variety of genes, including genes for the chaperone machinery, ubiquitin, and regulators of cell cycle progression and signaling.
These results highlight the importance of cell type and cell cycle phase in transcriptional responses, reveal the myriad of processes that are adjusted in a stressed cell and describe novel mechanisms that maintain transcriptional memory in mitotic cell division.

Relevance: 20.00%

Abstract:

Methyl chloride is an important chemical intermediate with a variety of applications. It is produced today in large units and shipped to the end-users. Most of the derived products are harmless, such as silicones, butyl rubber and methyl cellulose. However, methyl chloride itself is highly toxic and flammable. On-site production in the required quantities is desirable to reduce the risks involved in transportation and storage. Ethyl chloride is a smaller-scale chemical intermediate that is mainly used in the production of cellulose derivatives. Thus, the combination of on-site production of methyl and ethyl chloride is attractive for the cellulose processing industry, e.g. current and future biorefineries. Both alkyl chlorides can be produced by hydrochlorination of the corresponding alcohol, ethanol or methanol. Microreactors are attractive for on-site production, as the reactions are very fast and involve toxic chemicals. In microreactors, diffusion limitations can be suppressed and process safety can be improved, and the modular setup of microreactors offers the flexibility to adjust production capacity as needed. Although methyl and ethyl chloride are important chemical intermediates, the literature available on potential catalysts and reaction kinetics is limited. The thesis therefore includes an extensive catalyst screening and characterization, along with kinetic studies and the engineering of the hydrochlorination process in microreactors. A range of zeolite- and alumina-based catalysts, neat and impregnated with ZnCl2, were screened for methanol hydrochlorination. The influence of zinc loading, support, zinc precursor and pH was investigated. The catalysts were characterized with FTIR, TEM, XPS, nitrogen physisorption, XRD and EDX to identify the relationship between the catalyst characteristics and the activity and selectivity in methyl chloride synthesis. The acidic properties of the catalysts were strongly influenced by the ZnCl2 modification.
In the case of both alumina and zeolite supports, zinc reacted to a certain extent with specific surface sites, which resulted in a decrease of strong and medium Brønsted and Lewis acid sites and the formation of zinc-based weak Lewis acid sites. The latter are highly active and selective in methanol hydrochlorination. Along with the molecular zinc sites, bulk zinc species are present on the support material. Zinc-modified zeolite catalysts exhibited the highest activity, even at low temperatures (ca. 200 °C), but showed deactivation with time-on-stream. Zn/H-ZSM-5 zeolite catalysts had a higher stability than ZnCl2-modified H-Beta, and they could be regenerated by burning off the coke in air at 400 °C. Neat alumina and zinc-modified alumina catalysts were active and selective at 300 °C and higher temperatures; zeolite catalysts, however, can be suitable for methyl chloride synthesis at lower temperatures, i.e. 200 °C. Neat γ-alumina was found to be the most stable catalyst when coated in a microreactor channel, and it was thus used as the catalyst for systematic kinetic studies in the microreactor. A binder-free and reproducible catalyst coating technique was developed. The uniformity, thickness and stability of the coatings were extensively characterized by SEM, confocal microscopy and EDX analysis. A stable coating could be obtained by thermally pretreating the microreactor platelets and ball-milling the alumina to obtain a small particle size. Slurry aging and slow drying improved the coating uniformity. Methyl chloride synthesis from methanol and hydrochloric acid was performed in an alumina-coated microreactor. Conversions from 4% to 83% were achieved in the investigated temperature range of 280–340 °C, demonstrating that the reaction is fast enough to be successfully performed in a microreactor system. The performance of the microreactor was compared with a tubular fixed bed reactor.
The results obtained with the two reactors were comparable, but the microreactor allows rapid catalytic screening with a low consumption of chemicals. As complete conversion of methanol could not be reached in a single microreactor, a second microreactor was coupled in series. A maximum conversion of 97.6% and a selectivity of 98.8% were reached at 340 °C, which is close to the values calculated for thermodynamic equilibrium. A kinetic model based on kinetic experiments and thermodynamic calculations was developed. The model was based on a Langmuir–Hinshelwood-type mechanism and a plug flow model for the microreactor. The influence of reactant adsorption on the catalyst surface was investigated by performing transient experiments and comparing different kinetic models. The obtained activation energy for methyl chloride synthesis was about twofold higher than previously published values, indicating diffusion limitations in the earlier studies. Detailed modeling of the diffusion in the porous catalyst layer revealed that severe diffusion limitations occur starting from catalyst coating thicknesses of 50 μm. At a catalyst coating thickness of ca. 15 μm, as in the microreactor, the conditions of intrinsic kinetics prevail. Ethanol hydrochlorination was performed successfully in the microreactor system at reaction temperatures of 240–340 °C. An almost complete conversion of ethanol was achieved at 340 °C. The product distribution was broader than for methanol hydrochlorination: ethylene, diethyl ether and acetaldehyde were detected as by-products, ethylene being the dominant one. A kinetic model including a thorough thermodynamic analysis was developed, and the influence of adsorbed HCl on the reaction rate of the ethanol dehydration reactions was demonstrated. The separation of methyl chloride using condensers was also investigated; the proposed microreactor–condenser concept enables the production of methyl chloride with a high purity of 99%.
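The combination described above, a Langmuir-Hinshelwood-type surface rate integrated along a plug-flow channel, can be sketched numerically. All constants below are invented for illustration and are not the fitted kinetic parameters of the thesis; the reverse reaction is also ignored for brevity.

```python
# Sketch of a Langmuir-Hinshelwood rate law in an ideal plug-flow reactor.
# k, K_m and K_h are hypothetical values chosen only to make the sketch run.
k = 5.0               # surface rate constant (hypothetical units)
K_m, K_h = 2.0, 1.5   # adsorption equilibrium constants (hypothetical)

def rate(c_m, c_h):
    # Reaction between adsorbed methanol (m) and adsorbed HCl (h)
    # competing for the same sites:
    #   r = k * K_m*c_m * K_h*c_h / (1 + K_m*c_m + K_h*c_h)^2
    return k * K_m * c_m * K_h * c_h / (1.0 + K_m * c_m + K_h * c_h) ** 2

def plug_flow(c_m0, c_h0, tau, n=10000):
    """Euler integration of dc/dtau = -r over the residence time tau."""
    c_m, c_h = c_m0, c_h0
    dt = tau / n
    for _ in range(n):
        r = rate(c_m, c_h)
        c_m -= r * dt
        c_h -= r * dt
    return c_m

# Equimolar feed, normalized concentrations (c0 = 1).
conversion = 1.0 - plug_flow(1.0, 1.0, tau=2.0)
print(f"methanol conversion: {conversion:.1%}")
```

Coupling a second reactor in series, as in the thesis, simply corresponds to integrating over a longer residence time, which is why conversion approaches the equilibrium value.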

Relevance: 20.00%

Abstract:

A popular idea in contemporary philosophical and psychological research on interpersonal understanding is that we use a cognitive function (or method) to understand other people, a so-called "theory of mind" function. This idea appears across a broad scientific field: in evolutionary psychology, in theories of child development, in theories of autism, and in the philosophy of emotion and moral philosophy. The aim of this study is to take a closer look at certain influential philosophical and psychological theories of interpersonal understanding, theories that also have a strong connection to empirical research. In the thesis, Gustafsson argues that the theories in question reflect certain classical, philosophically problematic assumptions. These assumptions characterize the theories and influence how the empirical investigations are constructed and how their results are interpreted.

Relevance: 20.00%

Abstract:

Ionic liquids (ILs) have recently been studied with accelerating interest as a deconstruction/fractionation, dissolution or pretreatment processing method for lignocellulosic biomass. ILs are usually utilized in combination with heat. Regarding lignocellulosic recalcitrance towards fractionation and IL utilization, most studies concern IL utilization in the biomass fermentation process prior to the enzymatic hydrolysis step. It has been demonstrated that IL pretreatment gives more efficient hydrolysis of the biomass polysaccharides than enzymatic hydrolysis alone. Both cellulose and lignin are very resistant towards fractionation and even dissolution methods, cellulose especially so. As an example, softwood, hardwood and grass-type plant species have different types of lignin structures, with the consequence that softwood lignin (in which guaiacyl lignin dominates) is the most difficult to solubilize or chemically disrupt. In addition to the known conventional biomass processing methods, several ILs have also been found to efficiently dissolve either cellulose and/or wood samples; different ILs are suitable for different purposes. An IL treatment of wood usually results in a non-fibrous pulp, where lignin is not efficiently separated and wood components are selectively precipitated, as cellulose is not soluble or degradable in ionic liquids under mild conditions. Nevertheless, new ILs capable of rather good fractionation performance have recently emerged. The capability of an IL to dissolve or deconstruct wood or cellulose depends on several factors, e.g. sample origin, the particle size of the biomass, mechanical treatments such as pulverization, the initial biomass-to-IL ratio, the water content of the biomass, possible impurities in the IL, reaction conditions, temperature, etc. The aim of this study was to obtain (fermentable) saccharides and other valuable chemicals from wood by a combined heat and IL treatment.
Thermal treatment alone contributes to the degradation of polysaccharides (150 °C alone is reported to cause polysaccharide degradation), so temperatures below that should be used if the research interest lies in the effectiveness of the IL itself. On the other hand, the efficiency of the IL treatment can also be enhanced by combining it with other treatment methods (e.g. microwave heating). Samples of spruce, pine and birch sawdust were treated with either 1-ethyl-3-methylimidazolium chloride (EmimCl) or 1-ethyl-3-methylimidazolium acetate (EmimAc), or with deionized water for comparison, at various temperatures (the focus was between 80 and 120 °C). Samples were withdrawn at fixed time intervals (the treatment times of main interest lay between 0 and 100 hours). Duplicate experiments were performed. The selected mono- and disaccharides, as well as their known degradation products 5-hydroxymethylfurfural (5-HMF) and furfural, were analyzed with capillary electrophoresis (CE) and high-performance liquid chromatography (HPLC); initially, GC and GC-MS were also utilized. Galactose, glucose, mannose and xylose were the main monosaccharides present in the wood samples exposed to ILs at elevated temperatures. In addition, furfural and 5-HMF were detected, and the quantities of these two naturally increased with heating time and with the IL:wood ratio.

Relevance: 20.00%

Abstract:

Concepts, models, or theories that end up shaping practices, whether those practices fall in the domains of science, technology, social movements, or business, always emerge through a change in language use. First, communities begin to talk differently, incorporating new vocabularies (Rorty, 1989) into their narratives. Whether the community's new narratives respond to perceived anomalies or failures of the existing ones (Kuhn, 1962) or actually reveal inadequacies by addressing previously unrecognized practices (Fleck, 1979; Rorty, 1989) is less important here than the very fact that they introduce differences. Then, if the new language proves to be useful, for example because it helps the community solve a problem or create a possibility that existing narratives do not, the new narrative will begin circulating more broadly throughout the community. If other communities learn of the usefulness of these new narratives, and find them sufficiently persuasive, they may be compelled to test, modify, and eventually adopt them. Of primary importance is the idea that a new concept or narrative perceived as useful is more likely to be adopted. We can expect that business concepts emerge through a similar pattern. Concepts such as "competitive advantage," "disruption," and the "resource-based view," now broadly known and accepted, were each at some point first introduced by a community. This community experimented with the concepts it introduced and found them useful. The concept "competitive advantage," for example, helped researchers better explain why some firms outperformed others and helped practitioners more clearly understand what choices to make to improve the profit and growth prospects of their firms. The benefits of using these terms compelled other communities to consider, apply, and eventually adopt them as well. Had these terms not been viewed as useful, they would not likely have been adopted.
This thesis attempts to observe and anticipate new business concepts that may be emerging. It does so by observing a community of business practitioners that is using different language and appears to be more successful than a similar community of practitioners that has not yet begun using this different language as extensively. It argues that if the community adopting new types of narratives is perceived as being more successful, its success will attract the attention of other communities, which may then seek to adopt the same narratives. Specifically, this thesis compares the narratives used by a set of firms that are considered to be performing well (called Winners) with those of a set of less-successful peers (called Losers). It does so with the aim of addressing two questions: How do the strategic narratives that circulate within "winning" companies and their leaders differ from those circulating within "losing" companies and their leaders? And, given the answer to the first question, what new business strategy concepts are likely to emerge in the business community at large? I expected to observe "winning" companies shifting their language, abandoning an older set of narratives for newer ones. However, the analysis indicates a more interesting dynamic: "winning" companies adopt the same core narratives as their "losing" peers with equal frequency, yet they go beyond them. Both "winners" and "losers" seem to pursue economies of scale, customer captivity, best practices, and preferential access to resources with similar vigor. But "winners" seem to go further, applying three additional narratives in their pursuit of competitive advantage: they speak of coordinating what is uncoordinated, of what this thesis calls "exchanging the role of guest for that of host," and of "forcing a two-front battle" more frequently than their "loser" peers.
Since these "winning" companies are likely perceived as being more successful, the distinctive narratives they use are more likely to be emulated and adopted. Understanding in what ways winners speak differently therefore gives us a glimpse into the possible future evolution of business concepts.

Relevance: 20.00%

Abstract:

The decreasing fossil fuel resources combined with an increasing world energy demand have raised interest in renewable energy sources. Alternatives can be solar, wind and geothermal energy, but only biomass can substitute for the carbon-based feedstock that is suitable for the production of transportation fuels and chemicals. However, the high oxygen content of biomass creates challenges for the future chemical industry, forcing the development of new processes that allow complete or selective oxygen removal without any significant carbon loss. Therefore, understanding and optimizing biomass deoxygenation processes is crucial for the future bio-based chemical industry. In this work, the deoxygenation of fatty acids and their derivatives was studied over Pd/C and TiO2-supported noble metal catalysts (Pt, Pt-Re, Re and Ru) to obtain future fuel components. The 5% Pd/C catalyst was investigated in semibatch and fixed bed reactors at 300 °C and 1.7-2 MPa under inert and hydrogen-containing atmospheres. Based on extensive kinetic studies, plausible reaction mechanisms and pathways were proposed. The influence of unsaturation in the deoxygenation of model compounds and an industrial feedstock, tall oil fatty acids, over a Pd/C catalyst was demonstrated. Optimization of the reaction conditions suppressed the formation of by-products; hence, high yields of and selectivities towards linear hydrocarbons, as well as catalyst stability, were achieved. Experiments in a fixed bed reactor filled with a 2% Pd/C catalyst were performed with stearic acid as a model compound under different hydrogen-containing gas atmospheres to understand the catalyst stability under various conditions. Moreover, prolonged experiments were carried out with concentrated model compounds to reveal the catalyst deactivation.
New materials were proposed for a selective deoxygenation process at lower temperatures (~200 °C), with a selectivity tunable towards hydrodeoxygenation over a 4% Pt/TiO2 catalyst or towards decarboxylation/decarbonylation over a 4% Ru/TiO2 catalyst. A new method for the selective hydrogenation of fatty acids to fatty alcohols was demonstrated with a 4% Re/TiO2 catalyst. A reaction pathway and mechanism for the TiO2-supported metal catalysts were proposed, and optimization of the process conditions led to an increase in the formation of the desired products.

Relevance: 20.00%

Abstract:

In recent decades, business intelligence (BI) has gained momentum in real-world practice. At the same time, business intelligence has evolved into an important research subject of Information Systems (IS) within the decision support domain. Today's growing competitive pressure in business has led to increased needs for real-time analytics, i.e., so-called real-time BI or operational BI. This is especially true with respect to the electricity production, transmission, distribution, and retail business, since the laws of physics dictate that electricity as a commodity is nearly impossible to store economically, and therefore demand and supply need to be constantly in balance. The current power sector is subject to complex changes, innovation opportunities, and technical and regulatory constraints. These range from the low-carbon transition, the development of renewable energy sources (RES), and market design to new technologies (e.g., smart metering, smart grids, electric vehicles) and new independent power producers (e.g., commercial buildings or households with rooftop solar panel installations, a.k.a. distributed generation). Among these, the ongoing deployment of Advanced Metering Infrastructure (AMI) has profound impacts on the electricity retail market. From the viewpoint of BI research, the AMI is enabling real-time or near real-time analytics in the electricity retail business. Following the Design Science Research (DSR) paradigm in the IS field, this research presents four aspects of BI for efficient pricing in a competitive electricity retail market: (i) visual data-mining-based descriptive analytics, namely electricity consumption profiling, for pricing decision-making support; (ii) a real-time BI enterprise architecture for enhancing management's capacity for real-time decision-making; (iii) prescriptive analytics through agent-based modeling for price-responsive demand simulation; and (iv) a visual data-mining application for electricity distribution benchmarking.
Even though this study is written from the perspective of the European electricity industry, with a particular focus on Finland and Estonia, the BI approaches investigated can: (i) provide managerial implications to support the utility's pricing decision-making; (ii) add empirical knowledge to the landscape of BI research; and (iii) be transferred to a wide body of practice in the power sector and the BI research community.
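The agent-based modeling of price-responsive demand mentioned in item (iii) can be sketched in miniature: a population of household agents, each scaling its baseline load with price. The agent count, prices, and elasticities below are invented for illustration and are not calibrated to the Finnish or Estonian market data of the study.

```python
import random

random.seed(42)  # reproducible illustration

class Household:
    """A toy demand agent: load responds linearly to relative price changes."""
    def __init__(self, baseline_kw, elasticity):
        self.baseline_kw = baseline_kw
        self.elasticity = elasticity   # negative: demand falls as price rises

    def demand(self, price, ref_price):
        rel = (price - ref_price) / ref_price
        # Clamp at zero: a household cannot consume negative power.
        return max(0.0, self.baseline_kw * (1.0 + self.elasticity * rel))

# A population of 1000 households with hypothetical baselines/elasticities.
agents = [Household(random.uniform(0.5, 3.0), random.uniform(-0.4, -0.1))
          for _ in range(1000)]

# Aggregate demand at three retail prices (EUR/MWh), reference price 50.
for price in (40, 60, 80):
    total = sum(a.demand(price, 50) for a in agents)
    print(f"price {price}: aggregate demand {total:.0f} kW")
```

Running such a simulation over candidate tariffs lets a retailer compare aggregate demand (and thus imbalance exposure) under each pricing decision, which is the prescriptive-analytics use case the abstract describes.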

Relevance: 20.00%

Abstract:

The cell is continuously subjected to various forms of external and intrinsic protein-damaging stresses, including hyperthermia, pathophysiological states, and cell differentiation and proliferation. Protein-damaging stresses result in denaturation and improper folding of proteins, leading to the formation of toxic aggregates that are detrimental in various pathological conditions, including Alzheimer's and Huntington's diseases. In order to maintain protein homeostasis, cells have developed different cytoprotective mechanisms, one of which is the evolutionarily well-conserved heat shock response. The heat shock response results in the expression of heat shock proteins (Hsps), which act as molecular chaperones that bind to misfolded proteins, facilitate their refolding and prevent the formation of protein aggregates. Stress-induced expression of Hsps is mediated by a family of transcription factors, the heat shock factors (HSFs). Of the four HSFs found in vertebrates, HSF1-4, HSF1 is the major stress-responsive factor that is required for the induction of the heat shock response. HSF2 cannot induce Hsps alone, but modulates the heat shock response by forming heterotrimers with HSF1. HSFs are not only involved in the heat shock response; they have also been found to function in development, neurodegenerative disorders, cancer, and longevity. Therefore, insight into how HSFs are regulated is important for the understanding of both normal physiological and disease processes. The activity of HSF1 is mainly regulated by intricate post-translational modifications, whereas the activity of HSF2 is concentration-dependent. However, there is only limited understanding of how the abundance of HSF2 is regulated. This study describes two different means by which HSF2 levels are regulated. In the first study, it was shown that the microRNA miR-18, a member of the miR-17~92 cluster, directly regulates Hsf2 mRNA stability and thus protein levels.
HSF2 has earlier been shown to play a profound role in the regulation of male germ cell maturation during spermatogenesis. The effect of miR-18 on HSF2 was examined in vivo by transfecting intact seminiferous tubules, and it was found that inhibition of miR-18 resulted in increased HSF2 levels and modified expression of the HSF2 targets Ssty2 and Speer4a. HSF2 has earlier been reported to modulate the heat shock response by forming heterotrimers with HSF1. In the second study, it was shown that HSF2 is cleared from the Hsp70 promoter and degraded by the ubiquitin-proteasome pathway upon acute stress. By silencing components of the anaphase-promoting complex/cyclosome (APC/C), including the co-activators Cdc20 and Cdh1, it was shown that APC/C mediates the heat-induced ubiquitylation of HSF2. Furthermore, down-regulation of Cdc20 was shown to alter the expression of heat shock-responsive genes. Next, we studied whether APC/C-Cdc20, which controls cell cycle progression, also regulates HSF2 during the cell cycle. We found that both HSF2 mRNA and protein levels decreased during mitosis in several but not all human cell lines, indicating that HSF2 has a function in mitotic cells. Interestingly, although transcription is globally repressed during mitosis, mainly due to the displacement of RNA polymerase II and transcription factors, including HSF1, from the mitotic chromatin, HSF2 is capable of binding DNA during mitosis. Thus, during mitosis the heat shock response is impaired, leaving mitotic cells vulnerable to proteotoxic stress. In HSF2-deficient mitotic cells, however, the Hsp70 promoter is accessible to both HSF1 and RNA polymerase II, allowing stress-inducible Hsp expression to occur. As a consequence, HSF2-deficient mitotic cells have a survival advantage upon acute heat stress. The results presented in this thesis contribute to the understanding of the regulatory mechanisms of HSF2 and its function in the heat shock response in both interphase and mitotic cells.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Personalized nanomedicine has been shown to provide advantages over traditional clinical imaging, diagnosis, and conventional medical treatment. Nanoparticles can enhance clinical targeting and imaging and deliver therapeutics precisely to the site in the body that is the target of treatment. At the same time, the side effects that usually occur in parts of the body that are not targets of the treatment can be reduced. Nanoparticles are of a size that can penetrate cells. Their surface functionalization offers a way to increase their sensitivity when detecting target molecules; in addition, it increases the flexibility in particle design, in their therapeutic function, and in the variation possibilities for diagnostics. Mesoporous nanoparticles of amorphous silica have attractive physical and chemical characteristics, such as particle morphology, controllable pore size, and high surface area and pore volume. Additionally, the surface functionalization of silica nanoparticles is relatively straightforward, which enables optimization of the interaction between the particles and the biological system. The main goal of this study was to prepare traceable and targetable silica nanoparticles for medical applications, with a special focus on particle dispersion stability, biocompatibility, and targeting capabilities. Nanoparticle properties are highly particle-size dependent, and good dispersion stability is a prerequisite for active therapeutic and diagnostic agents. The study showed that traceable streptavidin-conjugated silica nanoparticles exhibiting good dispersibility could be obtained by choosing a suitable surface functionalization route. Theranostic nanoparticles should exhibit sufficient hydrolytic stability to effectively carry the drug to the target cells, after which they should disintegrate and dissolve.
Furthermore, the surface groups should stay at the particle surface until the particle has been internalized by the cell, in order to optimize cell specificity. Model particles with fluorescently labeled regions were tested in vitro using light microscopy and image processing, which allowed a detailed study of the disintegration and dissolution process. The study showed that nanoparticles degrade more slowly outside the cell than inside it. The main advantage of theranostic agents is their successful targeting in vitro and in vivo. Non-porous nanoparticles with monoclonal antibodies as guiding ligands were tested in vitro to follow their targeting ability and internalization. The targeting was found successful, and in addition a specific internalization route for the particles could be detected. In the last part of the study, the objective was to clarify the feasibility of traceable mesoporous silica nanoparticles, loaded with a hydrophobic cancer drug, for targeted drug delivery in vitro and in vivo. The particles were provided with a small-molecule targeting ligand. In the study, a significantly higher therapeutic effect could be achieved with the nanoparticles than with the free drug. The nanoparticles were biocompatible and stayed in the tumor longer than the free drug did, before being eliminated by renal excretion. Overall, the results showed that mesoporous silica nanoparticles are biocompatible, biodegradable drug carriers and that cell specificity can be achieved both in vitro and in vivo.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms that ensure cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in an increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm, and on-demand video transcoding, which is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications.
The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of the virtualized application servers.
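The interplay of reactive provisioning and admission control described above can be illustrated with a minimal threshold controller. This is only a sketch of the general pattern; the class name, thresholds, and utilization metric are hypothetical and do not reproduce the actual ARVUE or ACVAS algorithms.

```python
# Illustrative sketch of reactive VM provisioning with admission control.
# Thresholds, names, and the load metric are hypothetical examples, not
# the actual algorithms from the thesis.

class ReactiveProvisioner:
    def __init__(self, scale_up=0.8, scale_down=0.3, min_vms=1):
        self.scale_up = scale_up      # utilization above which a VM is added
        self.scale_down = scale_down  # utilization below which a VM is removed
        self.min_vms = min_vms
        self.vms = min_vms

    def step(self, utilization):
        """React to the current average VM utilization (0.0-1.0)."""
        if utilization > self.scale_up:
            self.vms += 1             # over-utilized: provision a new VM
        elif utilization < self.scale_down and self.vms > self.min_vms:
            self.vms -= 1             # under-utilized: consolidate
        return self.vms

    def admit(self, utilization):
        """Session-based admission control: reject new sessions while the
        already provisioned VMs are overloaded."""
        return utilization <= self.scale_up


p = ReactiveProvisioner()
print(p.step(0.9))    # load spike: scales from 1 to 2 VMs
print(p.step(0.2))    # load drop: consolidates back to 1 VM
print(p.admit(0.95))  # overloaded: new session rejected -> False
```

The two thresholds capture the trade-off stated in the abstract: a low scale-up threshold risks over-provisioning (cost), a high one risks under-provisioning (subpar service), and admission control shields the service while new capacity is still being provisioned.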

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Besides steel, the steel industry also produces solid mineral by-products, or slags, while emitting large quantities of carbon dioxide (CO2). Slags consist of various silicates and oxides which are formed in chemical reactions between the iron ore and the fluxing agents during high-temperature processing at the steel plant. Currently, these materials are recycled in the ironmaking processes, used as aggregates in construction, or landfilled as waste. The utilization rate of steel slags can be increased by selectively extracting components from the mineral matrix. As an example, aqueous solutions of ammonium salts such as ammonium acetate, chloride and nitrate extract calcium quite selectively, even at ambient temperature and pressure. After the residual solids have been separated from the solution, calcium carbonate can be precipitated by feeding a CO2 flow through the solution. Precipitated calcium carbonate (PCC) is used as a filler material in different applications. Its largest consumer is the papermaking industry, which utilizes PCC because it enhances the optical properties of paper at a relatively low cost. Traditionally, PCC is manufactured from limestone, which is first calcined to calcium oxide, then slaked with water to calcium hydroxide, and finally carbonated to PCC. This process emits large amounts of CO2, mainly because of the energy-intensive calcination step. This thesis presents research work on the scale-up of the above-mentioned ammonium salt based calcium extraction and carbonation method, named Slag2PCC. Extending the scope of the earlier studies, it is now shown that the parameters which mainly affect the calcium utilization efficiency are the solid-to-liquid ratio of steel slag and ammonium salt solvent solution during extraction, the mean diameter of the slag particles, and the slag composition, especially the fractions of total calcium, silicon, vanadium and iron, as well as the fraction of free calcium oxide.
Regarding extraction kinetics, slag particle size, solid-to-liquid ratio and the molar concentration of the solvent solution have the largest effect on the reaction rate. Solvent concentrations above 1 mol/L NH4Cl cause leaching of other elements besides calcium. Some of these, such as iron and manganese, result in solution coloring, which can be disadvantageous for the quality of the PCC product. Based on chemical composition analysis of the produced PCC samples, however, the product quality is largely similar to that of commercial products. Increasing the novelty of the work, other important parameters related to the assessment of PCC quality, such as particle size distribution and crystal morphology, are studied as well. As in the traditional PCC precipitation process, the ratio of calcium and carbonate ions controls the particle shape: a higher [Ca2+]/[CO32-] value favors precipitation of the calcite polymorph, while vaterite forms when carbonate species are present in excess. The third main polymorph, aragonite, is only formed at elevated temperatures, above 40-50 °C. In general, longer precipitation times cause transformation of vaterite to calcite or aragonite, but also result in particle agglomeration. The chemical equilibrium of ammonium and calcium ions and dissolved ammonia, which controls the solution pH, affects the particle sizes as well. An initial pH of 12-13 during carbonation favors non-agglomerated particles with a diameter of 1 μm and smaller, while pH values of 9-10 generate more agglomerates of 10-20 μm. As part of the research work, these findings are implemented in demonstration-scale experimental process setups. For the first time, the Slag2PCC technology is tested at a scale of ~70 liters instead of laboratory scale only. Additionally, the design of a setup of several hundred liters is discussed.
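The qualitative polymorph and particle-size trends above can be summarized as simple decision rules. The sketch below merely encodes the cut-offs stated in the text (calcium vs. carbonate excess, the ~40-50 °C aragonite threshold, and the two pH regimes); the exact ratio boundary of 1.0 is an illustrative assumption, not a predictive crystallization model.

```python
# Illustrative encoding of the reported Slag2PCC carbonation trends.
# Numeric cut-offs follow the text; the ratio boundary of 1.0 is an
# assumed, hypothetical simplification.

def expected_polymorph(ca_to_co3_ratio, temperature_c):
    """Rough polymorph expectation from the [Ca2+]/[CO3^2-] ratio and T."""
    if temperature_c > 50:         # elevated temperature, above ~40-50 C
        return "aragonite"
    if ca_to_co3_ratio > 1.0:      # calcium ions in excess
        return "calcite"
    return "vaterite"              # carbonate species in excess

def expected_particle_size(initial_ph):
    """Particle-size trend vs. initial carbonation pH."""
    if initial_ph >= 12:           # pH 12-13: non-agglomerated, <= 1 um
        return "<=1 um, non-agglomerated"
    return "10-20 um agglomerates"  # pH ~9-10

print(expected_polymorph(2.0, 25))   # calcium excess at room T -> calcite
print(expected_polymorph(0.5, 25))   # carbonate excess -> vaterite
print(expected_polymorph(1.5, 60))   # elevated temperature -> aragonite
print(expected_particle_size(12.5))  # high-pH regime
```

Such a rule-of-thumb table is how the reported findings would feed into process control: given a target product (e.g. non-agglomerated calcite filler), it points to the carbonation conditions to aim for.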
For these purposes, various process units, such as inclined settlers and filters for solids separation, pumps and stirrers for material transfer and mixing, as well as gas feeding equipment, are dimensioned and developed. With overall emissions reduction of the current industrial processes and good product quality as the main targets, the performed partial life cycle assessment (LCA) indicates that it is most beneficial to utilize low-concentration ammonium salt solutions in the Slag2PCC process. In this manner, the post-treatment of the products does not require extensive use of washing and drying equipment, which would otherwise increase the CO2 emissions of the process. The low solvent concentration Slag2PCC process causes negative CO2 emissions; thus, it can be seen as a carbon capture and utilization (CCU) method, which actually reduces anthropogenic CO2 emissions compared to the alternative of not using the technology. Even if the amount of steel slag is too small for any substantial mitigation of global warming, the process can have both financial and environmental significance for individual steel manufacturers as a means to reduce the amounts of emitted CO2 and landfilled steel slag. Alternatively, it is possible to introduce the carbon dioxide directly into the mixture of steel slag and ammonium salt solution. This process would generate a 60-75% pure calcium carbonate mixture, with the remaining 25-40% consisting of the residual steel slag. This calcium-rich material could be re-used in ironmaking as a fluxing agent instead of natural limestone. Even though this process option would require less process equipment than the Slag2PCC process, it still needs further studies regarding the practical usefulness of the products.
Nevertheless, compared to several other CO2 emission reduction methods studied around the world, the processes developed and studied in this thesis have the advantage of existing markets for the produced materials, thus also providing a financial incentive for applying the technology in practice.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Initially identified as stress-activated protein kinases (SAPKs), the c-Jun N-terminal kinases (JNKs) are currently accepted as potent regulators of various physiologically important cellular events. Named after their ability to phosphorylate the transcription factor c-Jun in response to UV treatment, JNKs play a key role in cell proliferation, cell death and cell migration. Interestingly, these functions are crucial for proper brain formation. The family consists of three JNK isoforms: JNK1, JNK2 and JNK3. Unlike the brain-specific JNK3 isoform, JNK1 and JNK2 are ubiquitously expressed. An estimated ten splice variants exist; however, their detailed cellular functions remain undetermined. In addition, under physiological conditions the activities of JNK2 and JNK3 are kept low in comparison with JNK1, whereas cellular stress raises the activity of these isoforms dramatically. Importantly, JNK1 activity is constitutively high in neurons, yet it does not stimulate cell death. This suggests a valuable role for JNK1 in brain development, but also as an important mediator of cell wellbeing. The aim of this thesis was to characterize the functional relationship between JNK1 and SCG10. We found that SCG10 is a bona fide target of JNK. By employing differential centrifugation, we showed that SCG10 co-localized with active JNK, MKK7 and JIP1 in a fraction containing endosomes and Golgi vesicles. Investigation of JNK knockout tissues using phospho-specific antibodies recognizing the JNK-specific phosphorylation sites on SCG10 (Ser62/Ser73) showed that phosphorylation of endogenous SCG10 was dramatically decreased in Jnk1-/- brains. Moreover, we found that JNK and SCG10 are co-expressed during early embryonic days in brain regions that undergo extensive neuronal migration. Our study revealed that selective inhibition of JNK in the cytoplasm significantly increased both the frequency of exit from the multipolar stage and the radial migration rate.
However, as a consequence, it led to ill-defined cellular organization. Furthermore, we found that multipolar exit and radial migration in Jnk1-deficient mice can be connected to changes in the phosphorylation state of SCG10. Moreover, the expression of a pseudo-phosphorylated mutant form of SCG10, mimicking the JNK1-phosphorylated form, brings the migration rate back to normal in Jnk1 knockout mouse embryos. Furthermore, we investigated the role of SCG10 and JNK in the regulation of Golgi apparatus (GA) biogenesis and whether pathological JNK action could be discernible through its deregulation. We found that SCG10 maintains GA integrity, since in the absence of SCG10 neurons present a more compact, fragmented GA structure, as shown by a knockdown approach. Interestingly, neurons isolated from Jnk1-/- mice show similar characteristics. A block in ER-to-GA trafficking is believed to be involved in the development of Parkinson's disease. Hence, using a pharmacological approach (Brefeldin A treatment), we showed that GA recovery upon removal of the drug is delayed in Jnk1-/- neurons to an extent similar to that in SCG10 shRNA-treated cells. Finally, we investigated the role of the JNK1-SCG10 duo in the maintenance of GA biogenesis following excitotoxic insult. Although the GA underwent fragmentation in response to NMDA treatment, we observed a substantial delay in GA disintegration in neurons lacking either JNK1 or SCG10.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Paper-based analytical technologies enable quantitative and rapid analysis of analytes in various application areas, including healthcare, environmental monitoring and food safety. Because paper is a planar, flexible and lightweight substrate, the devices can be transported and disposed of easily. Diagnostic devices are especially valuable in resource-limited environments, where diagnosis as well as monitoring of therapy can be performed even without electricity by using e.g. colorimetric assays. On the other hand, platforms including printed electrodes can be coupled with hand-held readers. They enable electrochemical detection with improved reliability, sensitivity and selectivity compared with colorimetric assays. In this thesis, different roll-to-roll compatible printing technologies were utilized for the fabrication of low-cost paper-based sensor platforms. The platforms intended for colorimetric assays and microfluidics were fabricated by patterning the paper substrates with a hydrophobic, vinyl-substituted polydimethylsiloxane (PDMS) -based ink. Depending on the barrier properties of the substrate, the ink either penetrates into the paper structure, creating e.g. microfluidic channel structures, or remains on the surface, creating a 2D analog of a microplate. The printed PDMS can be cured by a roll-to-roll compatible infrared (IR) sintering method. The performance of these platforms was studied by printing glucose oxidase-based ink on the PDMS-free reaction areas. The subsequent application of the glucose analyte changed the colour of the white reaction area to purple, with the colour density and intensity depending on the concentration of the glucose solution. Printed electrochemical cell platforms were fabricated on paper substrates with appropriate barrier properties by inkjet-printing metal nanoparticle based inks and by IR sintering them into conducting electrodes.
Printed PDMS arrays were used to direct the liquid analyte onto predetermined spots on the electrodes. Various electrochemical measurements were carried out both with bare electrodes and with electrodes functionalized with e.g. self-assembled monolayers. An electrochemical glucose sensor was selected as a proof-of-concept device to demonstrate the potential of the printed electronic platforms.
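Quantifying a colorimetric readout like the glucose assay above typically requires a calibration step: fit the colour intensity measured for known concentrations, then invert the fit for an unknown sample. The sketch below shows that step in general terms; the calibration points and the assumption of a linear response are hypothetical, not measurements from the thesis.

```python
# Minimal sketch of calibrating a colorimetric assay readout.
# The data points and linear model are hypothetical illustrations.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration: glucose concentration (mM) vs. colour intensity.
conc = [0.0, 2.0, 4.0, 8.0]
intensity = [0.02, 0.22, 0.42, 0.82]

a, b = fit_line(conc, intensity)

def glucose_from_intensity(i):
    """Invert the calibration line to estimate concentration."""
    return (i - b) / a

print(round(glucose_from_intensity(0.52), 2))  # -> 5.0 mM
```

In practice, the intensity would come from image processing of the reaction area (as used for the disintegration studies elsewhere in this listing), and the response may need a nonlinear model at high concentrations.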