14 results for cascade of pi-circuits
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The focus of the present work was on 10- to 12-year-old elementary school students’ conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine if it would be more beneficial to combine than contrast simulation and laboratory activities in science teaching. It was argued that the status quo where laboratories and simulations are seen as alternative or competing methods in science teaching is hardly an optimal solution to promote students’ learning and understanding in various science domains. It was hypothesized that it would make more sense and be more productive to combine laboratories and simulations. Several explanations and examples were provided to back up the hypothesis. In order to test whether learning with the combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and simulation-laboratory combination (real and virtual circuits were used simultaneously). In order to measure and compare how these environments affected students’ conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments were presented in four empirical studies. Three of the studies focused on learning outcomes between the conditions and one on learning processes. Study I analyzed learning outcomes from experiment I. The aim of the study was to investigate if it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. 
Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students’ conceptual understanding more than studying circuits in the simulation or laboratory environment alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students’ conceptual understanding remained at an elementary level after the intervention. Study II analyzed learning outcomes from experiment II. The aim of the study was to investigate if and how learning outcomes in simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions. The results showed that when the students were working with the simulation alone, they gained a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE). 
A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment. In the combination environment explicit instruction (CE) did not seem to elicit much additional gain in students’ understanding of electric circuits compared to implicit instruction (CI). Instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III analyzed, from video data, the learning processes of the 50 students who participated in experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find possible explanations for the success of the combination condition in Experiments I and II. The video data provided clear evidence of the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other, that is, one representation helped students to interpret and understand the outcomes they received from the other representation. However, there were also instances in which analogical encoding took place, that is, situations in which slightly discrepant results between the representations ‘forced’ students to focus on those features that could be generalised across the two representations. No statistical differences were found in the number of cognitive conflicts and self-explanations between the simulation and combination conditions, though for self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the number of self-explanations. Overall, the number of cognitive conflicts and self-explanations was very low. 
The aim of Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of experiments I and II; the secondary aim was to explore the relationship between the learning environments and students’ prior domain knowledge (low and high) in the experiments. Aggregated results of experiments I and II showed that, on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them also scored above the average of the simulation environment. Seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that overall students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge; that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge. However, a more detailed inspection of the results showed that there were considerable differences between the experiments regarding how students with low and high prior knowledge benefitted from the combination: in Experiment I, especially students with low prior knowledge benefitted from the combination as compared to those students who used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between the simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulation along with laboratory activities to promote students’ conceptual understanding of electricity. 
It can be concluded that when teaching students about electricity, they can gain a better understanding when they have an opportunity to use the simulation and the real circuits in parallel than if they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with explicit instruction. The outcomes of the empirical studies can be considered the first unambiguous evidence of the (additional) benefits of combining laboratory and simulation activities in science education as compared to learning with laboratories or simulations alone.
Abstract:
In this work, the feasibility of floating-gate technology for analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to get worse because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics such as power, area, bandwidth and accuracy are interrelated. Furthermore, poor flexibility, i.e. the lack of reconfigurability, reuse of IP etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with a practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended to provide devices that operate with higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis. 
To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer SNN in which the adaptive properties of FGTs are taken advantage of. A compact realization of Spike Timing Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability for analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
Abstract:
Deregulated proliferation has been recognized as among the most important factors promoting breast cancer development and progression. The aim of the project is to gain understanding of the role of specific cell cycle regulators of the metaphase-anaphase transition and to evaluate their potential in breast cancer prognostication and treatment decisions. The metaphase-anaphase transition is triggered by activation of the anaphase promoting complex (APC), which is activated by a cascade of regulatory proteins, among them securin, Cdc20 and Cdc27. These proteins promote the metaphase-anaphase transition and participate in the timely separation of the chromatids. This study is based on material from approximately 600 breast cancer patients and up to 22 years of follow-up. As the main observation, based on DNA cytometric and immunohistochemical methods, securin, Cdc20 and Cdc27 protein expression was associated with abnormal DNA content and the outcome of breast cancer. In the studied patient material, high securin expression alone and in combination with Cdc20 and Cdc27 predicted up to 9.8-fold odds for aneuploid DNA content in human breast cancer. In Kaplan–Meier analyses, high expression of securin systematically indicated a decrease in breast cancer survival as compared to low-expression cases. The adverse effect of high securin expression was further strengthened by combining it with Cdc20 or Cdc27 expression, resulting in up to a 6.8-fold risk of breast cancer death. High securin and Cdc20 expression was also associated with the triple-negative breast cancer type with high statistical significance. Securin, Cdc20 and Cdc27 have not previously been investigated in a clinically relevant large breast cancer patient material or in association with DNA ploidy. 
The present findings suggest that the studied proteins may serve as potential biomarkers for identification of aggressive course of disease and unfavourable outcome of human breast cancer, and that they may provide a future research aim for understanding abnormal proliferation in malignant disease.
Abstract:
In this engineering thesis, an equivalent circuit suitable for illustrating the calculation of the symmetrical components of short circuits occurring in distribution networks was designed for Helsinki Polytechnic. Important considerations for the equivalent circuit included, among others, the voltage level, the short-circuit withstand capability of the demonstration transformers, further use as a laboratory exercise, and general illustrative value. The work first reviews the theory of symmetrical components and of short circuits occurring in distribution networks. Next, the voltage and current withstand ratings of the required circuit components were dimensioned, taking possible additional uses into account. The work was then carried out within these constraints. A transformer stepping the supply down to a 40 V line-to-line voltage level was procured to feed a short-circuit-proof transformer, with which the most common fault types of a distribution network were simulated. For the latter transformer, an inductance corresponding to its internal impedance was dimensioned and procured. With these, an assembly was built that can simulate the equivalent circuits corresponding to all occurring short-circuit types. Room for further development and other possibilities for building laboratory exercises were left for the authors of future engineering theses.
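The abstract above concerns the symmetrical components of network faults. As a minimal illustration of the underlying calculation, the following Python sketch applies the Fortescue transform to three phase voltage phasors; the 230 V balanced-system values are illustrative only, not taken from the thesis:

```python
import cmath

# Fortescue transform: decompose three phase voltages into
# zero-, positive- and negative-sequence components.
# The operator a rotates a phasor by 120 degrees.
a = cmath.exp(2j * cmath.pi / 3)

def symmetrical_components(va, vb, vc):
    """Return (zero, positive, negative) sequence components."""
    v0 = (va + vb + vc) / 3
    v1 = (va + a * vb + a**2 * vc) / 3
    v2 = (va + a**2 * vb + a * vc) / 3
    return v0, v1, v2

# Balanced system: only the positive-sequence component survives.
va = cmath.rect(230, 0)
vb = cmath.rect(230, -2 * cmath.pi / 3)
vc = cmath.rect(230, 2 * cmath.pi / 3)
v0, v1, v2 = symmetrical_components(va, vb, vc)
```

For an unbalanced fault (e.g. a single line-to-ground short circuit), the zero- and negative-sequence components become non-zero, which is exactly what the equivalent circuit is meant to demonstrate.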
Abstract:
A power electronics device refers to a control and regulation system that converts electricity from its available form into a desired new form while controlling the flow of electrical power from the source to the point of use. This differs from signal electronics, where electricity is typically used to transfer information by means of different states. When comparing power electronics devices, their reliability, size, efficiency, control accuracy and, of course, price are usually considered. Typical power electronics devices include frequency converters, UPS (Uninterruptible Power Supply) devices, welding machines, induction heaters and various power supplies. Traditionally, the control of these devices is implemented using microprocessors, ASICs (Application Specific Integrated Circuits) or ICs (Integrated Circuits) and analog controllers. This study analyses the suitability of FPGAs (Field Programmable Gate Arrays) for the control of power electronics. The structure of an FPGA consists of various logic elements and the interconnections between them. The logic elements are gate circuits and flip-flops. The interconnections and logic elements are fixed in the chip, and their composition or number cannot be changed afterwards. The programmability arises from the connections between the elements. The chip contains numerous switches, up to millions, whose state can be set. Thus, a countless number of different functional configurations can be formed from the basic elements of the chip. FPGAs have long been used in communications products, and their development has therefore been rapid in recent years. At the same time, prices have dropped. As a result, the FPGA has become an interesting alternative also for the control of power electronics devices. In this doctoral work, the suitability of FPGAs has been studied using two demanding and different practical power electronics devices: a frequency converter and a welding machine. 
Suitable prototypes were built for both test cases together with Finnish industrial companies in the field, and their control electronics was converted to FPGA-based. In addition, new types of control methods exploiting this new technology were developed. The performance of the prototypes was compared with corresponding commercial products controlled by traditional methods, and the advantages brought by the parallel computation enabled by FPGAs were observed in the operation of both power electronics devices. The work also presents new methods and tools for the development and testing of an FPGA-based control system. With the presented methods, product development can be made as fast and efficient as possible. In addition, an internal FPGA control and communication bus structure serving power electronics control applications was developed. The new communication structure also promotes the reusability of already implemented subsystems in future applications and product generations.
Abstract:
This master's thesis studied the suitability of commercial multibody dynamics software for studying the dynamics and vibrations of a reel-up. Of particular interest were the description of the nip and the vibrations occurring in it. In this thesis, the primary and secondary drives of the reel-up and the reel spool were modelled. The model was later combined with a model made as a parallel master's thesis at Metso Paper Järvenpää, forming a simulation model based on two solvers. The simulation model was built to use two separate solvers: one is the ADAMS software used for building the mechanical model, and the other a Simulink model describing the control system and hydraulic circuits. To model the nip, the reel spool and the reeling cylinder were modelled as flexible using the lumped mass method. The flexibilities of the transfer devices and frame structures were described as single-degree-of-freedom systems represented by spring-damper forces. This thesis also presents the operation of the ADAMS software in a tutorial manner and discusses the advantages of parametric modelling. The work demonstrated the suitability of multibody dynamics for studying the dynamics of a reel-up and the vibrations caused by dynamic forces. Only estimates could be made from the vibration measurements performed. The model was found to require further research and development.
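The abstract above describes flexibilities modelled as single-degree-of-freedom spring-damper force systems. The following Python sketch shows the principle with a lumped mass; all parameter values are illustrative, not taken from the thesis model:

```python
def spring_damper_force(x, v, k, c, x0=0.0):
    """Single-degree-of-freedom spring-damper: F = -k*(x - x0) - c*v."""
    return -k * (x - x0) - c * v

# Tiny semi-implicit Euler simulation of a lumped mass hanging on
# such a spring-damper (illustrative parameters, SI units assumed).
m, k, c, dt = 1.0, 1000.0, 10.0, 1e-3
x, v = 0.01, 0.0                 # released 10 mm from equilibrium
for _ in range(5000):            # simulate 5 s
    a = spring_damper_force(x, v, k, c) / m
    v += a * dt                  # update velocity first (symplectic)
    x += v * dt
# With damping present, the oscillation decays toward x = 0.
```

In the multibody model the same force expression would be evaluated between the connected bodies at every solver step instead of on a single mass.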
Abstract:
The objective of this thesis is to study wavelets and their role in turbulence applications. Under scrutiny in the thesis is the intermittency in turbulence models. Wavelets are used as a mathematical tool to study the intermittent activities that turbulence models produce. The first section introduces wavelets and wavelet transforms as a mathematical tool in general. Moreover, the basic properties of turbulence are discussed and classical methods for modeling turbulent flows are explained. Wavelets are implemented to model the turbulence as well as to analyze turbulent signals. The model studied here is the GOY (Gledzer 1973, Ohkitani & Yamada 1989) shell model of turbulence, which is a popular model for explaining intermittency based on the cascade of kinetic energy. The goal is to introduce a better quantification method for the intermittency obtained in a shell model. Wavelets are localized in both space (time) and scale; therefore, they are suitable candidates for the study of the singular bursts that interrupt the calm periods of an energy flow through various scales. The study concerns two questions, namely the frequency of occurrence as well as the intensity of the singular bursts at various Reynolds numbers. The results showed that singularities become more local as the Reynolds number increases. The singularities also become more local when the shell number is increased at a given Reynolds number. The study revealed that the singular bursts are more frequent at Re ~ 10^7 than in the other cases with lower Re. The intermittency of bursts for the cases with Re ~ 10^6 and Re ~ 10^5 was similar, but for the case with Re ~ 10^4 bursts occurred after long waiting times in a different fashion, so that it could not be scaled with higher Re.
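As an illustration of why wavelets suit burst detection, the following Python sketch convolves a signal containing one isolated burst with a Mexican-hat (Ricker) wavelet at several scales; the largest coefficient at every scale lands on the burst location. The wavelet choice, scales, and test signal are illustrative assumptions, not the thesis setup:

```python
import numpy as np

def ricker(scale, length):
    """Mexican-hat (Ricker) wavelet, unnormalized, on `length` points."""
    t = np.arange(length) - (length - 1) / 2
    x = t / scale
    return (1 - x**2) * np.exp(-x**2 / 2)

def cwt(signal, scales):
    """Continuous wavelet transform by direct convolution."""
    return np.array([np.convolve(signal, ricker(s, 10 * int(s) + 1),
                                 mode='same') for s in scales])

# A calm signal interrupted by one singular burst at index 500.
sig = np.zeros(1000)
sig[500] = 1.0
coeffs = cwt(sig, scales=[2, 4, 8])
# At every scale the largest coefficient sits at the burst location,
# showing the localization in both time and scale.
```

For a shell-model energy-flux signal, the same coefficient map would reveal how frequent and how intense the bursts are at each scale.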
Abstract:
Members of the bacterial genus Streptomyces are well known for their ability to produce an exceptionally wide selection of diverse secondary metabolites. These include natural bioactive chemical compounds which have potential applications in medicine, agriculture and other fields of commerce. The outstanding biosynthetic capacity derives from the characteristic genetic flexibility of Streptomyces secondary metabolism pathways: i) clustering of the biosynthetic genes in chromosome regions redundant for vital primary functions, and ii) the presence of numerous genetic elements within these regions which facilitate DNA rearrangement and transfer between non-progeny species. Decades of intensive genetic research on the organization and function of the biosynthetic routes have led to a variety of molecular biology applications, which can be used to expand the diversity of compounds synthesized. These include techniques which, for example, allow modification and artificial construction of novel pathways, and enable gene-level detection of silent secondary metabolite clusters. Over the years the research has expanded to cover molecular-level analysis of the enzymes responsible for the individual catalytic reactions. In vitro studies of the enzymes provide a detailed insight into their catalytic functions, mechanisms, substrate specificities, interactions and stereochemical determinants. These are factors that are essential for the thorough understanding and rational design of novel biosynthetic routes. The current study is a part of a more extensive research project (Antibiotic Biosynthetic Enzymes; www.sci.utu.fi/projects/biokemia/abe), which focuses on the post-PKS tailoring enzymes involved in various type II aromatic polyketide biosynthetic pathways in Streptomyces bacteria. The initiative here was to investigate specific catalytic steps in anthracycline and angucycline biosynthesis through in vitro biochemical enzyme characterization and structural enzymology. 
The objectives were to elucidate detailed mechanisms and enzyme-level interactions which cannot be resolved by in vivo genetic studies alone. The first part of the experimental work concerns the homologous polyketide cyclases SnoaL and AknH. These catalyze the closure of the last carbon ring of the tetracyclic carbon frame common to all anthracycline-type compounds. The second part of the study primarily deals with the tailoring enzymes PgaE (and its homolog CabE) and PgaM, which are responsible for a cascade of sequential modification reactions in angucycline biosynthesis. The results complemented earlier in vivo findings and confirmed the enzyme functions in vitro. Importantly, we were able to identify the amino-acid-level determinants that influence AknH and SnoaL stereoselectivity and to determine the complex biosynthetic steps of the angucycline oxygenation cascade of PgaE and PgaM. In addition, the findings revealed interesting cases of enzyme-level adaptation, as some of the catalytic mechanisms did not coincide with those described for characterised homologs or enzymes of known function. Specifically, SnoaL and AknH were shown to employ a novel acid-base mechanism for aldol condensation, whereas the hydroxylation reaction catalysed by PgaM involved unexpected oxygen chemistry. Owing to a gene-level fusion of two ancestral reading frames, PgaM was also shown to adopt an unusual quaternary structure, a non-covalent fusion complex of two alternative forms of the protein. Furthermore, the work highlighted some common themes encountered in polyketide biosynthetic pathways, such as enzyme substrate specificity and intermediate reactivity. These are discussed in the final chapters of the work.
Abstract:
This thesis studies the use of a PI controller in dynamic tension control without actual tension feedback. Tension is controlled indirectly by using the position difference between two rolls as the feedback signal. The tension controller is implemented in parallel with the speed controller. The parallel structure aims at a control solution that reacts quickly to tension changes. The structure is implemented as part of the control chain of a frequency converter. The thesis presents a simulation model of the roll system, whose validity is verified by practical measurements. In addition, the performance of the tension control in dynamic tension control is evaluated on the basis of simulations and of measurements performed with a test setup.
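The abstract above describes a PI tension controller running in parallel with a speed controller, with the roll position difference as the feedback signal. The structure can be sketched as follows in Python; the gains and sampling time are illustrative assumptions, not the values used in the thesis:

```python
class PIController:
    """Discrete PI controller (rectangular integration)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Indirect tension control: the feedback is the position difference
# between the two rolls, not a measured tension. Gains are illustrative.
speed_pi = PIController(kp=2.0, ki=0.5, dt=0.001)
tension_pi = PIController(kp=1.0, ki=0.1, dt=0.001)

def control_output(speed_ref, speed_meas, pos_diff_ref, pos_diff_meas):
    # The tension correction is added in parallel to the speed loop,
    # so a tension disturbance acts on the drive without waiting for
    # the speed loop to react.
    return (speed_pi.update(speed_ref - speed_meas)
            + tension_pi.update(pos_diff_ref - pos_diff_meas))
```

In a drive implementation this sum would form the torque or speed correction reference inside the frequency converter's control chain.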
Abstract:
Lipotoxicity is a condition in which fatty acids (FAs) are not efficiently stored in adipose tissue and overflow to non-adipose tissue, causing organ damage. A defect in the FA storage capability of adipose tissue can be the primary culprit in the insulin resistance condition that characterizes many of the severe metabolic diseases that affect people nowadays. Obesity, in this regard, constitutes the gateway to and a risk factor for the major killers of modern society, such as cardiovascular disease and cancer. A deep understanding of the pathogenetic mechanisms that underlie obesity and the insulin resistance syndrome is a challenge for modern medicine. In the last twenty years of scientific research, FA metabolism and its dysregulation have been the object of numerous studies. The development of more targeted and quantitative methodologies is required, on the one hand, to investigate and dissect organ metabolism and, on the other, to test the efficacy and mechanisms of action of novel drugs. The combination of functional and anatomical imaging is an answer to this need, since it provides more understanding and more information than we have ever had. The first purpose of this study was to investigate abnormalities of substrate organ metabolism, with special reference to FA metabolism, in obese drug-naïve subjects at an early stage of disease. Secondly, trimetazidine (TMZ), a metabolic drug thought to inhibit FA oxidation (FAO), was evaluated for the first time in obese subjects to test a whole-body and organ metabolism improvement, based on the hypothesis that FAO is increased at an early stage of the disease. A third objective was to investigate the relationship between ectopic fat accumulation surrounding the heart and coronaries and impaired myocardial perfusion in patients at risk of coronary artery disease (CAD). 
In the current study a new methodology was developed using PET imaging with 11C-palmitate and compartmental modelling for the non-invasive in vivo study of liver FA metabolism, and a similar approach was used to study FA metabolism in skeletal muscle, adipose tissue and the heart. The results of the different substudies point in the same direction. Obesity, at an early stage, is associated with an impairment in the esterification of FAs in adipose tissue and skeletal muscle, accompanied by upregulation of FAO in skeletal muscle, liver and heart. The inability to store fat may initiate a cascade of events leading to FA oversupply to lean tissue, overload of the oxidative pathway, and accumulation of toxic lipid species and triglycerides; this was paralleled by a proportional growth in insulin resistance. In subjects with CAD, the accumulation of ectopic fat inside the pericardium is associated with impaired myocardial perfusion, presumably via a paracrine/vasocrine effect. At the beginning of the disease, TMZ is not detrimental to health; on the contrary, at the single-organ level (heart, skeletal muscle and liver) it seems beneficial, while no relevant effects were found on adipose tissue function. Taken altogether, these findings suggest that adipose tissue storage capability should be preserved if it is not possible to prevent excessive fat intake in the first place.
Abstract:
Stress signals are often sensed by membrane-bound proteins that translate the signals into chemical modification of molecules, often protein kinases. These kinases transmit the decoded messages to specific transcription factors through a cascade of sequential phosphorylation events; the transcription factors in turn activate the genes needed to respond to the stress. One of the best-known targets of stress signals is the AP-1 transcription factor family member c-Jun. In this study I have identified the nucleolar protein AATF as a new regulator of c-Jun-mediated transcriptional activity. I show that stress stimuli induce relocalization of AATF, which in turn leads to activation of c-Jun. The AATF-mediated increase in c-Jun activity leads to a significant increase in programmed cell death. In parallel, I have further characterized the Cdk5/p35 signalling complex, previously identified in our laboratory as an important factor for myoblast differentiation. I identified the atypical PKCξ as an upstream regulator of the Cdk5/p35 complex and show that the cleavage and activation of the Cdk5 regulator p35 is of physiological importance for the differentiation process and dependent on PKCξ activity. I show that upon induction of differentiation, PKCξ phosphorylates p35, which leads to calpain-mediated cleavage of p35 and thereby an increase in Cdk5 activity. This thesis increases the understanding of the regulatory mechanisms that govern c-Jun transcriptional activity and c-Jun-dependent apoptosis by identifying AATF as an important factor. In addition, this work provides new insights into the function of the Cdk5/p35 complex during myoblast differentiation and identifies PKCξ as an upstream regulator of Cdk5 activity and myoblast differentiation.
Abstract:
Potentially explosive atmospheres impose additional requirements on instrumentation design. The aim of this master's thesis is to survey the requirements that the standards for explosive atmospheres place on instrumentation design and equipment selection. Intrinsic safety, the type of explosion protection typically used for instrumentation equipment, is examined in more detail. In the intrinsically safe type of protection, the energy and surface temperature of the electrical device are limited so that the device cannot ignite an explosive mixture. The thesis also looks into instrumentation circuits containing intrinsically safe devices and the requirements the standard places on them. For such circuits, a verification analysis must be performed to demonstrate that the circuit meets the requirements. As the practical part of the thesis, a verification application is developed that performs the verification using values stored in a database.
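The verification analysis described in the abstract above typically reduces to comparing the entity parameters of the energy source against those of the field device, following the usual intrinsic-safety entity rules (Uo ≤ Ui, Io ≤ Ii, Po ≤ Pi, Co ≥ Ci + Ccable, Lo ≥ Li + Lcable). The following Python sketch shows such a check for a simple one-source, one-device loop; the dictionary layout and the sample values are illustrative assumptions, not taken from the thesis application:

```python
def verify_is_loop(source, device, cable):
    """Entity-parameter check for a simple intrinsically safe loop.
    source: dict with Uo, Io, Po, Co, Lo (e.g. a Zener barrier)
    device: dict with Ui, Ii, Pi, Ci, Li (the field device)
    cable:  dict with C, L (total cable capacitance and inductance)
    Returns (all_pass, per-check results)."""
    checks = {
        'voltage':     source['Uo'] <= device['Ui'],
        'current':     source['Io'] <= device['Ii'],
        'power':       source['Po'] <= device['Pi'],
        'capacitance': source['Co'] >= device['Ci'] + cable['C'],
        'inductance':  source['Lo'] >= device['Li'] + cable['L'],
    }
    return all(checks.values()), checks

# Illustrative values (V, A, W, F, H) resembling a typical barrier/loop.
barrier = {'Uo': 28.0, 'Io': 0.093, 'Po': 0.65, 'Co': 83e-9, 'Lo': 16e-3}
device = {'Ui': 30.0, 'Ii': 0.100, 'Pi': 0.75, 'Ci': 10e-9, 'Li': 1e-3}
cable = {'C': 20e-9, 'L': 1e-3}
ok, detail = verify_is_loop(barrier, device, cable)
```

A database-backed application would evaluate exactly these comparisons for every loop, using the certified parameters stored for each component.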
Abstract:
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines. Efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits using explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied: the pressure drop approaching zero in the turbulent orifice model, and the volume approaching zero in the equation of pressure build-up. 
These are the critical areas for which alternative methods for modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent region. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, for example when a valve is closed, an actuator is driven against an end stop, or an external force makes the actuator switch direction during operation. In terms of accuracy, then, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur as the pressure drop comes close to zero, since the first derivative of flow with respect to pressure drop approaches infinity there. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and a vanishingly small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed in which a cubic spline function describes the flow in the laminar and transition regions. The parameters of the cubic spline are selected such that its first derivative equals the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed; this trade-off is investigated for the two-regime orifice flow model. Very small volumes exist especially inside many types of valves, as well as between them. The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation, and particularly in real-time simulation these problems are a serious weakness. The system stiffness approaches infinity as the fluid volume approaches zero.
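A minimal sketch of such a two-regime model follows. It uses an odd cubic q(Δp) = a1·Δp + a3·Δp³ below a transition pressure drop dp_tr, with a1 and a3 chosen so that the flow and its first derivative match the turbulent square-root law at the boundary; the transition pressure and fluid parameters are illustrative assumptions, and the coefficient formulas are derived here for this sketch rather than taken from the thesis:

```python
import math

def spline_orifice_flow(dp, dp_tr=2e5, Cq=0.6, A=1e-5, rho=870.0):
    """Two-regime orifice model: turbulent sqrt-law above dp_tr,
    odd cubic spline below it.

    Matching q(dp_tr) = Q and q'(dp_tr) = Q/(2*dp_tr) gives
    a1 = 5Q/(4*dp_tr) and a3 = -Q/(4*dp_tr**3), so the gradient at
    dp = 0 is the finite value a1 instead of infinity.
    """
    K = Cq * A * math.sqrt(2.0 / rho)
    Q = K * math.sqrt(dp_tr)              # turbulent flow at the boundary
    a1 = 5.0 * Q / (4.0 * dp_tr)          # linear coefficient
    a3 = -Q / (4.0 * dp_tr ** 3)          # cubic coefficient
    x = abs(dp)
    if x >= dp_tr:
        q = K * math.sqrt(x)              # turbulent regime
    else:
        q = a1 * x + a3 * x ** 3          # laminar/transition spline
    return math.copysign(q, dp)
```

Because both the value and the first derivative are continuous at ±dp_tr, a variable-step integrator no longer collapses its step near zero pressure drop, at the cost of a small modelling error inside the spline region.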
If fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure obtained in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided; moreover, the method is applicable regardless of the integration routine used. A further advantage of both of the above methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. The thesis introduces the methods and presents several numerical examples demonstrating how they improve the dynamic simulation of various hydraulic circuits.
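The idea of the pseudo-dynamic cascade loop can be sketched as a relaxation: rather than integrating dp/dt = (Be/V)·Σq with a near-zero V, the small-volume pressure is driven to the steady state where the net flow vanishes. The gain, linearized flow coefficients, and tolerance below are illustrative assumptions for this sketch, not the thesis implementation:

```python
def pseudo_dynamic_pressure(net_flow, p0, gain, tol=1.0, max_iter=1000):
    """Relax p toward the steady state where net_flow(p) = 0.

    The gain plays the role of an artificial, much larger capacitance,
    so the inner loop converges without the stiffness of the real
    small volume. Converges when |1 - gain * d(net_flow)/dp| < 1.
    """
    p = p0
    for _ in range(max_iter):
        p_next = p + gain * net_flow(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# Hypothetical small node between two linearized orifices:
k_in, k_out, p_supply, p_tank = 1e-9, 1e-9, 1e7, 0.0
net = lambda p: k_in * (p_supply - p) - k_out * (p - p_tank)
p_star = pseudo_dynamic_pressure(net, p0=0.0, gain=4e8)
# steady state: (k_in*p_supply + k_out*p_tank)/(k_in + k_out) = 5e6 Pa
```

The outer fixed-step integrator then sees only the converged steady-state pressure of the small node, so its step size is no longer limited by the tiny volume.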
Resumo:
Recorded: 7-8 May 1953, Los Angeles.