11 results for Two-Fluid Model
in Helda - Digital Repository of University of Helsinki
Abstract:
The blood-brain barrier protects the brain from foreign substances in the bloodstream. In vivo and in vitro methods for studying the blood-brain barrier have been widely reported in the literature, but only a few computational models describing the pharmacokinetics of compounds in the brain have been presented. In this study, a dataset of blood-brain barrier permeability coefficients determined with different in vitro and in vivo methods was collected from the literature. In addition, two pharmacokinetic computational models of the blood-brain barrier were built: a microdialysis model and an efflux model. The microdialysis model is a simple pharmacokinetic model consisting of two compartments (blood circulation and brain). Using parameters determined in vivo, the microdialysis model was used to simulate the concentrations of five compounds in rat brain and blood circulation. The model did not produce concentration curves that exactly matched the in vivo situation, owing to simplifications in the model structure such as the lack of a brain tissue compartment and of transporter protein kinetics. The efflux model has three compartments: the blood circulation, the endothelial cell compartment of the blood-brain barrier, and the brain. The efflux model was used in theoretical simulations to study the significance of an active efflux protein located on the luminal membrane of the blood-brain barrier and of passive permeation for compound concentrations in the brain extracellular fluid. The parameter studied was the steady-state ratio of free compound concentrations between the brain and the blood circulation (Kp,uu). The results showed that the efflux protein affected the concentrations according to Michaelis-Menten kinetics. The efflux model is well suited for theoretical simulations, and active transporters can be added to the model. Theoretical simulations make it possible to combine results from in vitro and in vivo studies, and individual factors can be examined within a single simulation.
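The efflux model described above lends itself to a compact numerical illustration. Below is a minimal Python sketch of a three-compartment (blood, BBB endothelium, brain ECF) model with a Michaelis-Menten efflux pump on the luminal membrane; all volumes, clearances and kinetic constants are hypothetical placeholders rather than values from the thesis, and the point is only to show how the steady-state unbound ratio Kp,uu is obtained from such a simulation.

```python
# Minimal sketch (not the thesis code) of a three-compartment efflux model:
# blood <-> BBB endothelial cell <-> brain ECF, with passive exchange plus a
# Michaelis-Menten efflux pump on the luminal (blood-facing) membrane.
# All parameter values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (volumes in mL, clearances in mL/min, conc. in uM)
V_endo, V_brain = 0.2, 1.8          # endothelial and brain-ECF volumes
CL_pass = 0.05                       # passive clearance across each membrane
Vmax, Km = 0.5, 1.0                  # Michaelis-Menten efflux (endo -> blood)
C_blood = 1.0                        # unbound blood concentration held constant

def rhs(t, y):
    C_endo, C_brain = y
    efflux = Vmax * C_endo / (Km + C_endo)           # active efflux to blood
    dC_endo = (CL_pass * (C_blood - C_endo)          # luminal passive exchange
               + CL_pass * (C_brain - C_endo)        # abluminal passive exchange
               - efflux) / V_endo
    dC_brain = CL_pass * (C_endo - C_brain) / V_brain
    return [dC_endo, dC_brain]

sol = solve_ivp(rhs, (0, 2000), [0.0, 0.0], rtol=1e-8)
Kp_uu = sol.y[1, -1] / C_blood       # steady-state unbound brain/blood ratio
print(f"Kp,uu = {Kp_uu:.3f}")        # < 1 when efflux dominates passive uptake
```

With these placeholder numbers the active efflux keeps the unbound brain concentration well below the blood concentration, i.e. Kp,uu < 1, which is the kind of behaviour the theoretical simulations examine.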
Abstract:
The educational reform launched in Finland in 2008 concerns the implementation of the Special Education Strategy (Opetusministeriö 2007) under an improvement initiative called Kelpo. One of the main proposed alterations of the Strategy relates to the support system for comprehensive school pupils. The existing two-level model (general and special support) is to be replaced by a new three-level model (general, intensified and special support). There are 233 municipalities involved nationwide in the Kelpo initiative, each of which has a municipal coordinator as a national delegate. The Centre for Educational Assessment [the Centre] at the University of Helsinki, led by Professor Jarkko Hautamäki, carries out the developmental assessment of the initiative’s developmental process. As a part of that assessment the Centre interviewed 151 municipal coordinators in November 2008. This thesis considers the Kelpo initiative from the perspective of Michael Fullan’s change theory. The aim is to identify the change-theoretical factors in the speech of the municipal coordinators interviewed by the Centre, and to form a view of the crucial factors in the reform implementation process. The appearance of these change-theoretical factors in the coordinators’ speech, and the meaning of these appearances, are considered from the change-process point of view. The Centre collected the data by interviewing the municipal coordinators (n=151) in small groups of 4-11 people. The interview method was based on Vesala and Rantanen’s (2007) qualitative attitude survey method, which was adapted and further developed for the Centre’s developmental assessment by Hilasvuori. The method of analysis was a qualitative theory-based content analysis, processed using the Atlas.ti software. The theoretical frame of reference was grounded in Fullan’s change theory, and the analysis was based on three change-theoretical categories: implementation, cooperation and perspectives in the change process. The analysis of the interview data revealed spoken expressions in the coordinators’ speech which were either positively or negatively related to the theoretical categories. On the grounds of these change-theoretical relations, the existence of the change process was observed. The crucial factors of reform implementation were identified, and the conclusion is that the encounter of the new reform-based strategies with already existing strategies in schools produces interface challenges. These challenges are particularly confronted in the context of the implementation of the new three-level support model. The interface challenges are classified as follows: conceptual, method-based, action-based and belief-based challenges. Keywords: reform, implementation, change process, Michael Fullan, Kelpo, intensified support, special support
Abstract:
Colorectal cancer (CRC) is one of the most frequent malignancies in Western countries. Inherited factors have been suggested to be involved in 35% of CRCs. The hereditary CRC syndromes explain only ~6% of all CRCs, indicating that a large proportion of the inherited susceptibility is still unexplained. Much of the remaining genetic predisposition for CRC is probably due to undiscovered low-penetrance variations. This study was conducted to identify germline and somatic changes that contribute to CRC predisposition and tumorigenesis. MLH1 and MSH2, which underlie hereditary non-polyposis colorectal cancer (HNPCC), are considered to be tumor suppressor genes; the first hit is inherited in the germline and somatic inactivation of the wild-type allele is required for tumor initiation. In a recent study, frequent loss of the mutant allele in HNPCC tumors was detected and a new model, arguing against the two-hit hypothesis, was proposed for somatic HNPCC tumorigenesis. We tested this hypothesis by conducting LOH analysis on 25 colorectal HNPCC tumors with a known germline mutation in the MLH1 or MSH2 genes. LOH was detected in 56% of the tumors. All the losses targeted the wild-type allele, supporting the classical two-hit model for HNPCC tumorigenesis. The variants 3020insC, R702W and G908R in NOD2 predispose to Crohn's disease. The contribution of NOD2 to CRC predisposition has been examined in several case-control series, with conflicting results. We have previously shown that 3020insC does not predispose to CRC in Finnish CRC patients. To expand our previous study, the variants R702W and G908R were genotyped in a population-based series of 1042 Finnish CRC patients and 508 healthy controls. Association analyses did not show significant evidence for association of the variants with CRC. Single nucleotide polymorphism (SNP) rs6983267 at chromosome 8q24 was the first CRC susceptibility variant identified through genome-wide association studies. To characterize the role of rs6983267 in CRC predisposition in the Finnish population, we genotyped the SNP in the case-control material of 1042 cases and 1012 controls and showed that the G allele of rs6983267 is associated with an increased risk of CRC (OR 1.22; P=0.0018). Examination of allelic imbalance in the tumors heterozygous for rs6983267 revealed that copy number increase affected 22% of the tumors and, interestingly, it favored the G allele. By utilizing a computer algorithm, Enhancer Element Locator (EEL), an evolutionarily conserved regulatory motif containing rs6983267 was identified. The SNP affected the binding site of TCF4, a transcription factor that mediates Wnt signaling in cells and has proven to be crucial in colorectal neoplasia. The preferential binding of TCF4 to the risk allele G was shown in vitro and in vivo. The element drove lacZ marker gene expression in mouse embryos in a pattern that is consistent with genes regulated by the Wnt signaling pathway. These results suggest that rs6983267 at 8q24 exerts its effect in CRC predisposition by regulating gene expression. The most obvious target gene for the enhancer element is MYC, residing ~335 kb downstream; however, further studies are required to establish the transcriptional target(s) of the predicted enhancer element.
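For illustration, the kind of allelic case-control comparison behind a result such as OR 1.22 can be sketched as below. The allele counts are hypothetical (they are not the study's genotype data); only the procedure of forming a 2x2 table and computing an odds ratio with a significance test is shown.

```python
# Minimal sketch of an allelic association test for a SNP in a case-control
# series. The 2x2 allele counts below are hypothetical and only illustrate
# the computation, not the actual study data.
from scipy.stats import fisher_exact

# Hypothetical allele counts (risk allele G vs. T) in cases and controls
cases    = {"G": 1150, "T": 934}    # 1042 cases    -> 2084 alleles
controls = {"G": 1010, "T": 1014}   # 1012 controls -> 2024 alleles

table = [[cases["G"], cases["T"]],
         [controls["G"], controls["T"]]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"allelic OR = {odds_ratio:.2f}, P = {p_value:.4g}")
```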
Abstract:
The use of remote sensing imagery as auxiliary data in forest inventory is based on the correlation between features extracted from the images and the ground truth. Bidirectional reflectance and radial displacement cause variation in image features located in different segments of the image even though the forest characteristics remain the same. The variation has so far been diminished by different radiometric corrections. In this study the use of sun-azimuth-based converted image co-ordinates was examined to supplement auxiliary data extracted from digitised aerial photographs. The method was considered as an alternative to radiometric corrections. Additionally, the usefulness of multi-image interpretation of digitised aerial photographs in regression estimation of forest characteristics was studied. The state-owned study area was located in Leivonmäki, Central Finland, and the study material consisted of five digitised and ortho-rectified colour-infrared (CIR) aerial photographs and field measurements of 388 plots, of which 194 were relascope (Bitterlich) plots and 194 were concentric circular plots. Both the image data and the field measurements were from the year 1999. When examining the effect of the location of the image point on pixel values and texture features of Finnish forest plots in digitised CIR photographs, the clearest differences were found between front- and back-lighted image halves. Within an image half, the differences between blocks were clearly bigger on the front-lighted half than on the back-lighted half. The strength of the phenomenon varied by forest category. The differences between pixel values extracted from different image blocks were greatest in developed and mature stands and smallest in young stands. The differences between texture features were greatest in developing stands and smallest in young and mature stands. The logarithm of timber volume per hectare and the angular transformation of the proportion of broadleaved trees of the total volume were used as dependent variables in regression models. Five different trend surfaces based on converted image co-ordinates were used in the models in order to diminish the effect of the bidirectional reflectance. The reference model of total volume, in which the location of the image point had been ignored, resulted in an RMSE of 1.268 calculated from test material. The best of the trend surfaces was the complete third-order surface, which resulted in an RMSE of 1.107. The reference model of the proportion of broadleaved trees resulted in an RMSE of 0.4292, and the second-order trend surface was the best, resulting in an RMSE of 0.4270. The trend surface method is applicable, but it has to be applied by forest category and by variable. The usefulness of multi-image interpretation of digitised aerial photographs was studied by building comparable regression models using either the front-lighted image features, the back-lighted image features or both. The two-image model turned out to be slightly better than the one-image models in total volume estimation. The best one-image model resulted in an RMSE of 1.098 and the two-image model in an RMSE of 1.090. The homologous features did not improve the models of the proportion of broadleaved trees. The overall result gives motivation for further research on multi-image interpretation. The focus may be on improving regression estimation and feature selection, or on examining the stratification used in two-phase sampling inventory techniques.
Keywords: forest inventory, digitised aerial photograph, bidirectional reflectance, converted image coordinates, regression estimation, multi-image interpretation, pixel value, texture, trend surface
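As a concrete illustration of the trend-surface idea in the abstract above, the sketch below fits a regression of log volume per hectare on an image feature plus a complete third-order surface in converted image co-ordinates. The data are randomly generated stand-ins, not the Leivonmäki material, and all variable names and coefficients are invented for the example.

```python
# Minimal sketch (hypothetical data) of regression estimation of log(volume/ha)
# from an image feature plus a third-order trend surface in converted image
# co-ordinates (x, y), used to absorb bidirectional-reflectance variation
# instead of a radiometric correction.
import numpy as np

rng = np.random.default_rng(0)
n = 388
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)   # converted image coords
feature = rng.normal(0.0, 1.0, n)                     # e.g. mean NIR pixel value
log_vol = 5.0 + 0.6 * feature + 0.3 * x - 0.2 * x * y + rng.normal(0, 0.3, n)

# Design matrix: intercept, spectral feature, and complete 3rd-order surface
def third_order_surface(x, y):
    return np.column_stack([x, y, x * y, x**2, y**2,
                            x**3, y**3, x**2 * y, x * y**2])

X = np.column_stack([np.ones(n), feature, third_order_surface(x, y)])
coef, *_ = np.linalg.lstsq(X, log_vol, rcond=None)

pred = X @ coef
rmse = np.sqrt(np.mean((log_vol - pred) ** 2))
print(f"training RMSE of log(volume/ha): {rmse:.3f}")
```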
Abstract:
What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex such as simple and complex cells. We show that by representing images with phase invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
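The first modelling step mentioned above, learning linear simple-cell-like features with independent component analysis, can be sketched in a few lines. The snippet below uses synthetic noise in place of natural images so it runs stand-alone; with real natural-image patches the learned features become localized, oriented, Gabor-like filters.

```python
# Minimal sketch of estimating linear features from image patches with ICA.
# The "image" is synthetic noise so the snippet is self-contained; it only
# illustrates the patch-sampling and ICA estimation procedure.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
image = rng.normal(size=(256, 256))          # stand-in for a natural image
patch, n_patches = 16, 5000

# Sample 16x16 patches and remove the DC component of each
rows = rng.integers(0, 256 - patch, n_patches)
cols = rng.integers(0, 256 - patch, n_patches)
X = np.array([image[r:r + patch, c:c + patch].ravel() for r, c in zip(rows, cols)])
X -= X.mean(axis=1, keepdims=True)

ica = FastICA(n_components=64, random_state=0, max_iter=500)
S = ica.fit_transform(X)          # component activations per patch
filters = ica.components_         # rows: 64 learned linear features
print(filters.shape)              # (64, 256) -> each row reshapes to a 16x16 filter
```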
Abstract:
This licentiate thesis analyzes the macroeconomic effects of fiscal policy in a small open economy under a flexible exchange rate regime, assuming that the government spends exclusively on domestically produced goods. The motivation for this research comes from the observation that the literature on the new open economy macroeconomics (NOEM) has focused almost exclusively on two-country global models, and analyses of the effects of fiscal policy on small economies have been almost completely ignored. This thesis aims at filling this gap in the NOEM literature and illustrates how the macroeconomic effects of fiscal policy in a small open economy depend on the specification of preferences. The research method is to present two theoretical models that are extensions of the model contained in the Appendix to Obstfeld and Rogoff (1995). The first model analyzes the macroeconomic effects of fiscal policy, making use of a model that exploits the idea of modelling private and government consumption as substitutes in private utility. The model offers intuitive predictions on how the effects of fiscal policy depend on the marginal rate of substitution between private and government consumption. The findings illustrate that the higher the substitutability between private and government consumption, (i) the bigger is the crowding-out effect on private consumption and (ii) the smaller is the positive effect on output. The welfare analysis shows that the higher the marginal rate of substitution between private and government consumption, the less fiscal policy decreases welfare. The second model of this thesis studies how the macroeconomic effects of fiscal policy depend on the elasticity of substitution between traded and nontraded goods. This model reveals that this elasticity is a key variable in explaining the exchange rate, current account and output response to a permanent rise in government spending. Finally, the model demonstrates that temporary changes in government spending are an effective stabilization tool when used wisely and in a timely manner in response to undesired fluctuations in output. Undesired fluctuations in output can be perfectly offset by an opposite change in government spending without causing any side-effects.
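One stylised way to write preferences in which private and government consumption are substitutes is a CES composite entering period utility. The form below is illustrative only (α, β and σ are generic parameters) and is not claimed to be the exact specification used in the thesis.

```latex
% Illustrative CES composite of private consumption C and government
% consumption G entering period utility; sigma governs their substitutability.
\[
  \tilde{C}_t \;=\;
  \Bigl[\alpha\, C_t^{\frac{\sigma-1}{\sigma}}
        + (1-\alpha)\, G_t^{\frac{\sigma-1}{\sigma}}\Bigr]^{\frac{\sigma}{\sigma-1}},
  \qquad
  U_0 \;=\; \sum_{t=0}^{\infty} \beta^{t}\,\ln \tilde{C}_t .
\]
```

As σ grows, C and G become closer substitutes, so a rise in G crowds out C more strongly and adds less to output, in line with the qualitative findings summarised above.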
Abstract:
To a large extent, lakes can be described with a one-dimensional approach, as their main features can be characterized by the vertical temperature profile of the water. The development of the profiles during the year follows the seasonal climate variations. Depending on conditions, lakes become stratified during the warm summer. When the surface cools in autumn, overturn occurs, the water cools further and an ice cover forms. Typically, the water is inversely stratified under the ice, and another overturn occurs in spring after the ice has melted. Features of this circulation have been used in studies to distinguish between lakes in different areas, as a basis for observation systems and even as climate indicators. Numerical models can be used to calculate the temperature in the lake on the basis of the meteorological input at the surface. The simplest form is to solve the surface temperature. The depth of the lake affects heat transfer, together with other morphological features such as the shape and size of the lake. The surrounding landscape also affects the formation of the meteorological fields over the lake and the energy input. For small lakes, shading by the shores has an effect both over the lake and inside the water body, which brings limitations to the one-dimensional approach. A two-layer model gives an approximation of the basic stratification in the lake; a turbulence model can simulate the vertical temperature profile in a more detailed way. If the shape of the temperature profile is very abrupt, vertical transfer is hindered, which has many important consequences for lake biology. The one-dimensional modelling approach was successfully studied by comparing a one-layer model, a two-layer model and a turbulence model. The turbulence model was applied to lakes of different sizes, shapes and locations. Lake models need data from the lakes for model adjustment. The use of the meteorological input data on different scales was analysed, ranging from momentary turbulent changes over the lake to the use of synoptic data at three-hour intervals. Data covering about the past 100 years were used on the mesoscale at a range of about 100 km, and climate change scenarios were used for future changes. Increasing air temperature typically increases the water temperature in the epilimnion and decreases ice cover. Lake ice data were used for modelling different kinds of lakes, and they were also analyzed statistically in a global context. The results were also compared with the results of a hydrological watershed model and with data from very small lakes for seasonal development.
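A two-layer lake model of the kind compared above can be illustrated with a very small time-stepping sketch. Surface heat exchange is idealised as relaxation towards a seasonal air temperature and the layers are coupled by a constant exchange velocity; every number is a hypothetical placeholder chosen only to show the structure, not a calibrated value from the study.

```python
# Minimal sketch of a two-layer (epilimnion/hypolimnion) lake temperature model.
# Surface exchange is idealised as Newtonian relaxation towards an idealised
# seasonal air temperature; all parameter values are hypothetical.
import numpy as np

rho, cp = 1000.0, 4186.0           # water density (kg/m3) and heat capacity (J/kg/K)
h_epi, h_hypo = 5.0, 15.0          # layer thicknesses (m)
h_surf = 20.0                      # bulk surface exchange coefficient (W/m2/K)
k_exchange = 2e-6                  # interlayer exchange velocity (m/s)
dt = 3600.0                        # time step (s)

T_epi, T_hypo = 4.0, 4.0           # start from spring overturn (deg C)
epi_max = 0.0
for step in range(365 * 24):
    t_days = step * dt / 86400.0
    T_air = 5.0 + 15.0 * np.sin(2 * np.pi * (t_days - 110) / 365.0)  # idealised forcing
    q_surf = h_surf * (T_air - T_epi)                 # W/m2 into the upper layer
    mix = k_exchange * (T_epi - T_hypo)               # K m/s towards the lower layer
    T_epi += dt * (q_surf / (rho * cp) - mix) / h_epi
    T_hypo += dt * mix / h_hypo
    T_epi, T_hypo = max(T_epi, 0.0), max(T_hypo, 0.0)  # crude freezing cap
    epi_max = max(epi_max, T_epi)

print(f"simulated epilimnion maximum: {epi_max:.1f} deg C")
```

With the slow interlayer exchange the upper layer tracks the seasonal forcing while the lower layer warms only gradually, so a summer stratification emerges from the two-compartment structure alone.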
Abstract:
This research has been prompted by an interest in the atmospheric processes of hydrogen. The sources and sinks of hydrogen are important to know, particularly if hydrogen becomes more common as a replacement for fossil fuel in combustion. Hydrogen deposition velocities (vd) were estimated by applying chamber measurements, a radon tracer method and a two-dimensional model. These three approaches were compared with each other to discover the factors affecting the soil uptake rate. A static-closed chamber technique was introduced to determine the hydrogen deposition velocity values in an urban park in Helsinki and at a rural site at Loppi. A three-day chamber campaign to carry out soil uptake estimation was held at a remote site at Pallas in 2007 and 2008. The atmospheric mixing ratio of molecular hydrogen has also been measured by a continuous method in Helsinki in 2007 - 2008 and at Pallas from 2006 onwards. The mean vd values measured in the chamber experiments in Helsinki and Loppi were between 0.0 and 0.7 mm s-1. The ranges of the results with the radon tracer method and the two-dimensional model were 0.13 - 0.93 mm s-1 and 0.12 - 0.61 mm s-1, respectively, in Helsinki. The vd values in the three-day campaign at Pallas were 0.06 - 0.52 mm s-1 (chamber) and 0.18 - 0.52 mm s-1 (radon tracer method and two-dimensional model). At Kumpula, the radon tracer method and the chamber measurements produced higher vd values than the two-dimensional model. The results of all three methods were close to each other between November and April, except for the chamber results from January to March, when the soil was frozen. The hydrogen deposition velocity values of all three methods were compared with one-week cumulative rain sums. Precipitation increases the soil moisture, which decreases the soil uptake rate. The measurements made in snow seasons showed that a thick snow layer also hindered gas diffusion, lowering the vd values. The H2 vd values were compared to the snow depth, and a decaying exponential fit was obtained. During a prolonged drought in summer 2006, soil moisture values were lower than in the other summer months between 2005 and 2008; under these conditions, high chamber vd values were measured. The mixing ratio of molecular hydrogen has a seasonal variation. The lowest atmospheric mixing ratios were found in the late autumn, when high deposition velocity values were still being measured. The carbon monoxide (CO) mixing ratio was also measured. Hydrogen and carbon monoxide are highly correlated in an urban environment, due to the emissions originating from traffic. After correction for the soil deposition of H2, the slope was 0.49±0.07 ppb (H2) / ppb (CO). Using the corrected hydrogen-to-carbon-monoxide ratio, the total hydrogen load emitted by Helsinki traffic in 2007 was 261 t (H2) a-1. Hydrogen, methane and carbon monoxide are connected with each other through the atmospheric methane oxidation process, in which formaldehyde is produced as an important intermediate. The photochemical degradation of formaldehyde produces hydrogen and carbon monoxide as end products. Examination of back-trajectories revealed long-range transport of carbon monoxide and methane. The trajectories can be grouped by applying cluster and source analysis methods, so that natural and anthropogenic emission sources can be separated by analyzing trajectory clusters.
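The static-closed-chamber estimate of vd can be illustrated with a short sketch: soil uptake makes the H2 mixing ratio inside the chamber decay approximately exponentially, and the fitted decay constant multiplied by the chamber's volume-to-area ratio gives the deposition velocity. The time series below is synthetic and the chamber geometry is hypothetical; no thesis data are reproduced.

```python
# Minimal sketch of deriving a deposition velocity from a static closed chamber:
# the H2 mixing ratio decays roughly exponentially as the soil takes up H2, and
# vd = k * V / A, where k is the fitted decay constant and V/A the chamber's
# volume-to-area ratio. Synthetic data, hypothetical geometry.
import numpy as np

V_over_A = 0.20                      # chamber volume/area, i.e. height (m), hypothetical
t = np.arange(0, 600, 60)            # sampling times (s) over a 10-min closure

# Synthetic chamber H2 mixing ratios (ppb) decaying from ~530 ppb
true_vd = 0.4e-3                     # 0.4 mm/s, used only to generate the data
rng = np.random.default_rng(1)
c = 530.0 * np.exp(-(true_vd / V_over_A) * t) + rng.normal(0, 2, t.size)

# Log-linear fit of ln(C) against time gives the decay constant k
k = -np.polyfit(t, np.log(c), 1)[0]
vd = k * V_over_A
print(f"estimated vd = {vd * 1000:.2f} mm/s")
```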
Abstract:
Powders are essential materials in the pharmaceutical industry, being involved in the majority of all drug manufacturing. Powder flow and particle size are central particle properties addressed by means of particle engineering. The aim of the thesis was to gain knowledge on powder processing with restricted liquid addition, with a primary focus on particle coating and early granule growth. Furthermore, characterisation of such processes was performed. A thin coating layer of hydroxypropyl methylcellulose was applied on individual particles of ibuprofen in a fluidised bed top-spray process. The polymeric coating improved the flow properties of the powder. The improvement was strongly related to relative humidity, which can be seen as an indicator of a change in surface hydrophilicity caused by the coating. The ibuprofen used in the present study had a d50 of 40 μm and thus belongs to the Geldart group C powders, which can be considered challenging materials in top-spray coating processes. Ibuprofen was similarly coated using a novel ultrasound-assisted coating method. The results were in line with those obtained from powders coated in the fluidised bed process mentioned above. It was found that the ultrasound-assisted method was capable of coating single particles with a simple and robust setup. Granule growth in a fluidised bed process was inhibited by feeding the liquid in pulses. The results showed that the length of the pulsing cycles is of importance and can be used to adjust granule growth. Moreover, pulsed liquid feed was found to be of greater significance to granule growth at high inlet air relative humidity. Liquid feed pulsing can thus be used as a tool in particle size targeting in fluidised bed processes and in compensating for changes in the relative humidity of the inlet air. The nozzle function of a two-fluid external mixing pneumatic nozzle, typical of small-scale pharmaceutical fluidised bed processes, was studied in situ in an ongoing fluidised bed process with particle tracking velocimetry. It was found that the liquid droplets undergo coalescence as they proceed away from the nozzle head. The coalescence was expected to increase droplet speed, which was confirmed in the study. The spray turbulence was also studied, and the results showed turbulence caused by the atomisation event and by the oppositely directed fluidising air. It was concluded that particle tracking velocimetry is a suitable tool for in situ spray characterisation. Light transmission through dense particulate systems was found to carry information on particle size and packing density, as expected based on the theory of light scattering by solids. It was possible to differentiate binary blends consisting of components with differences in optical properties. Light transmission showed potential as a rapid, simple and inexpensive tool in the characterisation of particulate systems, giving information on changes in particle systems that could be utilised in basic process diagnostics.
Abstract:
The Internet has made possible the cost-effective dissemination of scientific journals in the form of electronic versions, usually in parallel with the printed versions. At the same time the electronic medium also makes possible totally new open access (OA) distribution models, funded by author charges, sponsorship, advertising, voluntary work, etc., where the end product is free in full text to the readers. Although more than 2,000 new OA journals have been founded in the last 15 years, the uptake of open access has been rather slow, with currently around 5% of all peer-reviewed articles published in OA journals. The slow growth can to a large extent be explained by the fact that open access has predominantly emerged via newly founded journals and startup publishers. Established journals and publishers have not had strong enough incentives to change their business models, and the commercial risks in doing so have been high. In this paper we outline and discuss two different scenarios for how scholarly publishers could change their operating model to open access. The first is based on an instantaneous change and the second on a gradual change. We propose a way to manage the gradual change by bundling traditional “big deal” licenses and author charges for opening access to individual articles.