13 results for two sector model
in Helda - Digital Repository of the University of Helsinki
Abstract:
The educational reform, launched in Finland in 2008, concerns the implementation of the Special Education Strategy (Opetusministeriö 2007) under an improvement initiative called Kelpo. One of the main alterations proposed by the Strategy relates to the support system for comprehensive school pupils: the existing two-level model (general and special support) is to be replaced by a new three-level model (general, intensified and special support). There are 233 municipalities involved nationwide in the Kelpo initiative, each of which has a municipal coordinator as a national delegate. The Centre for Educational Assessment [the Centre] at the University of Helsinki, led by Professor Jarkko Hautamäki, carries out the developmental assessment of the initiative. As part of that assessment the Centre interviewed 151 municipal coordinators in November 2008. This thesis considers the Kelpo initiative from the perspective of Michael Fullan’s change theory. The aim is to identify change-theoretical factors in the speech of the municipal coordinators interviewed by the Centre, and to form a view of the crucial factors in the reform implementation process. The appearance of these change-theoretical factors in the coordinators’ speech, and the significance of these appearances, are considered from the point of view of the change process. The Centre collected the data by interviewing the municipal coordinators (n=151) in small groups of 4-11 people. The interview method was based on Vesala and Rantanen’s (2007) qualitative attitude survey method, adapted and further developed for the Centre’s developmental assessment by Hilasvuori. The method of analysis was qualitative theory-based content analysis, carried out using the Atlas.ti software. The theoretical frame of reference was grounded in Fullan’s change theory, and the analysis was based on three change-theoretical categories: implementation, cooperation and perspectives in the change process. 
The analysis of the interview data revealed expressions in the coordinators’ speech that related either positively or negatively to the theoretical categories. On the basis of these change-theoretical relations, the existence of a change process was observed. The crucial factors of reform implementation were identified, and the conclusion is that the encounter between the new reform-based strategies and those already existing in schools produces interface challenges. These challenges are confronted particularly in the context of the implementation of the new three-level support model. The interface challenges are classified as follows: conceptual, method-based, action-based and belief-based challenges. Keywords: reform, implementation, change process, Michael Fullan, Kelpo, intensified support, special support
Abstract:
Colorectal cancer (CRC) is one of the most frequent malignancies in Western countries. Inherited factors have been suggested to be involved in 35% of CRCs. The hereditary CRC syndromes explain only ~6% of all CRCs, indicating that a large proportion of the inherited susceptibility is still unexplained. Much of the remaining genetic predisposition to CRC is probably due to undiscovered low-penetrance variants. This study was conducted to identify germline and somatic changes that contribute to CRC predisposition and tumorigenesis. MLH1 and MSH2, which underlie hereditary non-polyposis colorectal cancer (HNPCC), are considered to be tumor suppressor genes: the first hit is inherited in the germline, and somatic inactivation of the wild-type allele is required for tumor initiation. In a recent study, frequent loss of the mutant allele was detected in HNPCC tumors, and a new model of somatic HNPCC tumorigenesis, arguing against the two-hit hypothesis, was proposed. We tested this hypothesis by conducting LOH analysis on 25 colorectal HNPCC tumors with a known germline mutation in the MLH1 or MSH2 genes. LOH was detected in 56% of the tumors. All the losses targeted the wild-type allele, supporting the classical two-hit model of HNPCC tumorigenesis. The variants 3020insC, R702W and G908R in NOD2 predispose to Crohn’s disease. The contribution of NOD2 to CRC predisposition has been examined in several case-control series, with conflicting results. We have previously shown that 3020insC does not predispose to CRC in Finnish CRC patients. To expand our previous study, the variants R702W and G908R were genotyped in a population-based series of 1042 Finnish CRC patients and 508 healthy controls. Association analyses did not show significant evidence for association of the variants with CRC. Single nucleotide polymorphism (SNP) rs6983267 at chromosome 8q24 was the first CRC susceptibility variant identified through genome-wide association studies. 
To characterize the role of rs6983267 in CRC predisposition in the Finnish population, we genotyped the SNP in case-control material of 1042 cases and 1012 controls and showed that the G allele of rs6983267 is associated with an increased risk of CRC (OR 1.22; P=0.0018). Examination of allelic imbalance in the tumors heterozygous for rs6983267 revealed that copy number increase affected 22% of the tumors and, interestingly, favored the G allele. Using a computer algorithm, Enhancer Element Locator (EEL), an evolutionarily conserved regulatory motif containing rs6983267 was identified. The SNP affects the binding site of TCF4, a transcription factor that mediates Wnt signaling in cells and has proven to be crucial in colorectal neoplasia. The preferential binding of TCF4 to the risk allele G was shown in vitro and in vivo. The element drove lacZ marker gene expression in mouse embryos in a pattern consistent with genes regulated by the Wnt signaling pathway. These results suggest that rs6983267 at 8q24 exerts its effect on CRC predisposition by regulating gene expression. The most obvious target gene for the enhancer element is MYC, residing ~335 kb downstream; however, further studies are required to establish the transcriptional target(s) of the predicted enhancer element.
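The case-control association analysis described above rests on a simple comparison of allele counts. A minimal sketch, with hypothetical genotype counts (not the study’s actual data), might look like this:

```python
# Allelic case-control association test, as used for variants such as
# rs6983267. The counts below are hypothetical, for illustration only.
from scipy.stats import chi2_contingency

def allelic_test(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Odds ratio and chi-square p-value for a 2x2 allele-count table."""
    odds_ratio = (case_alt * ctrl_ref) / (case_ref * ctrl_alt)
    _, p_value, _, _ = chi2_contingency(
        [[case_alt, case_ref], [ctrl_alt, ctrl_ref]]
    )
    return odds_ratio, p_value

# Hypothetical counts of the risk allele (G) and the reference allele (T)
or_, p = allelic_test(1150, 934, 1010, 1014)
print(f"OR = {or_:.2f}, p = {p:.4f}")
```

In practice such analyses also adjust for multiple testing and verify Hardy-Weinberg equilibrium in the controls; the sketch shows only the core test.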
Abstract:
The use of remote sensing imagery as auxiliary data in forest inventory is based on the correlation between features extracted from the images and the ground truth. Bidirectional reflectance and radial displacement cause variation in image features located in different segments of the image even when the forest characteristics remain the same. This variation has so far been reduced by various radiometric corrections. In this study, the use of sun-azimuth-based converted image co-ordinates was examined to supplement auxiliary data extracted from digitised aerial photographs. The method was considered as an alternative to radiometric corrections. Additionally, the usefulness of multi-image interpretation of digitised aerial photographs in regression estimation of forest characteristics was studied. The state-owned study area was located in Leivonmäki, Central Finland, and the study material consisted of five digitised and ortho-rectified colour-infrared (CIR) aerial photographs and field measurements of 388 plots, of which 194 were relascope (Bitterlich) plots and 194 were concentric circular plots. Both the image data and the field measurements were from the year 1999. When examining the effect of the location of the image point on the pixel values and texture features of Finnish forest plots in digitised CIR photographs, the clearest differences were found between the front- and back-lighted image halves. Within an image half, the differences between blocks were clearly larger on the front-lighted half than on the back-lighted half. The strength of the phenomenon varied by forest category. The differences between pixel values extracted from different image blocks were greatest in developed and mature stands and smallest in young stands. The differences between texture features were greatest in developing stands and smallest in young and mature stands. 
The logarithm of timber volume per hectare and the angular transformation of the proportion of broadleaved trees of the total volume were used as dependent variables in regression models. Five different trend surfaces based on converted image co-ordinates were used in the models in order to diminish the effect of bidirectional reflectance. The reference model of total volume, in which the location of the image point was ignored, resulted in an RMSE of 1.268 calculated from the test material. The best of the trend surfaces was the complete third-order surface, which resulted in an RMSE of 1.107. The reference model of the proportion of broadleaved trees resulted in an RMSE of 0.4292; here the second-order trend surface was the best, resulting in an RMSE of 0.4270. The trend surface method is applicable, but it has to be applied by forest category and by variable. The usefulness of multi-image interpretation of digitised aerial photographs was studied by building comparable regression models using the front-lighted image features, the back-lighted image features, or both. The two-image model turned out to be slightly better than the one-image models in total volume estimation: the best one-image model resulted in an RMSE of 1.098 and the two-image model in an RMSE of 1.090. The homologous features did not improve the models of the proportion of broadleaved trees. The overall result motivates further research on multi-image interpretation, which may focus on improving regression estimation and feature selection, or on the stratification used in two-phase sampling inventory techniques. Keywords: forest inventory, digitised aerial photograph, bidirectional reflectance, converted image coordinates, regression estimation, multi-image interpretation, pixel value, texture, trend surface
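The trend-surface idea above can be sketched numerically: fit a complete third-order polynomial surface in the converted image co-ordinates and subtract it from the extracted feature. The data and coefficients below are synthetic, chosen only to illustrate the mechanics, and are not taken from the study.

```python
# Fitting a complete third-order trend surface z(x, y) by least squares
# to absorb location-dependent (bidirectional-reflectance) variation.
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(-1.0, 1.0, size=(2, 200))   # converted image co-ordinates
trend = 0.5 * x - 0.3 * x * y + 0.2 * y**2     # simulated reflectance trend
z = trend + rng.normal(0.0, 0.05, size=200)    # noisy image feature

# Design matrix with all monomials x^i * y^j, i + j <= 3 (10 terms)
A = np.column_stack([x**i * y**j for i in range(4) for j in range(4 - i)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
residual = z - A @ coef                         # detrended feature values
print(f"std before: {z.std():.3f}, after: {residual.std():.3f}")
```

The detrended residuals, rather than the raw feature values, would then enter the volume or species-proportion regression, which is what allows the surface to stand in for a radiometric correction.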
Abstract:
What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d’être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex such as simple and complex cells. We show that by representing images with phase invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. 
By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
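The first modelling step mentioned above, PCA whitening followed by independent component analysis, can be sketched with plain NumPy. Since natural image patches are not available here, sparse (super-Gaussian) synthetic sources stand in for them; the dimensions and iteration count are arbitrary choices for the sketch, not the thesis’s.

```python
# PCA whitening + symmetric FastICA (tanh nonlinearity) on synthetic
# "patches". With real natural-image patches, the rows of `filters`
# come out as localized, oriented, simple-cell-like receptive fields.
import numpy as np

rng = np.random.default_rng(1)
n, dim, k = 2000, 64, 16                  # patches, patch size, components
S = rng.laplace(size=(n, k))              # sparse independent sources
X = S @ rng.normal(size=(k, dim))         # linear image model x = A s

# PCA whitening down to k dimensions
X = X - X.mean(axis=0)
vals, vecs = np.linalg.eigh(X.T @ X / n)
top = np.argsort(vals)[::-1][:k]
whiten = vecs[:, top] / np.sqrt(vals[top])
Z = X @ whiten                            # whitened data, shape (n, k)

# Symmetric FastICA fixed-point iterations
W = rng.normal(size=(k, k))
for _ in range(200):
    G = np.tanh(Z @ W.T)
    W = G.T @ Z / n - np.diag((1 - G**2).mean(axis=0)) @ W
    U, _, Vt = np.linalg.svd(W)           # symmetric decorrelation
    W = U @ Vt

filters = W @ whiten.T                    # learned filters in patch space
Y = Z @ W.T                               # recovered source activations
kurt = (Y**4).mean() - 3                  # excess kurtosis > 0 for sparse codes
print(filters.shape, round(kurt, 2))
```

The complex-cell (two-layer) models in the thesis go beyond this linear stage by pooling squared outputs of such filters; this sketch covers only the simple-cell layer.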
Abstract:
This licentiate thesis analyzes the macroeconomic effects of fiscal policy in a small open economy under a flexible exchange rate regime, assuming that the government spends exclusively on domestically produced goods. The motivation for this research comes from the observation that the literature on the new open economy macroeconomics (NOEM) has focused almost exclusively on two-country global models, while analyses of the effects of fiscal policy in small economies have been almost completely neglected. This thesis aims to fill this gap in the NOEM literature and illustrates how the macroeconomic effects of fiscal policy in a small open economy depend on the specification of preferences. The research method is to present two theoretical models that are extensions of the model contained in the Appendix to Obstfeld and Rogoff (1995). The first model analyzes the macroeconomic effects of fiscal policy using a model that exploits the idea of treating private and government consumption as substitutes in private utility. The model offers intuitive predictions on how the effects of fiscal policy depend on the marginal rate of substitution between private and government consumption. The findings illustrate that the higher the substitutability between private and government consumption, (i) the bigger the crowding-out effect on private consumption and (ii) the smaller the positive effect on output. The welfare analysis shows that the higher the marginal rate of substitution between private and government consumption, the less fiscal policy decreases welfare. The second model of this thesis studies how the macroeconomic effects of fiscal policy depend on the elasticity of substitution between traded and nontraded goods. This model reveals that this elasticity is a key variable in explaining the exchange rate, current account and output responses to a permanent rise in government spending. 
Finally, the model demonstrates that temporary changes in government spending are an effective stabilization tool when used wisely and in a timely manner in response to undesired fluctuations in output. Undesired fluctuations in output can be perfectly offset by an opposite change in government spending without causing any side-effects.
Abstract:
To a large extent, lakes can be described with a one-dimensional approach, as their main features can be characterized by the vertical temperature profile of the water. The development of the profiles during the year follows the seasonal climate variations. Depending on conditions, lakes become stratified during the warm summer. In autumn, overturn occurs as the water cools, and an ice cover forms. Typically, the water is inversely stratified under the ice, and another overturn occurs in spring after the ice has melted. Features of this circulation have been used in studies to distinguish between lakes in different areas, as a basis for observation systems, and even as climate indicators. Numerical models can be used to calculate the temperature in a lake on the basis of the meteorological input at the surface. The simplest form is to solve for the surface temperature only. The depth of the lake affects heat transfer, together with other morphological features such as the shape and size of the lake. The surrounding landscape also affects the formation of the meteorological fields over the lake and the energy input. For small lakes, shading by the shores has an effect both over the lake and inside the water body, which limits the one-dimensional approach. A two-layer model gives an approximation of the basic stratification in the lake, while a turbulence model can simulate the vertical temperature profile in a more detailed way. If the shape of the temperature profile is very abrupt, vertical transfer is hindered, with many important consequences for lake biology. The one-dimensional modelling approach was studied by comparing a one-layer model, a two-layer model and a turbulence model. The turbulence model was applied to lakes with different sizes, shapes and locations. Lake models need data from the lakes for model adjustment. 
The use of meteorological input data on different scales was analysed, ranging from momentary turbulent changes over the lake to synoptic data at three-hour intervals. Data from about the past 100 years were used on the mesoscale, at a range of about 100 km, together with climate change scenarios for future changes. Increasing air temperature typically increases the water temperature in the epilimnion and decreases ice cover. Lake ice data were used for modelling different kinds of lakes and were also analyzed statistically in a global context. The results were also compared with the results of a hydrological watershed model and with data on the seasonal development of very small lakes.
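As an illustration of the kind of two-layer approximation compared above, a minimal epilimnion/hypolimnion heat balance driven by a seasonal air-temperature cycle can be written in a few lines. All forcing values, layer depths and exchange rates below are invented for the sketch and are not taken from the study; ice cover and convective overturn physics are omitted entirely.

```python
# Minimal two-layer lake model: the epilimnion relaxes toward the air
# temperature, and heat is exchanged slowly with the hypolimnion.
import math

def two_layer_lake(days=365):
    t_epi = t_hyp = 4.0            # start from spring overturn (4 C)
    h_epi, h_hyp = 5.0, 15.0       # layer thicknesses (m), assumed
    r_surf = 0.1                   # air-epilimnion relaxation (1/day), assumed
    r_mix = 0.2                    # inter-layer exchange (m/day), assumed
    series = []
    for d in range(days):
        t_air = 4.0 + 14.0 * math.sin(2 * math.pi * d / 365.0)
        t_epi += r_surf * (t_air - t_epi)
        q = r_mix * (t_epi - t_hyp)          # heat flux (C * m / day)
        t_epi -= q / h_epi                   # heat leaves the thin top layer
        t_hyp += q / h_hyp                   # ... and warms the thick bottom
        series.append((t_epi, t_hyp))
    return series

series = two_layer_lake()
print(f"max stratification: {max(e - h for e, h in series):.1f} C")
```

A turbulence model replaces the single exchange rate `r_mix` with a depth-resolved eddy diffusivity, which is what allows abrupt thermocline shapes to emerge instead of being imposed by the two-box structure.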
Abstract:
This research has been prompted by an interest in the atmospheric processes of hydrogen. The sources and sinks of hydrogen are important to know, particularly if hydrogen becomes more common as a replacement for fossil fuels in combustion. Hydrogen deposition velocities (vd) were estimated by applying chamber measurements, a radon tracer method and a two-dimensional model. These three approaches were compared with each other to discover the factors affecting the soil uptake rate. A static closed-chamber technique was introduced to determine hydrogen deposition velocity values in an urban park in Helsinki and at a rural site at Loppi. A three-day chamber campaign for soil uptake estimation was held at a remote site at Pallas in 2007 and 2008. The atmospheric mixing ratio of molecular hydrogen was also measured by a continuous method in Helsinki in 2007-2008 and at Pallas from 2006 onwards. The mean vd values measured in the chamber experiments in Helsinki and Loppi were between 0.0 and 0.7 mm s⁻¹. The ranges of the results with the radon tracer method and the two-dimensional model in Helsinki were 0.13-0.93 mm s⁻¹ and 0.12-0.61 mm s⁻¹, respectively. The vd values in the three-day campaign at Pallas were 0.06-0.52 mm s⁻¹ (chamber) and 0.18-0.52 mm s⁻¹ (radon tracer method and two-dimensional model). At Kumpula, the radon tracer method and the chamber measurements produced higher vd values than the two-dimensional model. The results of all three methods were close to each other between November and April, except for the chamber results from January to March, when the soil was frozen. The hydrogen deposition velocity values of all three methods were compared with one-week cumulative rain sums: precipitation increases the soil moisture, which decreases the soil uptake rate. The measurements made in snow seasons showed that a thick snow layer also hindered gas diffusion, lowering the vd values. 
The H2 vd values were compared to the snow depth, and a decaying exponential fit was obtained as a result. During a prolonged drought in summer 2006, soil moisture values were lower than in the other summer months between 2005 and 2008, and correspondingly high chamber vd values were measured. The mixing ratio of molecular hydrogen has a seasonal variation: the lowest atmospheric mixing ratios were found in the late autumn, when high deposition velocity values were still being measured. The carbon monoxide (CO) mixing ratio was also measured. Hydrogen and carbon monoxide are highly correlated in an urban environment, owing to emissions originating from traffic. After correction for the soil deposition of H2, the slope was 0.49±0.07 ppb (H2) / ppb (CO). Using the corrected hydrogen-to-carbon-monoxide ratio, the total hydrogen load emitted by Helsinki traffic in 2007 was 261 t (H2) a⁻¹. Hydrogen, methane and carbon monoxide are connected with each other through the atmospheric methane oxidation process, in which formaldehyde is produced as an important intermediate; the photochemical degradation of formaldehyde produces hydrogen and carbon monoxide as end products. Examination of back-trajectories revealed long-range transport of carbon monoxide and methane. The trajectories can be grouped by applying cluster and source analysis methods, and thus natural and anthropogenic emission sources can be separated by analyzing trajectory clusters.
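The decaying-exponential relation between vd and snow depth can be sketched as a simple curve fit. The data points and recovered constants below are synthetic, chosen only to fall within the measured vd range, and are not the study’s measurements.

```python
# Fitting vd = v0 * exp(-k * depth) to synthetic deposition-velocity
# data, mimicking the snow-depth relation reported above.
import numpy as np
from scipy.optimize import curve_fit

def vd_model(depth_cm, v0, k):
    return v0 * np.exp(-k * depth_cm)

depth = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 80.0])   # snow depth (cm)
rng = np.random.default_rng(2)
vd = vd_model(depth, 0.5, 0.03) + rng.normal(0.0, 0.01, depth.size)

(v0, k), _ = curve_fit(vd_model, depth, vd, p0=(0.5, 0.01))
print(f"vd ~ {v0:.2f} mm/s * exp(-{k:.3f} * depth_cm)")
```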
Abstract:
The blood-brain barrier protects the brain from foreign substances in the bloodstream. In vivo and in vitro methods for studying the blood-brain barrier have been widely reported in the literature, but only a few computational models describing the pharmacokinetics of compounds in the brain have been presented. In this study, a data set of blood-brain barrier permeability coefficients determined with different in vitro and in vivo methods was collected from the literature. In addition, two pharmacokinetic computer models of the blood-brain barrier were built: a microdialysis model and an efflux model. The microdialysis model is a simple pharmacokinetic model consisting of two compartments (blood and brain). It was used to simulate the concentrations of five compounds in rat brain and blood on the basis of parameters determined in vivo. The model did not reproduce the in vivo concentration curves exactly, owing to simplifications in its structure, such as the omission of a brain-tissue compartment and of transporter-protein kinetics. The efflux model has three compartments: blood, the endothelial-cell compartment of the blood-brain barrier, and brain. It was used in theoretical simulations to study the significance of an active efflux protein located on the luminal membrane of the blood-brain barrier, and of passive permeation, for compound concentrations in the brain extracellular fluid. The parameter of interest was the steady-state ratio of unbound compound concentrations between brain and blood (Kp,uu). The results showed the effect of the efflux protein on concentrations in accordance with Michaelis-Menten kinetics. The efflux model is well suited to theoretical simulations, and active transporters can be added to the model. Theoretical simulations make it possible to combine results from in vitro and in vivo studies and to examine their component factors within a single simulation.
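As a sketch of the efflux-model idea above, consider a compound entering the brain by passive permeation and being removed both passively and by a Michaelis-Menten efflux transporter; at steady state this yields Kp,uu below 1. All rate constants here are invented for illustration and do not correspond to the study’s compounds.

```python
# Minimal blood-brain model with Michaelis-Menten efflux at the barrier.
# Integrates the brain concentration to steady state and reports Kp,uu,
# the unbound brain-to-blood concentration ratio.
def kp_uu(c_blood=1.0, cl_passive=0.1, vmax=0.05, km=0.5,
          v_brain=1.0, dt=0.01, t_end=500.0):
    c_brain = 0.0
    for _ in range(int(t_end / dt)):
        influx = cl_passive * c_blood                   # passive influx
        passive_out = cl_passive * c_brain              # passive efflux
        active_out = vmax * c_brain / (km + c_brain)    # M-M active efflux
        c_brain += dt * (influx - passive_out - active_out) / v_brain
    return c_brain / c_blood

ratio = kp_uu()
print(f"Kp,uu = {ratio:.2f}")   # active efflux keeps the ratio below 1
```

Setting `vmax=0` removes the transporter and drives Kp,uu back to 1, which is the simplest sanity check on such a simulation.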
Abstract:
In this study, it is argued that the view on alliance creation presented in the current academic literature is limited, and that a learning approach helps to explain the dynamic nature of alliance creation. The cases in this study suggest that a wealth of inefficiency elements can be found in alliance creation. These elements can further be divided into categories, which help explain the dynamics of alliance creation. The categories – combined with two models brought forward by the study – suggest that inefficiency can be avoided through learning during the creation process. Some elements are especially central to this argumentation: first, the elements related to the clarity and acceptance of the company’s strategy, the potential lack of an alliance strategy, and the elements related to changes in the strategic context; second, the elements related to the length of the alliance creation process and the problems a long process entails. It is further suggested that the inefficiency elements may create a situation where the alliance creation process is – sequentially and successfully – followed to the end, but where the results are not aligned with the strategic intent. The proposed solution is to monitor and assess the risk of inefficiency elements during the alliance creation process. The learning which occurs during the alliance creation process as a result of this monitoring can then lead to realignments in the process. This study proposes a model to mitigate the risk related to the inefficiencies. The model emphasizes creating an understanding of the other alliance partner’s business, creating a shared vision, using pilot cooperation and building trust within the process. An analytical approach to assessing the benefits of trust is also central in this view. 
The alliance creation approach suggested by this study, which emphasizes trust and pilot cooperation, is further critically reviewed against contracting as a way to create alliances.
Abstract:
We study effective models of chiral fields and the Polyakov loop expected to describe the dynamics responsible for the phase structure of two-flavor QCD at finite temperature and density. We describe the chiral sector using either the linear sigma model or the Nambu-Jona-Lasinio model, study the phase diagram, and determine the location of the critical point as a function of the explicit chiral symmetry breaking (i.e. the bare quark mass $m_q$). We also discuss the possible emergence of the quarkyonic phase in this model.
Abstract:
The Internet has made possible the cost-effective dissemination of scientific journals in the form of electronic versions, usually in parallel with the printed versions. At the same time the electronic medium also makes possible totally new open access (OA) distribution models, funded by author charges, sponsorship, advertising, voluntary work, etc., where the end product is free in full text to the readers. Although more than 2,000 new OA journals have been founded in the last 15 years, the uptake of open access has been rather slow, with currently around 5% of all peer-reviewed articles published in OA journals. The slow growth can to a large extent be explained by the fact that open access has predominantly emerged via newly founded journals and startup publishers. Established journals and publishers have not had strong enough incentives to change their business models, and the commercial risks in doing so have been high. In this paper we outline and discuss two different scenarios for how scholarly publishers could change their operating model to open access. The first is based on an instantaneous change and the second on a gradual change. We propose a way to manage the gradual change by bundling traditional “big deal” licenses and author charges for opening access to individual articles.
Abstract:
The open development model of software production has been characterized as the future model of knowledge production and distributed work. The open development model refers to publicly available source code ensured by an open source license, and to the extensive and varied distributed participation of volunteers enabled by the Internet. Contemporary spokesmen of open source communities and academics view open source development as a new form of volunteer work activity characterized by a “hacker ethic” and “bazaar governance”. The development of the Linux operating system is perhaps the best known example of such an open source project. It started as an effort by a user-developer and grew quickly into a large project with hundreds of user-developers as contributors. However, in “hybrids”, in which firms participate in open source projects oriented towards end-users, it seems that most users do not write code. The OpenOffice.org project, initiated by Sun Microsystems, represents such a project in this study. In addition, Finnish public sector ICT decision-making concerning open source use is studied. The purpose is to explore the assumptions, theories and myths related to the open development model by analysing the discursive construction of the OpenOffice.org community: its developers, users and management. The qualitative study aims at shedding light on the dynamics and challenges of community construction and maintenance, and the related power relations in hybrid open source, by asking two main research questions: How are the structure and membership constellation of the community, specifically the relation between developers and users, linguistically constructed in hybrid open development? What characterizes Internet-mediated virtual communities, how can they be defined, and how do they differ from hierarchical forms of knowledge production on the one hand and from traditional volunteer communities on the other? 
The study utilizes sociological, psychological and anthropological concepts of community for understanding the connection between the real and the imaginary in so-called virtual open source communities. Intermediary methodological and analytical concepts are borrowed from discourse and rhetorical theories, and a discursive-rhetorical approach is offered as a methodological toolkit for studying texts and writing in Internet communities. The empirical chapters approach the problem of community and its membership from four complementary points of view. The data comprise mailing list discussions, personal interviews, web page writings, email exchanges, field notes and other historical documents. The four viewpoints are: 1) the community as conceived by volunteers; 2) the individual contributor’s attachment to the project; 3) public sector organizations as users of open source; 4) the community as articulated by the community manager. I arrive at four conclusions concerning my empirical studies (1-4) and two general conclusions (5-6). 1) Sun Microsystems and the OpenOffice.org Groupware volunteers failed to develop the necessary and sufficient open code and open dialogue to ensure collaboration, thus splitting the Groupware community into “we” (the volunteers) and “them” (the firm). 2) Instead of separating intrinsic and extrinsic motivations, I find that volunteers’ unique patterns of motivation are tied to changing objects and personal histories prior to and during participation in the OpenOffice.org Lingucomponent project. Rather than seeing the volunteers as a unified community, they can be better understood as independent entrepreneurs in search of a “collaborative community”. The boundaries between work and hobby are blurred and shifting, thus questioning the usefulness of the concept of “volunteer”. 
3) The public sector ICT discourse portrays a dilemma and tension between the freedom to choose, use and develop one’s desktop in the spirit of open source on the one hand, and the striving for better desktop control and maintenance by IT staff and user advocates on the other. The link between the global OpenOffice.org community and local end-user practices is weak and mediated by the problematic relationship between IT staff and (end-)users. 4) Authoring the community can be seen as a new type of managerial practice in hybrid open source communities. The ambiguous concept of community is a powerful strategic tool for orienting towards multiple real and imaginary audiences, as evidenced in the global membership rhetoric. 5) The changing and contradictory discourses of this study show a change in the conceptual system and developer-user relationship of the open development model. This change is characterized as a movement from “hacker ethic” and “bazaar governance” to a more professionally and strategically regulated community. 6) The community is simultaneously real and imagined, and can be characterized as a “runaway community”. Discursive action can be seen as a specific type of online open source engagement: hierarchies and structures are created through discursive acts. Key words: Open Source Software, open development model, community, motivation, discourse, rhetoric, developer, user, end-user