36 results for inductive inference

in Helda - Digital Repository of the University of Helsinki


Relevance: 60.00%

Abstract:

The aim of this research was to study how European churches contributed to the shaping of the Constitutional Treaty during the work of the Convention on the Future of Europe through the public discussion forum established by the Convention for this specific purpose in 2002-2003. In particular, this study sought to uncover the areas of interest brought up by the churches in their contributions, the objectives they pursued, and the approaches and arguments they employed to reach those objectives. The data for this study comprised all official submissions by European churches and church alliances to the Forum, totalling 21 contributions. A central criterion for inclusion was that the contributing organization could reasonably be assumed to represent the official position of one or more Christian churches within the European Union before the 2004 enlargement. The contributing churches and organizations represent the vast majority of Christians in Europe. The data was analyzed primarily using qualitative content analysis. The research approach was a combination of abductive and inductive inference. Based on the analysis, a two-fold theoretical framework was adopted, focusing on theories of public religion, secularization and deprivatization of religion, and of legitimation and collective identity. The main areas of interest found in the contributions of the churches were the value foundation of the European Union, which they demand should coherently permeate all policies and actions of the EU, and the social dimension of Europe, which they argue must be given equal status with the political and economic dimensions. In both areas the churches claim significant experience and expertise, which they want to see recognized in the Constitutional Treaty through a formally guaranteed status for churches and religious communities in the EU.
In their contributions the churches show a strong determination to secure a significant role for both religion and religious communities in the public life of Europe. As for the role of religion, they point to its potential as a motivating and cohesive force in society and as a building block for a collective European identity, which is still missing. The churches also pursue a substantial public role for themselves beyond the spiritual dimension, extending into the secular areas of the social, political and economic dimensions. The arguments in support of such a role are rooted in their interest and expertise in spiritual and other fundamental values and in their broad involvement in providing social services. In this context the churches use expressions inclusive of all religions and convictions, albeit clearly advocating the primacy of Europe's Christian heritage. Based on their historical role, their social involvement and their spiritual mission, they use the public debate on the Constitutional Treaty to gain formal legitimacy for the public status of religion and religious communities, both nationally and at the European level, through appropriate provisions in the constitutional text. In return they offer the European Union ways of improving its own legitimacy by reducing the EU's democratic and ideological deficit and by advancing the development of a collective European identity.

Relevance: 20.00%

Abstract:

Constructive (intuitionist, anti-realist) semantics has thus far been lacking an adequate concept of truth in infinity concerning factual (i.e., empirical, non-mathematical) sentences. One consequence of this problem is the difficulty of incorporating inductive reasoning in constructive semantics. It is not possible to formulate a notion of probable truth in infinity if there is no adequate notion of what truth in infinity is. One needs a notion of a constructive possible world based on sensory experience. Moreover, a constructive probability measure must be defined over these constructively possible empirical worlds. This study defines a particular approach to the concept of truth in infinity for Rudolf Carnap's inductive logic. The new approach is based on truth in consecutive finite domains of individuals, a concept which will be given a constructive interpretation. What can be verifiably said about an empirical statement with respect to this concept of truth will be explained; for this purpose a constructive notion of epistemic probability will be introduced. A further aim of this study is to improve Carnap's inductive logic. The study addresses the problem of justifying the use of an "inductivist" method in Carnap's lambda-continuum. A correction rule for adjusting the inductive method itself in the course of obtaining evidence will be introduced. Together with the constructive interpretation of probability, the correction rule yields positive prior probabilities for universal generalizations in infinite domains.
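For orientation, the predictive rule of Carnap's lambda-continuum mentioned above can be sketched as follows; this is the standard textbook formulation, not code from the thesis, and the numbers are purely illustrative:

```python
def carnap_predictive(n_j, n, k, lam):
    """Carnap's lambda-continuum of inductive methods: the probability
    that the next observed individual is of kind j, given that n_j of
    the n individuals observed so far were of kind j, with k possible
    kinds and inductive parameter lambda (lam)."""
    return (n_j + lam / k) / (n + lam)

# With lam = k this reduces to Laplace's rule of succession; a large
# lam stays close to the prior 1/k, a small lam follows the data.
p = carnap_predictive(n_j=8, n=10, k=2, lam=2)  # (8 + 1) / 12 = 0.75
```

A correction rule of the kind the study introduces would adjust `lam` itself as evidence accumulates, rather than keeping it fixed.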

Relevance: 20.00%

Abstract:

Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where the parametric assumptions of standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that the same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
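The remark above that maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode under flat priors can be made concrete with a minimal sketch; the binomial setting and all names here are illustrative, not taken from the articles:

```python
# With a flat prior, the posterior mode for a binomial model equals the
# MLE p_hat = s/n, and the curvature (observed information) of the
# log-likelihood at the mode gives the variance of the approximating
# Gaussian. Here this recovers the familiar p_hat * (1 - p_hat) / n.
def laplace_approx_binomial(s, n):
    p_hat = s / n
    # observed information: -(d^2/dp^2) log L(p) evaluated at p_hat
    info = s / p_hat**2 + (n - s) / (1 - p_hat)**2
    return p_hat, 1.0 / info

mean, var = laplace_approx_binomial(s=30, n=100)  # 0.3, 0.0021
```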

Relevance: 20.00%

Abstract:

Genetics, the science of heredity and variation in living organisms, has a central role in medicine, in breeding crops and livestock, and in studying fundamental topics of the biological sciences such as evolution and cell functioning. The field of genetics is currently developing rapidly because of recent advances in the technologies by which molecular data can be obtained from living organisms. To extract the most information from such data, the analyses need to be carried out using statistical models that are tailored to take account of the particular genetic processes. In this thesis we formulate and analyze Bayesian models for genetic marker data of contemporary individuals. The major focus is on modeling the unobserved recent ancestry of the sampled individuals (say, for tens of generations or so), which is carried out using explicit probabilistic reconstructions of the pedigree structures accompanied by the gene flows at the marker loci. For such a recent history, the recombination process is the major genetic force shaping the genomes of the individuals, and it is included in the model by assuming that the recombination fractions between adjacent markers are known. The posterior distribution of the unobserved history of the individuals is studied conditionally on the observed marker data using a Markov chain Monte Carlo (MCMC) algorithm. The example analyses consider estimation of the population structure, the relatedness structure (both at the level of whole genomes and at each marker separately), and haplotype configurations. For situations where the pedigree structure is partially known, an algorithm to create an initial state for the MCMC algorithm is given. Furthermore, the thesis includes an extension of the model for the recent genetic history to situations where a quantitative phenotype has also been measured from the contemporary individuals.
In that case the goal is to identify positions on the genome that affect the observed phenotypic values. This task is carried out within the Bayesian framework, where the number and the relative effects of the quantitative trait loci are treated as random variables whose posterior distribution is studied conditionally on the observed genetic and phenotypic data. In addition, the thesis contains an extension of a widely used haplotyping method, the PHASE algorithm, to settings where genetic material from several individuals has been pooled together and the allele frequencies of each pool are determined in a single genotyping.
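As a toy illustration of the MCMC machinery described above, the following sketch runs a Metropolis sampler for the posterior of a single allele frequency; it is a deliberately minimal stand-in for the thesis's far richer pedigree and gene-flow reconstructions, and all numbers are invented:

```python
import math
import random

def log_lik(q, s, n):
    # log-likelihood of allele frequency q when s of n sampled
    # alleles are of the type of interest (binomial sampling)
    return s * math.log(q) + (n - s) * math.log(1.0 - q)

def metropolis_mean(s, n, steps=20000, seed=1):
    """Posterior mean of q under a uniform prior, estimated by a
    random-walk Metropolis sampler with the first half as burn-in."""
    rng = random.Random(seed)
    q, total, kept = 0.5, 0.0, 0
    for i in range(steps):
        prop = q + rng.uniform(-0.05, 0.05)
        # proposals outside (0, 1) have prior density zero: reject
        if 0.0 < prop < 1.0 and math.log(rng.random()) < log_lik(prop, s, n) - log_lik(q, s, n):
            q = prop
        if i >= steps // 2:
            total += q
            kept += 1
    return total / kept

est = metropolis_mean(s=30, n=100)  # close to (30 + 1) / (100 + 2), about 0.304
```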

Relevance: 20.00%

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods for the haplotype inference problem are presented, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population. Thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes. Therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains in order to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability.
BACH is the most accurate method presented in this thesis, and its performance is comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities between two sequences are due to the sequences being related, or merely due to chance. Similarity of sequences is measured by their best local alignment score, and from that a p-value is computed. This p-value is the probability of picking from the null model two sequences whose best local alignment score is as good as or better than the observed one. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
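The best local alignment score that the significance framework assigns a p-value to is the classical Smith-Waterman score; a textbook implementation is sketched below (scoring parameters are illustrative, and this is not code from the thesis):

```python
def local_alignment_score(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman best local alignment score with a linear gap
    penalty, computed row by row in O(len(a) * len(b)) time."""
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            # local alignment: scores are clamped at zero
            cur[j] = max(0, prev[j - 1] + sub, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best

local_alignment_score("ACGT", "TTACGTT")  # 8: the exact match "ACGT"
```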

Relevance: 20.00%

Abstract:

We study how probabilistic reasoning and inductive querying can be combined within ProbLog, a recent probabilistic extension of Prolog. ProbLog can be regarded as a database system that supports both probabilistic and inductive reasoning through a variety of querying mechanisms. After a short introduction to ProbLog, we provide a survey of the different types of inductive queries that ProbLog supports, and show how it can be applied to the mining of large biological networks.
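ProbLog's distribution semantics can be illustrated in plain Python: the probability of a query is the total probability of the possible worlds in which it holds. The brute-force enumeration below (ProbLog itself does this efficiently, via binary decision diagrams) computes a path query over a tiny probabilistic graph; the graph and its probabilities are invented for illustration:

```python
from itertools import product

# Probabilistic edges, as in ProbLog facts like "0.8::edge(a, b)."
edges = {('a', 'b'): 0.8, ('b', 'c'): 0.7, ('a', 'c'): 0.5}

def reachable(present, src, dst):
    # depth-first search over the edges present in one possible world
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(v for (u, v) in present if u == node)
    return False

def query_prob(src, dst):
    """Probability that dst is reachable from src, summed over all
    2^|edges| possible worlds weighted by their probability."""
    keys = list(edges)
    total = 0.0
    for world in product([True, False], repeat=len(keys)):
        present = [e for e, on in zip(keys, world) if on]
        weight = 1.0
        for e, on in zip(keys, world):
            weight *= edges[e] if on else 1.0 - edges[e]
        if reachable(present, src, dst):
            total += weight
    return total

query_prob('a', 'c')  # 1 - (1 - 0.5) * (1 - 0.8 * 0.7) = 0.78
```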

Relevance: 20.00%

Abstract:

Based on the Aristotelian criterion referred to as 'abductio', Peirce suggests a method of hypothetical inference which operates differently from the deductive and inductive methods. "Abduction is nothing but guessing" (Peirce, 7.219). This principle is of extreme value for the study of our understanding of mathematical self-similarity in both of its typical presentations: relative and absolute. In the first case, abduction incarnates the quantitative/qualitative relationships of a self-similar object or process; in the second case, abduction makes understandable the statistical treatment of self-similarity, 'guessing' the continuity of geometric features to infinity through the use of a systematic stereotype (for instance, the assumption that the general shape of the Sierpiński triangle continues identically in its particular shapes). The metaphor coined by Peirce, of an exact map containing the same exact map (a map of itself), is not only the most important precedent of Mandelbrot's problem of measuring the boundaries of a continuous irregular surface with a logarithmic ruler, but is also still a useful abstraction for conceptualising relative and absolute self-similarity and their mechanisms of implementation. It is useful, also, for explaining some of the most basic geometric ontologies as mental constructions: the notion of infinite convergence of points in the corners of a triangle, or the intuition for defining two parallel straight lines as two lines in a plane that 'never' intersect.

Relevance: 20.00%

Abstract:

In the thesis we consider inference for cointegration in vector autoregressive (VAR) models. The thesis consists of an introduction and four papers. The first paper proposes a new test for cointegration in VAR models that is directly based on the eigenvalues of the least squares (LS) estimate of the autoregressive matrix. In the second paper we compare a small-sample correction for the likelihood ratio (LR) test of cointegrating rank with the bootstrap. The simulation experiments show that the bootstrap works very well in practice and dominates the correction factor. The tests are applied to international stock price data, and the finite sample performance of the tests is investigated by simulating the data. The third paper studies the demand for money in Sweden 1970-2000 using the I(2) model. In the fourth paper we re-examine the evidence of cointegration between international stock prices. The paper shows that some of the previous empirical results can be explained by the small-sample bias and size distortion of Johansen's LR tests for cointegration. In all papers we work with two data sets. The first is a Swedish money demand data set with observations on the money stock, the consumer price index, gross domestic product (GDP), the short-term interest rate and the long-term interest rate. The data are quarterly and the sample period is 1970(1)-2000(1). The second data set consists of month-end stock market index observations for Finland, France, Germany, Sweden, the United Kingdom and the United States from 1980(1) to 1997(2). Both data sets are typical of the sample sizes encountered in economic data, and the applications illustrate the usefulness of the models and tests discussed in the thesis.
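To make the first paper's test statistic concrete: for a bivariate VAR(1), x_t = A x_{t-1} + e_t, the LS estimate of A and the moduli of its eigenvalues can be computed as below (moduli near one point to unit roots). This pure-Python 2x2 sketch uses an invented noiseless series, so the estimate recovers A exactly; it illustrates the ingredient of the test, not the thesis's test itself:

```python
import cmath

def ols_var1(x):
    """LS estimate of A in x_t = A x_{t-1} + e_t for a 2-d series x, via
    the normal equations A = (sum x_t x_{t-1}') (sum x_{t-1} x_{t-1}')^-1."""
    sxy = [[0.0, 0.0], [0.0, 0.0]]
    sxx = [[0.0, 0.0], [0.0, 0.0]]
    for prev, cur in zip(x, x[1:]):
        for i in range(2):
            for j in range(2):
                sxy[i][j] += cur[i] * prev[j]
                sxx[i][j] += prev[i] * prev[j]
    det = sxx[0][0] * sxx[1][1] - sxx[0][1] * sxx[1][0]
    inv = [[sxx[1][1] / det, -sxx[0][1] / det],
           [-sxx[1][0] / det, sxx[0][0] / det]]
    return [[sum(sxy[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def eigen_moduli(a):
    # eigenvalues of a 2x2 matrix from its trace and determinant
    tr = a[0][0] + a[1][1]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return sorted(abs((tr + s * disc) / 2.0) for s in (1, -1))

# noiseless series generated from a known A: LS recovers A exactly
A = [[0.5, 0.1], [0.2, 0.4]]
x = [[1.0, 0.0]]
for _ in range(20):
    p = x[-1]
    x.append([A[0][0] * p[0] + A[0][1] * p[1], A[1][0] * p[0] + A[1][1] * p[1]])
moduli = eigen_moduli(ols_var1(x))  # [0.3, 0.6]: both well inside the unit circle
```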

Relevance: 20.00%

Abstract:

Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the early 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem. These were published in a memoir in 1774, which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which he depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. At the 1894 meeting of the International Statistical Institute, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea was that the sample would be a miniature of the population, an idea that still prevails. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science. He revolutionized the theory of statistics.
In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are that samples are drawn repeatedly from the same population and that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design of the CPS. An important criterion was to have a method whose data collection costs were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.
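The kind of calculation behind Laplace's 1781 plan, determining the sample size needed for a desired accuracy, survives in modern form as the normal-approximation sample size for a proportion; the sketch below is that modern textbook version, not Laplace's own inverse-probability derivation:

```python
import math

def sample_size(d, z=1.96, p=0.5):
    """Smallest n such that a sample proportion estimates the population
    proportion to within margin d, at the confidence level implied by the
    normal quantile z (1.96 for 95%). p = 0.5 is the worst case, since
    it maximizes the variance term p * (1 - p)."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

sample_size(d=0.05)  # 385 respondents for a 5-point margin at 95% confidence
```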

Relevance: 10.00%

Abstract:

The thesis discusses the ways in which concepts and methodology developed in evolutionary biology can be applied to the explanation and study of language change. The parallel nature of the mechanisms of biological evolution and language change is explored, along with the history of the exchange of ideas between these two disciplines. Against this background, computational methods developed in evolutionary biology are considered in terms of their applicability to the study of historical relationships between languages. Different phylogenetic methods are explained in common terminology, avoiding the technical language of statistics. The thesis is on the one hand a synthesis of earlier scientific discussion, and on the other an attempt to map out the problems of earlier approaches and, on that basis, to find new guidelines for the study of language change. The source material consists primarily of literature on the connections between evolutionary biology and language change, along with research articles describing applications of phylogenetic methods to language change. The thesis starts out by describing the initial development of the disciplines of evolutionary biology and historical linguistics, a process which right from the beginning can be seen to have involved an exchange of ideas concerning the mechanisms of language change and biological evolution. The historical discussion lays the foundation for the treatment of the generalised account of selection developed during recent decades. This account is aimed at creating a theoretical framework capable of explaining both biological evolution and cultural change as selection processes acting on self-replicating entities. The thesis focuses on the capacity of the generalised account of selection to describe language change as a process of this kind. In biology, the mechanisms of evolution are seen to form populations of genetically related organisms through time.
One of the central questions explored in this thesis is whether selection theory makes it possible to picture languages as forming populations of a similar kind, and what such a perspective can offer to the understanding of language in general. In historical linguistics, the comparative method and other, complementary methods have traditionally been used to study the development of languages from a common ancestral language. Computational, quantitative methods have not become a widely used part of the central methodology of historical linguistics. After the limited popularity enjoyed by the lexicostatistical method since the 1950s faded, it is only in recent years that the computational methods of phylogenetic inference used in evolutionary biology have been applied to the study of early language history. In this thesis the possibilities offered by the traditional methodology of historical linguistics and by the new phylogenetic methods are compared. The methods are approached through the ways in which they have been applied to the Indo-European languages, the language family most thoroughly investigated with both the traditional and the phylogenetic methods. The problems of these applications, along with the optimal form of the linguistic data used in these methods, are explored in the thesis. The mechanisms of biological evolution are seen in the thesis as parallel to the mechanisms of language change only in a limited sense, yet sufficiently so that the development of a generalised account of selection is deemed potentially fruitful for understanding language change. These similarities are also seen to support the validity of using phylogenetic methods in the study of language history, although the use of linguistic data and the models of language change employed by these methods are seen to await further development.
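The raw ingredient of the lexicostatistical method mentioned above is easy to state concretely: the fraction of meanings on a standard word list for which two languages share a cognate class. The sketch below uses invented languages and invented cognate codings purely for illustration:

```python
# Each language maps meanings on a shared word list to an integer
# cognate-class label; equal labels mean the words are cognate.
wordlists = {
    "lang_a": {"water": 1, "fire": 1, "dog": 2, "stone": 1, "eye": 3},
    "lang_b": {"water": 1, "fire": 2, "dog": 2, "stone": 1, "eye": 3},
    "lang_c": {"water": 2, "fire": 2, "dog": 1, "stone": 2, "eye": 1},
}

def shared_cognate_fraction(l1, l2):
    """Fraction of shared meanings coded as cognate between two
    languages -- the similarity measure lexicostatistics starts from."""
    meanings = wordlists[l1].keys() & wordlists[l2].keys()
    same = sum(wordlists[l1][m] == wordlists[l2][m] for m in meanings)
    return same / len(meanings)

shared_cognate_fraction("lang_a", "lang_b")  # 4/5 = 0.8
```

Distance matrices built from such fractions are what distance-based phylogenetic methods then turn into trees.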

Relevance: 10.00%

Abstract:

The purpose of this study is to analyze and develop various forms of abduction as a means of conceptualizing processes of discovery. Abduction was originally presented by Charles S. Peirce (1839-1914) as a "weak", third main mode of inference, besides deduction and induction, one which, he proposed, is closely related to many kinds of cognitive processes, such as instincts, perception, practices and mediated activity in general. Both abduction and discovery are controversial issues in the philosophy of science. It is often claimed that discovery cannot be a proper subject area for conceptual analysis and that, accordingly, abduction cannot serve as a "logic of discovery". I argue, however, that abduction provides essential means for understanding processes of discovery, although it cannot give rise to a manual or algorithm for making discoveries. In the first part of the study, I briefly present how the main trend in the philosophy of science has for a long time been critical towards a systematic account of discovery. Various models have, however, been suggested. I outline a short history of abduction: first the evolving forms of Peirce's own theory, and then later developments. Although abduction has not been a major area of research until quite recently, I review some critiques of it and look at the ways it has been analyzed, developed and used in various fields of research. Peirce's own writings and later developments, I argue, leave room for various subsequent interpretations of abduction. The second part of the study consists of six research articles. First I treat "classical" arguments against abduction as a logic of discovery. I show that these arguments can be countered by developing the strategic aspects of abductive inference. Nowadays the term 'abduction' is often used as a synonym for the Inference to the Best Explanation (IBE) model.
I argue, however, that it is useful to distinguish between IBE ("Harmanian abduction") and "Hansonian abduction", the latter concentrating on analyzing processes of discovery. The distinctions between loveliness and likeliness, and between potential and actual explanations, are more fruitful within Hansonian abduction. I clarify the nature of abduction by using Peirce's distinction between three areas of "semeiotic": grammar, critic, and methodeutic. Grammar (emphasizing "Firstnesses" and iconicity) and methodeutic (i.e., a processual approach) especially give new means for understanding abduction. Peirce himself held the controversial view that new abductive ideas are products of an instinct and an inference at the same time. I maintain that it is beneficial to make a clear distinction between abductive inference and abductive instinct, on the basis of which both can be developed further. Besides these, I analyze abduction as a part of distributed cognition, which emphasizes long-term interaction with the material, social and cultural environment as a source of abductive ideas. This approach suggests a "trialogical" model in which inquirers are fundamentally connected both to other inquirers and to the objects of inquiry. As for the classical Meno paradox about discovery, I show that abduction provides more than one answer. As my main example of abductive methodology, I analyze the process of Ignaz Semmelweis' research on childbed fever. A central basis for abduction is the claim that discovery is not a sequence of events governed only by chance. Abduction treats those processes which both constrain and instigate the search for new ideas, beginning with the use of clues as a starting point for discovery but continuing with considerations like elegance and 'loveliness'. The study thus continues a Peircean-Hansonian research programme by developing abduction as a way of analyzing processes of discovery.

Relevance: 10.00%

Abstract:

Workplace bullying is a topic of current interest in Finland. It is found in all professions, including the artistic ones. This thesis aims to explore workplace bullying from the viewpoint of Finland-Swedish actors, as a phenomenon that is difficult to define within dramatic art because the body and emotions of an actor constitute his or her working tools. The research aims to deepen the understanding of actors' working situation, and particularly of the difficulties and problems actors face when practising their profession. The research problems are: What forms of bullying are the actors exposed to? Who is bullying? How is the bullying received by the actors, and what are the possible consequences? The theoretical orientation of this thesis is based upon dialogical philosophy, where phenomenology, hermeneutics and dialogue meet in an orientation in which the unseen is emphasized and made visible. Artistic leadership should be based upon a pedagogical understanding that, through an open and equal dialogue with the Other, recognizes human diversity. The narrative research was undertaken using an interview guide for the interviews with eleven actors, six women and five men, with the voice of a sixth man represented by an article. The interviews, each on average 118 minutes, were recorded and transcribed. The discursive analysis began with numerous reflective readings based on analytic induction. The inductive part of the analysis consisted of mapping out the individual experiences of bullying, after which common connecting features were sought in the extensive material. The coded data was then deductively grouped according to the research problems, and subgroups were formed for deeper description. The research findings show that workplace bullying is an everyday occurrence within the field of dramatic art. Actors are bullied by theatre managers and directors as well as by colleagues and other personnel.
The main areas of bullying are depreciation of one's professional skills, the prevailing jargon, sexual harassment, collective bullying, and bullying because of personal qualities. A significant finding concerning this problem was the existing culture of silence. Even when actually seeing and hearing a colleague being bullied, few stood up to defend the person being bullied, out of fear of retaliation. Even those who were themselves the targets of bullying found it difficult to take any action.

Relevance: 10.00%

Abstract:

This study examined the fundamental question of what really matters when selecting a new employee. The study focused on the tacit knowledge used by personnel recruiters in employment interviews. Knowledge was defined as the best view available, which helps one not to act haphazardly. Tacit knowledge was also defined as a positive concept, and it was seen as a part of personnel recruiters' developing proficiency. The research topic was chosen based on the observed increase in the number of employment interviews and their importance in society. As recruiting is becoming a more distinct profession, it was reasonable to approach the topic from an educational point of view. The following research problems guided the examination of the phenomenon: 1) Where does the interviewer seek tacit knowledge from during the employment interview? 2) How is tacit knowledge achieved during the employment interview? 3) How does the interviewer defend the significance of the tacit knowledge gained as knowledge that influences the selection decision? The research data was collected by interviewing six personnel recruiters who conduct and evaluate employment interviews as part of their work responsibilities. The interview themes were linked to a recently made selection decision in each organization and the preceding employment interview with the selected candidate. In order to conceptualize tacit knowledge, reflective consideration of the interview event was used in the study. The transcribed research data was analyzed inductively. As a result of the study, the objects of tacit knowledge in the context of an employment interview culminated in three areas: the applicant's verbal communication, the applicant's non-verbal communication, and the interaction between the interview participants.
Observations directed toward those objects were shown to be intentional, and three schemes were found behind them: experiences from previous interviews, the applicant's application documents, and the aptitude required for the work responsibilities. The question of gaining knowledge was answered with the concept of procedural knowledge. Personnel recruiters were found to have four different but interconnected ways of encountering knowledge during an employment interview: understanding, evaluative, revealing, and approving knowing. In order to explain the importance given to tacit knowledge, it was examined in connection with the most prevalent practices in the personnel selection industry. The significance of this knowledge as knowledge that has an impact on the decision was supported by references to collective opinion (other people agree with it), circumstance (the interview's short duration), or the use of some instrument (a structured interview). The study revealed new aspects of the employment selection process through its examination of tacit knowledge. The characteristics of the inductive analysis of the research data may also be utilized, when applicable, in tacit knowledge research within other contexts.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The purpose of this research was to examine teachers' pedagogical thinking based on beliefs. It aimed to investigate and identify beliefs in teachers' speech when they were reflecting on their own teaching. The placement of beliefs within the levels of pedagogical thinking was also examined. The second starting point for the study was the Instrumental Enrichment intervention, which aims to enhance students' learning potential and cognitive functioning. The goal of this research was to investigate how the five main principles of the intervention come forward in teachers' thinking. A more specific research question was how similar teachers' beliefs are to the main principles of the intervention. The teacher-thinking paradigm provided the framework for this study, and the essential concepts of the study are defined precisely in the theoretical framework. The model of pedagogical thinking was central to the examination of teachers' thinking, while beliefs were approached through several different theories. Feuerstein's theories of Structural Cognitive Modifiability and Mediated Learning Experience complemented the theory of teacher thinking. The research material was gathered in two parts. In the first part, two mathematics lessons of each of three class teachers were videotaped. In the second part, the teachers were interviewed using a stimulated recall method. The interviews were recorded and analysed by qualitative content analysis. Teachers' beliefs were divided into themes, and the contents of these themes were described. This part of the analysis was inductive; the second part was deductive, based on the theories of pedagogical thinking levels and the Instrumental Enrichment intervention. According to the research results, three subcategories of teachers' beliefs were found: beliefs about learning, beliefs about teaching, and beliefs about students. When the teachers discussed learning, they emphasized the importance of understanding. In beliefs about teaching, student-centredness was highlighted.
The teachers also brought out some requirements for good teaching: clarity, diversity, and planning. Beliefs about students were divided into two groups: the teachers believed that there are learning differences between students and that students have improved over the years. Because most of the beliefs were close to practice and related to concrete classroom situations, they were situated at the Action level of pedagogical thinking. Some teaching- and learning-related beliefs of individual teachers were situated at the Object theory level; no Metatheory-level beliefs were found. The occurrence of the intervention's main principles differed between teachers: they were much more consistent and visible in the beliefs of one teacher than in those of the other two. Differences also occurred between principles. For example, reciprocity came up in every teacher's beliefs, whereas modifiability was found only in the beliefs of one teacher. The results of this research were consistent with other research in the field. Teachers' beliefs about teaching were individual: even though shared themes were found, the teachers emphasized different aspects of their work. The occurrence of beliefs in accordance with the intervention was teacher-specific, and inconsistencies were also found within individual teachers' beliefs.