38 results for A-level Mathematics
Abstract:
This research examines teacher identity formation in mathematics teacher education from three aspects: the cognitive and affective aspects of becoming a teacher, the image of an ideal teacher directing the developmental process, and identity formation as an on-going process. The formation of emerging teacher identity was approached in a social psychological framework, in which individual development takes place in social interaction with the context through various experiences. The formation of teacher identity is seen as a dynamic, on-going developmental process in which an individual intentionally aspires after the ideal image of being a teacher by developing his or her own competence as a teacher. The starting point was that it is possible to examine the formation of teacher identity through conceptualisation of observations that the individual and others have about teacher identity in different situations. The research uses a qualitative case study approach to the formation of emerging teacher identity, the individual developmental process and the socially constructed image of an ideal mathematics teacher. Two student cases, John and Mary, and the collective case of teacher educators representing socially shared views of becoming and being a mathematics teacher are presented. The development of each student was examined on the basis of three semi-structured interviews supplemented with written products. The data gathering took place during the 2005-2006 academic year. The collective case about the ideal image provided during the programme was composed of separate case displays of each teacher educator, mainly based on semi-structured interviews in the spring term of 2006. The intentions and aims set for students were of special interest in the interviews with teacher educators. The interview data were analysed following a modified form of analytic induction. The formation of teacher identity is elaborated through three themes emerging from the theoretical considerations and the cases. First, the profile of one's present state as a teacher may be scrutinised through separate affective and cognitive aspects associated with the teaching profession; the differences between individuals arise through different emphases on these aspects. Similarly, the socially constructed image of an ideal teacher may be profiled through a combination of aspects associated with the teaching profession. Second, the ideal image directing the individual developmental process is the level at which individual and social processes meet. Third, the formation of teacher identity is about becoming a teacher both in the eyes of the individual self and in those of others in the context. It is a challenge for academic mathematics teacher education to support the various cognitive and affective aspects associated with being a teacher so that professional practice and further development have a coherent starting point that the individual can internalise.
Abstract:
The influence of the architecture of the Byzantine capital spread to the Mediterranean provinces with travelling masters and architects. In this study the architecture of the Constantinopolitan School has been identified on the basis of the typology of churches, complemented by certain morphological aspects where necessary. The impact of the Constantinopolitan workshops appears to have been more important than previously realized. The research revealed that the Constantinopolitan composite domed inscribed-cross type, or cross-in-square, spread throughout the Balkans and was soon adopted by the local schools of architecture. In addition, two novel variants were invented on the basis of this model: the semi-composite type and the so-called Athonite type. In the latter variant lateral conches, choroi, were added for liturgical reasons. By contrast, the origin of the domed ambulatory church was partly provincial. One result of this study is that the origin of the Middle Byzantine domed octagonal types was traced to Constantinople, as attested by the archaeological evidence. Some other architectural elements that have not been preserved in the destroyed capital have also survived at the provincial level: the domed hexagonal type, the multi-domed superstructure, the pseudo-octagon and the narthex known as the lite. The Constantinopolitan architecture of the period in question was based on Early Christian and Late Antique forms, practices and innovations, and this also emerges at the provincial level.
Abstract:
One of the most fundamental questions in the philosophy of mathematics concerns the relation between truth and formal proof. The position according to which the two concepts are the same is called deflationism, and the opposing viewpoint substantialism. In an important result of mathematical logic, Kurt Gödel proved in his first incompleteness theorem that all consistent formal systems containing arithmetic include sentences that can neither be proved nor disproved within that system. However, such undecidable Gödel sentences can be established to be true once we expand the formal system with Alfred Tarski's semantical theory of truth, as shown by Stewart Shapiro and Jeffrey Ketland in their semantical arguments for the substantiality of truth. According to them, in Gödel sentences we have an explicit case of true but unprovable sentences, and hence deflationism is refuted. Against that, Neil Tennant has shown that instead of Tarskian truth we can expand the formal system with a soundness principle, according to which all provable sentences are assertable, and the assertability of Gödel sentences follows. This way, the relevant question is not whether we can establish the truth of Gödel sentences, but whether Tarskian truth is a more plausible expansion than a soundness principle. In this work I argue that this problem is best approached once we think of mathematics as the full human phenomenon, and not just as consisting of formal systems. When pre-formal mathematical thinking is included in our account, we see that Tarskian truth is in fact not an expansion at all. I claim that what proof is to formal mathematics, truth is to pre-formal thinking, and the Tarskian account of semantical truth mirrors this relation accurately. However, the introduction of pre-formal mathematics is vulnerable to the deflationist counterargument that, while existing in practice, pre-formal thinking could still be philosophically superfluous if it does not refer to anything objective. Against this, I argue that all truly deflationist philosophical theories lead to the arbitrariness of mathematics. In all other philosophical accounts of mathematics there is room for a reference of pre-formal mathematics, and the expansion to Tarskian truth can be made naturally. Hence, if we reject the arbitrariness of mathematics, I argue in this work, we must accept the substantiality of truth. Related subjects such as neo-Fregeanism will also be covered, and shown not to change the need for Tarskian truth. The only remaining route for the deflationist is to change the underlying logic so that our formal languages can include their own truth predicates, which Tarski showed to be impossible for classical first-order languages. With such logics we would have no need to expand the formal systems, and the above argument would fail. Of the alternative approaches, in this work I focus mostly on the Independence Friendly (IF) logic of Jaakko Hintikka and Gabriel Sandu. Hintikka has claimed that an IF language can include its own adequate truth predicate. I argue that while this is indeed the case, we cannot recognize the truth predicate as such within the same IF language, and the need for Tarskian truth remains. In addition to IF logic, second-order logic and Saul Kripke's approach using Kleene's three-valued logic will also be shown to fail in a similar fashion.
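For readers unfamiliar with the technical setup, the following is a compressed sketch of the standard argument structure referred to above, written in conventional notation (S a consistent formal system containing arithmetic, Prov_S its provability predicate, Con(S) its consistency statement); the exact systems and formulations used in the thesis may differ.

```latex
% Sketch of the standard argument (conventional notation assumed; not the thesis's exact formulation).
\begin{align*}
&\text{Diagonal lemma:} && S \vdash G \leftrightarrow \neg\mathrm{Prov}_S(\ulcorner G \urcorner)
  && \text{(the G\"odel sentence } G\text{)}\\
&\text{First incompleteness:} && S \nvdash G \ \text{and}\ S \nvdash \neg G
  && \text{(under G\"odel's assumptions on } S\text{)}\\
&\text{Soundness/reflection schema:} && \mathrm{Prov}_S(\ulcorner \varphi \urcorner) \rightarrow \varphi
  && \text{added for all sentences } \varphi\\
&\text{Instance } \varphi := (0=1)\text{:} && \neg\mathrm{Prov}_S(\ulcorner 0=1 \urcorner),\ \text{i.e.}\ \mathrm{Con}(S) &&\\
&\text{Formalized incompleteness:} && S \vdash \mathrm{Con}(S) \rightarrow G,
  \ \text{so the extended system proves } G. &&
\end{align*}
```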
Abstract:
The aim of this dissertation was to explore how different types of prior knowledge influence student achievement and how different assessment methods influence the observed effect of prior knowledge. The project started by creating a model of prior knowledge, which was then tested in various science disciplines. Study I explored the contribution of different components of prior knowledge to student achievement in two different mathematics courses. The results showed that the procedural knowledge components, which require higher-order cognitive skills, predicted the final grades best and were also highly related to previous study success. The same pattern regarding the influence of prior knowledge was also seen in Study III, a longitudinal study of the accumulation of prior knowledge in the context of pharmacy. The study analysed how prior knowledge from previous courses was related to student achievement in the target course. The results implied that students who possessed higher-level prior knowledge, that is, procedural knowledge, from previous courses also obtained higher grades in the more advanced target course. Study IV explored the impact of different types of prior knowledge on students' readiness to drop out of the course, on the pace of completing the course and on the final grade. The study was conducted in the context of chemistry. The results revealed again that students who performed well in the procedural prior-knowledge tasks were also likely to complete the course in the scheduled time and to obtain higher final grades. On the other hand, students whose performance was weak in the procedural prior-knowledge tasks were more likely to drop out or to take a longer time to complete the course. Study II explored the issue of prior knowledge from another perspective: it aimed to analyse the interrelations between academic self-beliefs, prior knowledge and student achievement in the context of mathematics. The results revealed that prior knowledge was more predictive of student achievement than the other variables included in the study. Self-beliefs were also strongly related to student achievement, but the predictive power of prior knowledge overruled the influence of self-beliefs when they were included in the same model. There was also a strong correlation between academic self-beliefs and prior-knowledge performance. The results of all four studies were consistent with each other, indicating that the model of prior knowledge may be used as a potential tool for prior-knowledge assessment. It is useful to make a distinction between different types of prior knowledge in assessment, since the type of prior knowledge students possess appears to make a difference. The results implied that there is indeed variation in students' prior knowledge and academic self-beliefs which influences student achievement, and this should be taken into account in instruction.
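As an illustration of the kind of analysis Study II describes, the sketch below compares the predictive contributions of self-beliefs and prior-knowledge scores in a two-step regression. All variable names, the data file and the two-step setup are hypothetical and are not taken from the thesis.

```python
# Illustrative sketch only: predicting course achievement first from academic self-beliefs,
# then adding prior-knowledge scores; the abstract reports that prior knowledge dominates.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("course_data.csv")                      # hypothetical data set and columns
self_beliefs    = df[["self_efficacy", "interest"]]
prior_knowledge = df[["declarative_pk", "procedural_pk"]]
grade           = df["final_grade"]

# Step 1: self-beliefs alone.
m1 = sm.OLS(grade, sm.add_constant(self_beliefs)).fit()

# Step 2: self-beliefs and prior knowledge in the same model.
m2 = sm.OLS(grade, sm.add_constant(pd.concat([self_beliefs, prior_knowledge], axis=1))).fit()

print("R^2, self-beliefs only:", round(m1.rsquared, 3))
print("R^2, with prior knowledge:", round(m2.rsquared, 3))
print(m2.params)                                         # relative predictive weights
```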
Abstract:
The purpose of this research was to examine teachers' pedagogical thinking based on beliefs. It aimed to investigate and identify beliefs from teachers' speech when they were reflecting on their own teaching. The placement of beliefs at the levels of pedagogical thinking was also examined. The second starting point for the study was the Instrumental Enrichment intervention, which aims to enhance the learning potential and cognitive functioning of students. The goal of this research was to investigate how the five main principles of the intervention come forward in teachers' thinking; a more specific research question was how similar teachers' beliefs are to the main principles of the intervention. The teacher-thinking paradigm provided the framework for this study, and the essential concepts of the study are defined in the theoretical framework. The model of pedagogical thinking was central to the examination of teachers' thinking. Beliefs were approached through several different theories. Feuerstein's theories of structural cognitive modifiability and mediated learning experience complemented the theory of teacher thinking. The research material was gathered in two parts. In the first part, two mathematics lessons of three class teachers were videotaped. In the second part, the teachers were interviewed using a stimulated recall method. The interviews were recorded and analysed by qualitative content analysis. Teachers' beliefs were divided into themes and the contents of these themes were described. This part of the analysis was inductive; the second part was deductive and based on the theories of pedagogical thinking levels and the Instrumental Enrichment intervention. According to the research results, three subcategories of teachers' beliefs were found: beliefs about learning, beliefs about teaching and beliefs about students. When the teachers discussed learning, they emphasized the importance of understanding. In teaching-related beliefs, student-centredness was highlighted. The teachers also brought out some requirements for good teaching: clarity, diversity and planning. Beliefs about students were divided into two groups: the teachers believed that there are learning differences between students and that students have improved over the years. Because most of the beliefs were close to practice and related to concrete classroom situations, they were situated at the Action level of pedagogical thinking. Some teaching- and learning-related beliefs of individual teachers were situated at the Object theory level; Metatheory level beliefs were not found. The occurrence of the main principles of the intervention differed between teachers: they were much more consistent and transparent in the beliefs of one teacher than in those of the other two. Differences also occurred between principles; for example, reciprocity came up in every teacher's beliefs, but modifiability was found only in the beliefs of one teacher. The results of this research were consistent with other research in the field. Teachers' beliefs about teaching were individual: even though shared themes were found, the teachers emphasized different aspects of their work. The occurrence of beliefs that were in accordance with the intervention was teacher-specific, and inconsistencies were also found within individual teachers' beliefs.
Abstract:
This research is based on problems in secondary school algebra that I have noticed in my own work as a teacher of mathematics. Algebra does not touch the pupil; it remains knowledge that is not used or tested. Furthermore, the performance level in algebra is quite low. This study presents a model for 7th-grade algebra instruction intended to make algebra more natural and useful to students. I refer to the instruction model as Idea-based Algebra (IDEAA). The basic ideas of the IDEAA model are 1) to combine children's own informal mathematics with scientific mathematics ("math math") and 2) to structure algebra content as a "map of big ideas", not as a traditional sequence of powers, polynomials, equations, and word problems. The research project is a kind of design process, or design research. As such, it has three intertwined goals: research, design and pedagogical practice. I also assume three roles. As a researcher, I want to learn about learning and school algebra, its problems and possibilities. As a designer, I use research in the intervention to develop a shared artefact, the instruction model. In addition, I want to improve practice through intervention and research. Design research of this kind is quite challenging: its goals and means are intertwined and change during the research process, theory emerges from the inquiry rather than being given a priori, and the aim of improving instruction is normative, as one should take into account what "good" means in school algebra. An important part of my study is to work out these paradigmatic questions. The result of the study is threefold. The main result is the instruction model designed in the study. The second result is the theory developed about teaching, learning and algebra. The third result is knowledge of the design process. The instruction model (IDEAA) is connected to four main features of good algebra education: 1) the situationality of learning, 2) learning as knowledge building, in which natural language and intuitive thinking work as "intermediaries", 3) the emergence and diversity of algebra, and 4) the development of high performance skills at any stage of instruction.
Abstract:
From Arithmetic to Algebra: changes in skills in the comprehensive school over 20 years. In recent decades the understanding of calculation has been emphasized in mathematics teaching. Many studies have found that better understanding helps to apply skills in new conditions and that the ability to think on an abstract level increases transfer to new contexts. In my research I treat competence as a matrix in which content forms the horizontal dimension and levels of thinking the vertical dimension. Know-how here means intellectual and strategic flexibility and understanding. The resources and limitations of memory affect learning in different ways in different phases, so both flexible conceptual thinking and automatization must be considered in learning. The research questions I examine are what kind of changes have occurred in mathematical skills in the comprehensive school over the last 20 years and what kind of conceptual thinking is demonstrated by students in this decade. The study consists of two parts. The first part is a statistical analysis of mathematical skills and their changes over the last 20 years in the comprehensive school; in the test the pupils did not use calculators. The second part is a qualitative analysis of the conceptual thinking of comprehensive school pupils in this decade. The study shows significant differences in algebra and in some parts of arithmetic. The largest differences were detected in calculation skills with fractions. In the 1980s two out of three pupils were able to complete tasks with fractions, but in the 2000s only one out of three pupils was able to do the same tasks. It is also remarkable that, of the students who could complete the tasks with fractions, only one in three was at the conceptual level in his or her thinking. This means that about 10% of pupils are able to understand an algebraic expression that has the same isomorphic structure as the corresponding arithmetical expression. This finding is important because the ability to think innovatively is created when learning the basic concepts. Keywords: arithmetic, algebra, competence
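As an illustration (not an item taken from the thesis) of what "the same isomorphic structure" means here, compare the arithmetical and algebraic forms of the fraction addition rule:

```latex
% Illustrative example of structurally isomorphic arithmetical and algebraic expressions
% (fraction addition); the thesis's actual test items may differ.
\[
\frac{2}{3} + \frac{4}{5} \;=\; \frac{2\cdot 5 + 4\cdot 3}{3\cdot 5}
\qquad\longleftrightarrow\qquad
\frac{a}{b} + \frac{c}{d} \;=\; \frac{ad + cb}{bd}
\]
```

A pupil operating only procedurally can compute the left-hand case yet fail to recognise the right-hand one as the same operation on general numbers.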
Abstract:
In this study the researcher wanted to show the observed connection between mathematics and textile work. To carry this out the researcher herself designed a textbook for the upper secondary school in the Tietoteollisuuden Naiset (TiNA) project at Helsinki University of Technology (URL: http://tina.tkk.fi/). The assignments were designed as additional teaching material to enhance and reinforce female students' confidence in mathematics and in the management of their textile work. The research strategy applied action research, of which two cycles have been carried out. The first cycle consisted of creating the textbook, and in the second cycle its usability was investigated; the third cycle is not included in this report. In the second cycle of the action research the data was collected from 15 teachers: five textile teachers, four mathematics teachers and six teachers of both subjects. They all familiarised themselves with the textbook assignments and answered a questionnaire on the basis of their own teaching experience. The questionnaire was constructed by applying theories of usability and research on teaching material assessment. The data consisted of qualitative and quantitative information, which was analysed by content analysis, with the help of a computer-assisted spreadsheet program, into either qualitative or statistical descriptions. According to the research results, the textbook assignments seemed to be better suited to mathematics lessons than to textile work. The assignments nevertheless pointed out the clear interconnectedness of textile work and mathematics. Most of the assignments could be used as such, or as applications, in upper secondary school textile work and mathematics lessons. The textbook assignments were also applicable at different stages of the teaching process, e.g. as introduction, as revision, to support individual work, or as group projects. In principle the textbook assignments were well placed and set at the correct level of difficulty. Negative findings concerned some assignments that were too difficult, a lack of pupil motivation, and task formats unfamiliar to the teacher. More clarity was wished for in some assignments, and a need was expressed especially for easy tasks and for assignments in geometry. Assignments leading to the pupil's independent thinking were also asked for. Two important improvements to the accessibility of the textbook would be to provide the assignments in HTML format over the Internet and to add a handicraft reference book.
Abstract:
Individual movement is highly versatile and ubiquitous in ecology. In this thesis I investigate two kinds of movement, body-condition-dependent dispersal and small-range foraging movements resulting in quasi-local competition, and their causes and consequences at the individual, population and metapopulation levels. Body-condition-dependent dispersal is a widely documented but poorly understood phenomenon, and diverse relationships between body condition and dispersal are observed in nature. I develop the first models that study the evolution of dispersal strategies that depend on individual body condition. In a patchy environment where patches differ in environmental conditions, individuals born in rich (e.g. nutritious) patches are on average stronger than their conspecifics born in poorer patches. Body condition (strength) determines competitive ability such that stronger individuals win competition with higher probability than weak individuals. Individuals compete for patches, so that kin competition selects for dispersal. I determine the evolutionarily stable strategy (ESS) for different ecological scenarios. My models offer explanations for both the dispersal of strong individuals and the dispersal of weak individuals. Moreover, I find that within-family dispersal behaviour is not always reflected at the population level. This supports the observation that no consistent pattern is detected in data on body-condition-dependent dispersal, and it encourages the refining of empirical investigations. Quasi-local competition refers to interactions between adjacent populations in which one population negatively affects the growth of the other. I model a metapopulation in a homogeneous environment where adults of different subpopulations compete for resources by spending part of their foraging time in the neighbouring patches, while their juveniles feed only on the resource in their natal patch. I show that spatial patterns (different population densities in the patches) are stable only if one age class depletes the resource strongly while mainly the other age class depends on it.
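For orientation, a useful baseline for the kin-competition logic described above is the classic condition-independent model of Hamilton and May (1977), in which each patch supports a single breeder and dispersers survive dispersal with probability p. The body-condition-dependent models of the thesis generalize this kind of setup; the formula below is given only as that classical baseline, not as a result of the thesis.

```latex
% Classic condition-independent baseline (Hamilton & May 1977), not a result of this thesis:
% with dispersal survival probability p, the evolutionarily stable dispersal fraction is
\[
d^{*} \;=\; \frac{1}{2 - p},
\]
% so even costly dispersal (p < 1) is favoured for more than half of the offspring,
% purely because staying home means competing with kin.
```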
Abstract:
Advancements in analysis techniques have led to a rapid accumulation of biological data in databases. Such data are often in the form of sequences of observations, examples including DNA sequences and the amino acid sequences of proteins. The scale and quality of the data promise answers to various biologically relevant questions in more detail than has been possible before. For example, one may wish to identify areas in an amino acid sequence which are important for the function of the corresponding protein, or investigate how characteristics at the level of the DNA sequence affect the adaptation of a bacterial species to its environment. Many of the interesting questions are intimately associated with understanding the evolutionary relationships among the items under consideration. The aim of this work is to develop novel statistical models and computational techniques to meet the challenge of deriving meaning from the increasing amounts of data. Our main concern is modeling the evolutionary relationships based on the observed molecular data. We operate within a Bayesian statistical framework, which allows a probabilistic quantification of the uncertainties related to a particular solution. As the basis of our modeling approach we utilize a partition model, which is used to describe the structure of the data by appropriately dividing the data items into clusters of related items. Generalizations and modifications of the partition model are developed and applied to various problems. Large-scale data sets also provide a computational challenge: the models used to describe the data must be realistic enough to capture the essential features of the current modeling task but, at the same time, simple enough to make it possible to carry out the inference in practice. The partition model fulfills these two requirements. Problem-specific features can be taken into account by modifying the prior probability distributions of the model parameters, and the computational efficiency stems from the ability to integrate out the parameters of the partition model analytically, which enables the use of efficient stochastic search algorithms.
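The following is a minimal sketch (not the thesis's actual model) of the computational idea mentioned above: with conjugate Dirichlet priors on column-wise category frequencies, the parameters of a partition model can be integrated out analytically, giving a closed-form marginal likelihood for any clustering of aligned sequences, and a simple stochastic search can then move items between clusters. The alphabet size, hyperparameter and search move are assumptions for illustration.

```python
import numpy as np
from scipy.special import gammaln

ALPHABET = 4          # e.g. DNA bases encoded as 0..3 (assumed)
ALPHA = 1.0           # symmetric Dirichlet hyperparameter (assumed)

def log_marginal(cluster_rows: np.ndarray) -> float:
    """Log marginal likelihood of one cluster: a product over columns of
    Dirichlet-multinomial integrals over the unknown category frequencies."""
    n_seq, n_col = cluster_rows.shape
    total = 0.0
    for j in range(n_col):
        counts = np.bincount(cluster_rows[:, j], minlength=ALPHABET)
        total += (gammaln(ALPHABET * ALPHA) - gammaln(n_seq + ALPHABET * ALPHA)
                  + np.sum(gammaln(counts + ALPHA) - gammaln(ALPHA)))
    return total

def score(data: np.ndarray, labels: np.ndarray) -> float:
    """Integrated likelihood of a whole partition: sum over non-empty clusters."""
    return sum(log_marginal(data[labels == k]) for k in np.unique(labels))

def stochastic_search(data: np.ndarray, labels: np.ndarray, n_iter: int = 2000, seed: int = 0):
    """Greedy stochastic search: propose moving a random item to a random cluster,
    keep the move if the integrated likelihood does not decrease."""
    rng = np.random.default_rng(seed)
    best = score(data, labels)
    for _ in range(n_iter):
        i = rng.integers(len(labels))
        proposal = labels.copy()
        proposal[i] = rng.integers(labels.max() + 1)
        s = score(data, proposal)
        if s >= best:
            labels, best = proposal, s
    return labels, best
```

In this toy version the number of clusters is bounded by the initial labelling; the models described in the abstract also place priors on the partition itself and use more sophisticated search and MCMC moves.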
Abstract:
Genetics, the science of heredity and variation in living organisms, has a central role in medicine, in breeding crops and livestock, and in studying fundamental topics of the biological sciences such as evolution and cell functioning. The field of genetics is currently developing rapidly because of recent advances in technologies by which molecular data can be obtained from living organisms. To extract the most information from such data, the analyses need to be carried out using statistical models that are tailored to take account of the particular genetic processes. In this thesis we formulate and analyze Bayesian models for genetic marker data of contemporary individuals. The major focus is on modeling the unobserved recent ancestry of the sampled individuals (say, for tens of generations or so), which is carried out by using explicit probabilistic reconstructions of the pedigree structures accompanied by the gene flows at the marker loci. For such a recent history, the recombination process is the major genetic force that shapes the genomes of the individuals, and it is included in the model by assuming that the recombination fractions between adjacent markers are known. The posterior distribution of the unobserved history of the individuals is studied conditionally on the observed marker data by using a Markov chain Monte Carlo (MCMC) algorithm. The example analyses consider estimation of the population structure, the relatedness structure (both at the level of whole genomes and at each marker separately), and haplotype configurations. For situations where the pedigree structure is partially known, an algorithm to create an initial state for the MCMC algorithm is given. Furthermore, the thesis includes an extension of the model for the recent genetic history to situations where a quantitative phenotype has also been measured from the contemporary individuals. In that case the goal is to identify positions on the genome that affect the observed phenotypic values. This task is carried out within the Bayesian framework, where the number and the relative effects of the quantitative trait loci are treated as random variables whose posterior distribution is studied conditionally on the observed genetic and phenotypic data. In addition, the thesis contains an extension of a widely used haplotyping method, the PHASE algorithm, to settings where genetic material from several individuals has been pooled together and the allele frequencies of each pool are determined in a single genotyping.
Abstract:
This work develops methods to account for shoot structure in models of coniferous canopy radiative transfer. Shoot structure, as it varies along the light gradient inside the canopy, affects the efficiency of light interception per unit needle area, foliage biomass, or foliage nitrogen. The clumping of needles in the shoot volume also causes a notable amount of multiple scattering of light within coniferous shoots. The effect of shoot structure on light interception is treated in the context of canopy-level photosynthesis and resource use models, and the phenomenon of within-shoot multiple scattering in the context of physical canopy reflectance models for remote sensing purposes. Light interception. A method for estimating the amount of PAR (Photosynthetically Active Radiation) intercepted by a conifer shoot is presented. The method combines modelling of the directional distribution of radiation above the canopy, fish-eye photographs taken at shoot locations to measure canopy gap fraction, and geometrical measurements of shoot orientation and structure. Data on light availability, shoot and needle structure and nitrogen content were collected from canopies of Pacific silver fir (Abies amabilis (Dougl.) Forbes) and Norway spruce (Picea abies (L.) Karst.). Shoot structure acclimated to the light gradient inside the canopy so that more shaded shoots had better light interception efficiency. The light interception efficiency of shoots varied about two-fold per needle area, about four-fold per needle dry mass, and about five-fold per nitrogen content. Comparison of fertilized and control stands of Norway spruce indicated that light interception efficiency is not greatly affected by fertilization. Light scattering. The structure of coniferous shoots gives rise to multiple scattering of light between the needles of the shoot. Using geometric models of shoots, multiple scattering was studied by photon tracing simulations. Based on the simulation results, the dependence of the scattering coefficient of a shoot on the scattering coefficient of the needles is shown to follow a simple one-parameter model. The single parameter, termed the recollision probability, describes the level of clumping of the needles in the shoot; it is wavelength independent and can be connected to previously used clumping indices. By using the recollision probability to correct for within-shoot multiple scattering, canopy radiative transfer models which have used leaves as basic elements can use shoots as basic elements, and thus be applied to coniferous forests. Preliminary testing of this approach seems to explain, at least partially, why coniferous forests appear darker than broadleaved forests in satellite data.
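The one-parameter relation mentioned above can be written out in the standard photon recollision probability form (a sketch using assumed notation, not necessarily the thesis's exact formulation): if a photon scattered by a needle hits another needle of the same shoot with probability p, and the scattering coefficient of a single needle is omega, summing over scattering orders gives the shoot scattering coefficient:

```latex
% Within-shoot multiple scattering with recollision probability p and needle
% scattering coefficient \omega (notation assumed for this sketch):
\[
\omega_{\text{shoot}}
  \;=\; (1-p)\,\omega \sum_{k=0}^{\infty} \bigl(p\,\omega\bigr)^{k}
  \;=\; \frac{(1-p)\,\omega}{1 - p\,\omega}
\]
```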
Abstract:
Microarrays are high-throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute for each gene a unique signal that is directly proportional to the quantity of mRNA that was hybridized on the chip. The large number of steps, and the errors associated with each step, make the generated expression signal noisy. As a result, microarray data need to be carefully pre-processed before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving the gene signal and further utilizing this improved signal for higher-level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment leads to a signal with low noise, as the effect of unwanted variation is minimized and the precision of the estimates of the parameters of interest is maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied to these three sets of expression values, corresponding to the three scans, to estimate the suitably calibrated true signal of the genes. Third, a novel image segmentation approach that separates the fluorescent signal from the undesired noise is developed using an additional dye, SYBR Green RNA II. This technique helps to identify signal arising only from the hybridized DNA, so that signal corresponding to dust, scratches, spilled dye and other noise is avoided. Fourth, an integrated statistical model is developed, in which signal correction, systematic array effects, dye effects, and differential expression are modelled jointly, as opposed to a sequential application of several methods of analysis. The methods described here have been tested only for cDNA microarrays but can also, with some modifications, be applied to other high-throughput technologies. Keywords: high-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
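As a rough illustration of the multiple-scan idea (the second point above), the sketch below combines several scans of the same array taken at different sensitivities under a simple linear-gain-with-saturation assumption. This is a hypothetical simplification for illustration only, not the Bayesian latent intensity model developed in the thesis; the saturation ceiling and function names are assumptions.

```python
# Simplified multi-scan combination: each scan s is assumed to measure y_s ~ gain_s * x
# for the true signal x, with readings clipped at the scanner's saturation ceiling.
import numpy as np

SATURATION = 65535.0   # 16-bit scanner ceiling (assumed)

def combine_scans(scans: np.ndarray) -> np.ndarray:
    """scans: array of shape (n_scans, n_spots), ordered from lowest to highest sensitivity.
    Returns an estimate of the true signal on the scale of the lowest-sensitivity scan."""
    n_scans, n_spots = scans.shape
    unsat = scans < 0.98 * SATURATION                      # mask of unsaturated readings
    # Estimate each scan's gain relative to scan 0 from spots unsaturated in both scans.
    gains = np.ones(n_scans)
    for s in range(1, n_scans):
        ok = unsat[0] & unsat[s]
        gains[s] = np.median(scans[s, ok] / np.maximum(scans[0, ok], 1.0))
    # Rescale every unsaturated reading to the common scale and average them per spot.
    rescaled = scans / gains[:, None]
    est = np.where(unsat.any(axis=0),
                   np.sum(np.where(unsat, rescaled, 0.0), axis=0)
                   / np.maximum(unsat.sum(axis=0), 1),
                   SATURATION / gains.min())               # all scans saturated: lower bound only
    return est
```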