7 results for Jacobian arithmetic
in Helda - Digital Repository of University of Helsinki
Abstract:
One of the most fundamental questions in the philosophy of mathematics concerns the relation between truth and formal proof. The position according to which the two concepts are the same is called deflationism, and the opposing viewpoint substantialism. In an important result of mathematical logic, Kurt Gödel proved in his first incompleteness theorem that all consistent formal systems containing arithmetic include sentences that can neither be proved nor disproved within that system. However, such undecidable Gödel sentences can be established to be true once we expand the formal system with Alfred Tarski's semantical theory of truth, as shown by Stewart Shapiro and Jeffrey Ketland in their semantical arguments for the substantiality of truth. According to them, in Gödel sentences we have an explicit case of true but unprovable sentences, and hence deflationism is refuted. Against that, Neil Tennant has shown that instead of Tarskian truth we can expand the formal system with a soundness principle, according to which all provable sentences are assertable, and the assertability of Gödel sentences follows. This way, the relevant question is not whether we can establish the truth of Gödel sentences, but whether Tarskian truth is a more plausible expansion than a soundness principle. In this work I will argue that this problem is best approached once we think of mathematics as the full human phenomenon, and not just as consisting of formal systems. When pre-formal mathematical thinking is included in our account, we see that Tarskian truth is in fact not an expansion at all. I claim that what proof is to formal mathematics, truth is to pre-formal thinking, and the Tarskian account of semantical truth mirrors this relation accurately. However, the introduction of pre-formal mathematics is vulnerable to the deflationist counterargument that while existing in practice, pre-formal thinking could still be philosophically superfluous if it does not refer to anything objective. Against this, I argue that all truly deflationist philosophical theories lead to the arbitrariness of mathematics. In all other philosophical accounts of mathematics there is room for a reference for pre-formal mathematics, and the expansion with Tarskian truth can be made naturally. Hence, if we reject the arbitrariness of mathematics, I argue in this work, we must accept the substantiality of truth. Related subjects such as neo-Fregeanism will also be covered, and shown not to change the need for Tarskian truth. The only remaining route for the deflationist is to change the underlying logic so that our formal languages can include their own truth predicates, which Tarski showed to be impossible for classical first-order languages. With such logics we would have no need to expand the formal systems, and the above argument would fail. Of the alternative approaches, in this work I focus mostly on the Independence Friendly (IF) logic of Jaakko Hintikka and Gabriel Sandu. Hintikka has claimed that an IF language can include its own adequate truth predicate. I argue that while this is indeed the case, we cannot recognize the truth predicate as such within the same IF language, and the need for Tarskian truth remains. In addition to IF logic, second-order logic and Saul Kripke's approach using Kleene's logic will also be shown to fail in a similar fashion.
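For orientation, the three schemata contrasted in this abstract can be written out in standard notation (a schematic rendering of ours, not quoted from the thesis). The Gödel sentence G is a fixed point of unprovability, Tarski's T-schema attaches a truth predicate to each sentence, and the soundness (reflection) principle asserts each provable sentence:

\[
G \leftrightarrow \neg\mathrm{Prov}(\ulcorner G \urcorner), \qquad
\mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi, \qquad
\mathrm{Prov}(\ulcorner \varphi \urcorner) \rightarrow \varphi.
\]

Extending the system with either the Tarskian truth theory (the T-schema together with compositional axioms) or the reflection schema suffices to derive G, which is why the dispute turns on which expansion is the more natural one.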
Abstract:
Objectives. The sentence span task is a complex working memory span task used for estimating total working memory capacity for both processing (sentence comprehension) and storage (remembering a set of words). Several traditional models of working memory suggest that performance on these tasks relies on phonological short-term storage. However, long-term memory effects, as well as the effects of expertise and strategies, have challenged this view. This study uses a working memory task that aids the creation of retrieval structures in the form of stories, which have been shown to form integrated structures in long-term memory. The research question is whether sentence and story contexts boost memory performance in a complex working memory task. The hypothesis is that storage of the words in the task takes place in long-term memory. Evidence of this would be better recall for words as parts of sentences than for separate words, and, particularly, a beneficial effect for words that are part of an organized story. Methods. Twenty stories consisting of five sentences each were constructed, and the stimuli in all experimental conditions were based on these sentences and sentence-final words, reordered and recombined for the other conditions. Participants read aloud sets of five sentences that either formed a story or did not. In one condition they had to report all the last words at the end of the set; in another, they memorised an additional separate word with each sentence. The sentences were presented on the screen one word at a time (500 ms per word). After the presentation of each sentence, the participant verified a statement about the sentence. After five sentences, the participant repeated back the words in their correct positions. Experiment 1 (n=16) used immediate recall; Experiment 2 (n=21) used both immediate recall and recall after a distraction interval (the operation span task). In Experiment 2, a distracting mental arithmetic task was presented instead of recall in half of the trials, and an individual word was added before each sentence in the two experimental conditions in which the participants were to memorise the sentence-final words. Participants also performed a listening span task (in Experiment 1) or an operation span task (in Experiment 2) to allow comparison of the estimated span and performance in the story task. Results were analysed using correlations, repeated measures ANOVA and a chi-square goodness-of-fit test on the distribution of errors. Results and discussion. Both the relatedness of the sentences (the story condition) and the inclusion of the words in sentences helped memory. An interaction showed that the story condition had a greater effect on the last words than on the separate words. The beneficial effect of the story was present in all serial positions, and the effects remained in delayed recall. When the sentences formed stories, performance in verifying the statements about sentence context was also better. This, as well as the differing distributions of errors across the experimental conditions, suggests that different levels of representation are in use in the different conditions. In the story condition, these representations could take the form of an organized memory structure, a situation model. The other working memory tasks had only a few weak correlations with the story task, which could indicate that different processes are in use in the tasks. The results do not support short-term phonological storage, but are instead compatible with the words being encoded into LTM during the task.
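As a rough illustration of the trial structure described above, the following sketch (ours; the names and exact flow are assumptions based on the abstract, not the authors' experiment code) shows one trial in Python: five sentences presented word by word at 500 ms per word, a verification statement after each sentence, and serial recall of the sentence-final words at the end.

import time

WORD_DURATION = 0.5  # 500 ms per word, as stated in the abstract

def present_word(word):
    # Stand-in for on-screen, one-word-at-a-time presentation.
    print(word)
    time.sleep(WORD_DURATION)

def run_trial(sentences, statements):
    # sentences: five sentences that either form a story or do not;
    # statements: one verification statement per sentence.
    final_words = []
    for sentence, statement in zip(sentences, statements):
        for word in sentence.split():
            present_word(word)
        final_words.append(sentence.split()[-1])
        input(statement + " (y/n): ")  # sentence verification
    recalled = input("Recall the final words in order: ").split()
    # Serial scoring: a word counts only if recalled in its correct position.
    return sum(r == t for r, t in zip(recalled, final_words))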
Abstract:
From Arithmetic to Algebra. Changes in skills in the comprehensive school over 20 years. In recent decades mathematics teaching has emphasized the understanding of calculation. Many studies have found that better understanding helps to apply skills in new conditions and that the ability to think on an abstract level increases transfer to new contexts. In my research I treat competence as a matrix, with content on the horizontal axis and levels of thinking on the vertical axis. Know-how here means intellectual and strategic flexibility and understanding. The resources and limitations of memory affect learning in different ways in different phases; therefore both flexible conceptual thinking and automatization must be considered in learning. The research questions I examine are what kinds of changes have occurred in mathematical skills in the comprehensive school over the last 20 years, and what kind of conceptual thinking is demonstrated by students in this decade. The study consists of two parts. The first part is a statistical analysis of mathematical skills and their changes over the last 20 years in the comprehensive school. In the test the pupils did not use calculators. The second part is a qualitative analysis of the conceptual thinking of pupils in the comprehensive school in this decade. The study shows significant differences in algebra and in some parts of arithmetic. The largest differences were detected in calculation skills with fractions. In the 1980s two out of three pupils were able to complete tasks with fractions, but in the 2000s only one out of three pupils was able to do the same tasks. Also remarkable is that of the pupils who could complete the tasks with fractions, only one out of three was on the conceptual level in his or her thinking. This means that about 10% of pupils are able to understand an algebraic expression that has the same isomorphic structure as the corresponding arithmetical expression. This finding is important because the ability to think innovatively is created when learning the basic concepts. Keywords: arithmetic, algebra, competence
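The "isomorphic structure" mentioned at the end can be illustrated with a standard example (ours, not taken from the study): the arithmetical calculation

\[
\frac{1}{2} + \frac{1}{3} = \frac{3+2}{6} = \frac{5}{6}
\]

has the same structure as the algebraic identity

\[
\frac{1}{a} + \frac{1}{b} = \frac{a+b}{ab},
\]

and thinking on the conceptual level means recognizing the second as the general form of the first.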
Abstract:
Nuclear magnetic resonance (NMR) spectroscopy provides us with many means to study biological macromolecules in solution. Proteins in particular are the most intriguing targets for NMR studies. Protein functions are usually ascribed to specific three-dimensional structures, but more recently tails, long loops and non-structural polypeptides have also been shown to be biologically active. Examples include prions, α-synuclein, amylin and the HIV protein Nef. However, conformational preferences in coil-like molecules are difficult to study by traditional methods. Residual dipolar couplings (RDCs) have opened up new opportunities; however, their analysis is not trivial. Here we show how to interpret RDCs from these weakly structured molecules. The most notable residual dipolar couplings arise from steric obstruction effects. In dilute liquid crystalline media, as well as in anisotropic gels, polypeptides encounter nematogens. The shape of a polypeptide conformation limits the encounter with the nematogen: the most elongated conformations may come closest, whereas the most compact remain furthest away. As a result there is slightly more room in the solution for the extended than for the compact conformations. This conformation-dependent concentration effect leads to a bias in the measured data. The measured values are not arithmetic averages but essentially weighted averages over conformations. The overall effect can be calculated for random flight chains and simulated for more realistic molecular models. Earlier there was an implicit assumption that weakly structured or non-structural molecules would not yield any observable residual dipolar couplings. However, in the pioneering study by Shortle and Ackerman, RDCs were clearly observed. We repeated the study for urea-denatured protein at high temperature and also indisputably observed RDCs. This was very convincing to us, but we could not accept the proposed reason for the non-zero RDCs, namely that some residual structure would be left in a protein that to our understanding was fully denatured. We proceeded to gain understanding via simulations and elementary experiments. In the measurements we used simple homopolymers with only two labelled residues, and we simulated the data to learn more about the origin of the RDCs. We realized that RDCs depend on the position of the residue as well as on the length of the polypeptide. These investigations resulted in a theoretical model for RDCs from coil-like molecules. Later we extended the studies with molecular dynamics. Somewhat surprisingly, the effects are small for non-structured molecules, whereas the bias may be large for a small compact protein. All in all, the work gave clear and unambiguous results on how to interpret RDCs as structural and dynamic parameters of weakly structured proteins.
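The distinction between an arithmetic and a weighted average made above can be written out schematically (our notation): if conformation i produces a dipolar coupling D_i and receives a steric weight w_i that grows with how closely its shape lets it approach the nematogen, the measured value is

\[
\langle D \rangle \;=\; \frac{\sum_i w_i D_i}{\sum_i w_i}
\qquad\text{rather than}\qquad
\bar{D} \;=\; \frac{1}{N}\sum_i D_i,
\]

with elongated conformations carrying slightly larger weights than compact ones.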
Abstract:
After Gödel's incompleteness theorems and the collapse of Hilbert's programme, Gerhard Gentzen continued the quest for consistency proofs of Peano arithmetic. He considered a finitistic or constructive proof still possible and necessary for the foundations of mathematics. For a proof to be meaningful, the principles relied on should be considered more reliable than the doubtful elements of the theory concerned. He worked out a total of four proofs between 1934 and 1939. This thesis examines Gentzen's consistency proofs for arithmetic from different angles. The consistency of Heyting arithmetic is shown both in a sequent calculus notation and in natural deduction. The former proof includes a cut elimination theorem for the calculus and a syntactical study of the purely arithmetical part of the system. The latter, a consistency proof in standard natural deduction, addresses a problem that has been open since the publication of Gentzen's proofs. The solution to this problem for an intuitionistic calculus is based on a normalization proof by Howard. The proof is carried out in the manner of Gentzen, by giving a reduction procedure for derivations of falsity. In contrast to Gentzen's proof, the procedure contains a vector assignment. The reduction decreases the first component of the vector, and this component can be interpreted as an ordinal less than epsilon_0, thus ordering the derivations by complexity and proving the termination of the process.
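Schematically (our rendering, not Gentzen's or the thesis's notation), the termination argument assigns to each derivation d of falsity a vector o(d) whose first component is an ordinal below

\[
\varepsilon_0 \;=\; \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\},
\]

and shows that every reduction step strictly decreases that component. Since the ordinals below epsilon_0 are well-ordered, no infinite reduction sequence exists; and since every derivation of falsity admits a further reduction, no derivation of falsity can exist at all.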
Abstract:
This thesis attempts to improve the models for predicting forest stand structure for practical use, e.g. for forest management planning (FMP) purposes in Finland. Comparisons were made between the Weibull and Johnson's SB distributions and between alternative regression estimation methods. The data used for the preliminary studies were local, but the final models were based on representative data. The models were validated mainly in terms of bias and RMSE in the main stand characteristics (e.g. volume), using independent data. The bivariate SBB distribution model was used to mimic realistic variation in tree dimensions by including within-diameter-class height variation. With the traditional method, the diameter distribution with the expected height resulted in reduced height variation, whereas the alternative bivariate method utilized the error term of the height model. The lack of models for FMP was covered to some extent by the models for peatland and juvenile stands. The validation of these models showed that the more sophisticated regression estimation methods provided slightly improved accuracy. A flexible prediction and application system for stand structure consisted of seemingly unrelated regression models for eight stand characteristics, the parameters of three optional distributions, and Näslund's height curve. The cross-model covariance structure was used in a linear prediction application, in which the expected values of the models were calibrated with the known stand characteristics. This provided a framework for validating the optional distributions and the optional sets of stand characteristics. The height distribution is recommended for the earliest stage of stands because of its continuity. From a mean height of about 4 m, the Weibull dbh-frequency distribution is recommended in young stands if the input variables consist of arithmetic stand characteristics. In advanced stands, basal area-dbh distribution models are recommended. Näslund's height curve proved useful. Some efficient transformations of stand characteristics are introduced, e.g. the shape index, which combines the basal area, the stem number and the median diameter. The shape index enabled the SB model for peatland stands to detect the large variation in stand densities. This model also demonstrated reasonable behaviour for stands on mineral soils.
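For reference, the two standard forms named in this abstract are as follows (textbook forms; the thesis may use variants, e.g. a three-parameter Weibull). The two-parameter Weibull density for diameter d is

\[
f(d) = \frac{c}{b}\left(\frac{d}{b}\right)^{c-1} e^{-(d/b)^{c}}, \qquad d > 0,
\]

with scale b and shape c, and Näslund's height curve relates tree height h (m) to diameter at breast height d (cm) as

\[
h = 1.3 + \frac{d^{2}}{(\alpha + \beta d)^{2}},
\]

where 1.3 m is breast height and α and β are estimated parameters.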
Abstract:
This monograph describes the emergence of independent research on logic in Finland. The emphasis is placed on three well-known students of Eino Kaila: Georg Henrik von Wright (1916-2003), Erik Stenius (1911-1990), and Oiva Ketonen (1913-2000), and on their research between the early 1930s and the early 1950s. The early academic work of these scholars laid the foundations for today's strong tradition in logic in Finland and also became internationally recognized. However, these works have not later received due attention, nor have they been comprehensively presented together. Each chapter of the book focuses on the life and work of one of Kaila's aforementioned students, with a fourth chapter discussing works on logic by authors who would later become known within other disciplines. Through an extensive use of correspondence and other archived material, some insight has been gained into the persons behind the academic personae. Unique and unpublished biographical material has been available for this task. The chapter on Oiva Ketonen focuses primarily on his work on what is today known as proof theory, especially on his proof-theoretical system with invertible rules, which permits a terminating root-first proof search. The independence of the parallel postulate is proved as an example of the strength of root-first proof search. Ketonen was, to our knowledge, the only student of Gerhard Gentzen, the 'father' of proof theory. Correspondence and a hitherto unavailable autobiographical manuscript, in addition to an unpublished article on the relationship between logic and epistemology, are presented. The chapter on Erik Stenius discusses his work on paradoxes and set theory, more specifically on how a rigid theory of definitions is employed to avoid these paradoxes. A presentation by Paul Bernays on Stenius' attempt at a proof of the consistency of arithmetic is reconstructed on the basis of Bernays' lecture notes. Stenius' correspondence with Paul Bernays, Evert Beth, and Georg Kreisel is discussed. The chapter on Georg Henrik von Wright presents his early work on probability and epistemology, along with his later work on modal logic, which made him internationally famous. Correspondence from various archives (especially with Kaila and Charlie Dunbar Broad) sheds further light on his academic achievements and his experiences during the challenging circumstances of the 1940s.
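As an illustration of the invertible rules at issue (standard sequent notation; our example, not reconstructed from the book), the right conjunction rule in a Ketonen-style calculus shares the context between the premisses,

\[
\frac{\Gamma \Rightarrow \Delta, A \qquad \Gamma \Rightarrow \Delta, B}{\Gamma \Rightarrow \Delta, A \wedge B}
\]

so that the premisses are derivable exactly when the conclusion is. Root-first proof search can therefore decompose a sequent without backtracking over rule choices, which is what makes a terminating search possible.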