66 results for Conception of Philosophy
Abstract:
According to Meno's paradox, we cannot inquire into what we do not know, because we do not know what we are inquiring into. There are many ways to interpret the paradox, but the central issue about our ability to reach truth is a profound one. In the dialogue Meno, Plato presents the paradox and an outline of a solution which enables us to reach knowledge (epistēmē) through philosophical discussion. During the last century the Meno has often been considered transitional between Socratic thinking and Plato's own philosophy, and thus the dialogue has not been adequately interpreted as an integrated whole. Therefore the distinctive epistemology of the dialogue has not gained due notice. In this thesis the dialogue is analysed as an integrated whole, and the philosophical interpretation also takes into account its dramatic features. The thesis emphasises the role of language and definitions in acquiring knowledge. Among the results concerning these subjects is a new interpretation of Socrates's definition of shape (schēma). The theory of anamnēsis (all learning is recollection) in the Meno is argued to answer the paradox philosophically, although Plato's presentation also contains playful and ironic elements. The background of the way Plato presents his case is that he appreciated the fact that no argument can plausibly demonstrate that argumentation is able to reach truth. In the Meno, Plato makes the earliest explicit distinction between knowledge and true belief in the history of Western philosophy. He also gives a definition of knowledge which is the basis of the so-called classical definition of knowledge as justified true belief. In the Meno, true beliefs become knowledge when someone ties them down by reasoning about the explanation.
The analysis of the epistemology of the dialogue from this perspective gives an interpretation which integrates the central concepts of the epistemology in the dialogue (elenchos, anamnēsis, and hypothetical inquiry) into a unified whole containing a plausible argument according to which the ignorant can reach knowledge through discussion. The conception that emerges from such an analysis is interesting both from the point of view of current interests and from that of the history of philosophy. The method of knowledge acquisition in the Meno can, for example, be seen as a predecessor of modern scientific methods. The Meno is the earliest Greek mathematical text that has survived in its original form. The analysis presented in the thesis of the geometric passages in the dialogue provides new results both concerning Socrates's geometry lesson with the slave and the example presenting the hypothetical method. Concerning the latter, a new interpretation is presented.
Keywords: anamnēsis, epistēmē, knowledge, Meno's paradox, Plato
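The geometry lesson referred to above is the well-known doubling-the-square passage (Meno 82b–85b). As a brief sketch of the arithmetic at stake (a standard reconstruction, not the thesis's own analysis):

```latex
% A square of side $a$ has area $a^2$. The slave first guesses that
% doubling the side doubles the area, but
(2a)^2 = 4a^2 \neq 2a^2 .
% The correct construction builds the new square on the diagonal $d$
% of the original square, since by the Pythagorean relation
d^2 = a^2 + a^2 = 2a^2 ,
% so the square on the diagonal has exactly twice the original area.
```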
Abstract:
This study discusses legal interpretation. The question is how legal texts, for instance laws, statutes and regulations, can and do have meaning. Language makes interpretation difficult, as it holds no definite meanings. When the theoretical connection between semantics and legal meaning is loosened and we realise that language cannot be a means of justifying legal decisions, the responsibility inherent in legal interpretation can be seen in full. We are thus compelled to search for ways to analyse this responsibility. The main argument of the book is that the responsibility of legal interpretation contains a responsibility towards the text that is interpreted (and, through the mediation of the text, also towards the legal system), but not only this. It is not simply a responsibility to read and read well; it extends more broadly. It includes responsibility for the effects of the interpretation in a particular situation and with regard to the people whose case is decided. Ultimately, it is a responsibility to do justice. These two aspects of responsibility are conceptualised here as the two dimensions of the ethics of legal interpretation: the textual and the situational. The basic conception of language presented here is provided by Ludwig Wittgenstein's later philosophy, but the argument is not committed to only one philosophical tradition. Wittgenstein can be counterpointed in interesting ways by Jacques Derrida's ideas on language and meaning. Derrida's work also functions as a contrast to hermeneutic theories. It is argued that the seed of an answer to the question of meaning lies in the inter-personal and situated activity of interpretation and communication, an idea that can be discerned in different ways in the works of Wittgenstein, Derrida and Hans-Georg Gadamer. In this way the question of meaning naturally leads us to think about ethics, which is approached here through the philosophy of Emmanuel Levinas.
His thinking, focusing on topics such as otherness, friendship and hospitality, provides possibilities for answering some of the questions posed in this book. However, at the same time we move inside a normativity where ethics and politics come together in many ways. The responsibility of legal interpretation is connected to the political and this has to be acknowledged lest we forget that law always implies force. But it is argued here that the political can be explored in positive terms as it does not have to mean only power or violence.
Abstract:
This paper challenges the Kripkean interpretation of a posteriori necessities. It will be demonstrated, by an analysis of classic examples, that the modal content of supposed a posteriori necessities is more complicated than the Kripkean line suggests. We will see that further research is needed concerning the a priori principles underlying all a posteriori necessities. In the course of this analysis it will emerge that the modal content of a posteriori necessities can be best described in terms of a Finean conception of modality – by giving essences priority over modality. The upshot of this is that we might be able to establish the necessity of certain supposed a posteriori necessities by a priori means.
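The classic examples referred to here (e.g. "Hesperus is Phosphorus") rest on a standard Kripkean argument schema; the following is the textbook reconstruction, not the paper's own formulation:

```latex
\begin{align*}
& a = b && \text{(established a posteriori)} \\
& \forall x \forall y \,(x = y \rightarrow \Box\, x = y) && \text{(necessity of identity, a priori)} \\
& \therefore\ \Box\, a = b && \text{(an a posteriori necessity)}
\end{align*}
```

The a priori second premise is exactly the kind of underlying principle whose role the paper argues needs further investigation.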
Poetics of the Nameless Middle: Japan and the West in Philosophy and Music of the Twentieth Century
Abstract:
This study investigates the affinities between philosophy, aesthetics, and music of Japan and the West. The research is based on the structuralist notion (specifically, that found in the narratology of Algirdas Julius Greimas) that a universal grammar functions as an abstract principle underlying all kinds of discourse. The study thus aims to demonstrate how this grammar is manifested in philosophical, aesthetic, and musical texts and how the semiotic homogeneity of these texts can be explained on this basis. Totality and belongingness are the key philosophical concepts presented herein. As distinct from logocentrism, manifested as substantializations of the world of ideas, god, or mind, which was characteristic of previous Western paradigms, totality was defined as the coexistence of opposites. Thus Heidegger, Merleau-Ponty, Dōgen, and Nishida often illustrated it by identifying fundamental polarities, such as being and nothing, seer and seen, truth and illusion, etc. Accordingly, totality was schematically presented as an all-encompassing middle of the semiotic square. Similar values can be found in aesthetics and the arts. Instead of dialectic syntagms, differentiated unity is considered paradigmatic, and the study demonstrates how this is manifested in traditional Japanese and Heideggerian aesthetics, as well as in aspects of the music of Claude Debussy and Tōru Takemitsu.
Abstract:
This study examines the Chinese press discussion about democratic centralism in 1978-1981 in newspapers, political journals and academic journals distributed nationwide. It is thus a study of intellectual trends during the Hua Guofeng period and of methods, strategies, and techniques of public political discussion of the time. In addition, this study presents democratic centralism as a comprehensive theory of democracy and evaluates this theory. It compares the Chinese theory of democratic centralism with Western traditions of democracy, not only with the standard liberal theory but also with traditions of participatory and deliberative democracy, in order to evaluate whether the Chinese theory of democratic centralism forms a legitimate theory of democracy. It shows that the Chinese theory comes close to participatory types of democracy and shares a conception of democracy as communication with the theory of deliberative democracy. Therefore, the Chinese experience provides some empirical evidence of the practicability of these traditions of democracy. Simultaneously, this study uses experiences of participatory democracies outside of China to explain some earlier findings about the Chinese practices. This dissertation also compares Chinese theory with some common Western theories and models of Chinese society as well as with Western understandings of Chinese political processes. It thus aims at opening more dialogue between Chinese and Western political theories and understandings about Chinese polity. This study belongs to scholarly traditions of the history of ideas, political philosophy, comparative politics, and China studies. 
The main finding of this study is that the Chinese theory of democratic centralism is essentially a theory about democracy, but whether its scrupulous practice alone would be sufficient to make a country a democracy depends on which established definition of democracy one applies and on what kind of democratic deficits one is prepared to accept within a truly democratic system. Nevertheless, since the Chinese theory of democratic centralism fits well with some established definitions of democracy, and since democratic deficits are a reality in all actual democracies, the Chinese themselves are talking about democracy in terms acceptable to Western political philosophy as well.
Models as epistemic artefacts: Toward a non-representationalist account of scientific representation
Abstract:
The purpose of this study is to analyze and develop various forms of abduction as a means of conceptualizing processes of discovery. Abduction was originally presented by Charles S. Peirce (1839-1914) as a "weak", third main mode of inference -- besides deduction and induction -- one which, he proposed, is closely related to many kinds of cognitive processes, such as instincts, perception, practices and mediated activity in general. Both abduction and discovery are controversial issues in philosophy of science. It is often claimed that discovery cannot be a proper subject area for conceptual analysis and, accordingly, abduction cannot serve as a "logic of discovery". I argue, however, that abduction gives essential means for understanding processes of discovery although it cannot give rise to a manual or algorithm for making discoveries. In the first part of the study, I briefly present how the main trend in philosophy of science has, for a long time, been critical towards a systematic account of discovery. Various models have, however, been suggested. I outline a short history of abduction; first Peirce's evolving forms of his theory, and then later developments. Although abduction has not been a major area of research until quite recently, I review some critiques of it and look at the ways it has been analyzed, developed and used in various fields of research. Peirce's own writings and later developments, I argue, leave room for various subsequent interpretations of abduction. The second part of the study consists of six research articles. First I treat "classical" arguments against abduction as a logic of discovery. I show that by developing strategic aspects of abductive inference these arguments can be countered. Nowadays the term 'abduction' is often used as a synonym for the Inference to the Best Explanation (IBE) model. 
I argue, however, that it is useful to distinguish between IBE ("Harmanian abduction") and "Hansonian abduction", the latter concentrating on analyzing processes of discovery. The distinctions between loveliness and likeliness, and between potential and actual explanations, are more fruitful within Hansonian abduction. I clarify the nature of abduction by using Peirce's distinction between three areas of "semeiotic": grammar, critic, and methodeutic. Grammar (emphasizing "Firstnesses" and iconicity) and methodeutic (i.e., a processual approach) especially give new means for understanding abduction. Peirce himself held the controversial view that new abductive ideas are products of an instinct and an inference at the same time. I maintain that it is beneficial to make a clear distinction between abductive inference and abductive instinct, on the basis of which both can be developed further. Besides these, I analyze abduction as a part of distributed cognition, which emphasizes long-term interaction with the material, social and cultural environment as a source of abductive ideas. This approach suggests a "trialogical" model in which inquirers are fundamentally connected both to other inquirers and to the objects of inquiry. As for the classical Meno paradox about discovery, I show that abduction provides more than one answer. As my main example of abductive methodology, I analyze the process of Ignaz Semmelweis's research on childbed fever. A central basis for abduction is the claim that discovery is not a sequence of events governed only by chance. Abduction treats those processes which both constrain and instigate the search for new ideas, beginning with the use of clues and continuing to considerations such as elegance and 'loveliness'. The study thus continues a Peircean-Hansonian research programme by developing abduction as a way of analyzing processes of discovery.
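The forms of abduction discussed above all elaborate Peirce's canonical schema for abductive inference (Collected Papers 5.189), which can be set out as:

```latex
\begin{align*}
& \text{The surprising fact, } C \text{, is observed;} \\
& \text{but if } A \text{ were true, } C \text{ would be a matter of course;} \\
& \text{hence, there is reason to suspect that } A \text{ is true.}
\end{align*}
```

Note that the schema licenses only a reason to suspect $A$, not its truth, which is why abduction is a logic of discovery rather than of justification.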
Quantum Metaphysics: The Role of Human Beings within the Paradigms of Classical and Quantum Physics
Abstract:
This work offers a systematic phenomenological investigation of the constitutive significance of embodiment. It provides detailed analyses of subjectivity in relation to itself, to others, and to objective reality, and it argues that these basic structures cannot be made intelligible unless one takes into account how they are correlated with an embodied subject. The methodological and conceptual starting point of the treatise is the philosophy of Edmund Husserl. The investigation employs the phenomenological method and uses the descriptions and analyses provided by Husserl and his successors. The treatise is motivated and outlined systematically, and textual exegesis serves as a means for the systematic phenomenological investigation. The structure of the work conforms to the basic relations of subjectivity. The first part of the thesis explores the intimate relation between lived-body and selfhood, analyzes the phenomena of localization, and argues that self-awareness is necessarily and fundamentally embodied self-awareness. The second part examines the intersubjective dimensions of embodiment, investigates the corporal foundations of empathy, and unravels the bodily aspects of transcendental intersubjectivity. The third part scrutinizes the role of embodiment in the constitution of the surrounding objective reality: it focuses on the complex relationship between transcendental subjectivity and transcendental intersubjectivity, carefully examines the normative aspects of genetic and generative self-constitution, and argues eventually that what Husserl calls the paradox of subjectivity originates in a tension between primordial and intersubjective normativity. The work thus reinterprets the paradox of subjectivity in terms of a normative tension, and claims that the paradox is ultimately rooted in the structures of embodiment. 
In this manner, as a whole, the work discloses the constitutive significance of embodiment, and argues that transcendental subjectivity must be fundamentally embodied.
Abstract:
One of the most fundamental questions in the philosophy of mathematics concerns the relation between truth and formal proof. The position according to which the two concepts are the same is called deflationism, and the opposing viewpoint substantialism. In an important result of mathematical logic, Kurt Gödel proved in his first incompleteness theorem that all consistent formal systems containing arithmetic include sentences that can neither be proved nor disproved within that system. However, such undecidable Gödel sentences can be established to be true once we expand the formal system with Alfred Tarski's semantical theory of truth, as shown by Stewart Shapiro and Jeffrey Ketland in their semantical arguments for the substantiality of truth. According to them, in Gödel sentences we have an explicit case of true but unprovable sentences, and hence deflationism is refuted. Against that, Neil Tennant has shown that instead of Tarskian truth we can expand the formal system with a soundness principle, according to which all provable sentences are assertable, and the assertability of Gödel sentences follows. This way, the relevant question is not whether we can establish the truth of Gödel sentences, but whether Tarskian truth is a more plausible expansion than a soundness principle. In this work I will argue that this problem is best approached once we think of mathematics as a full human phenomenon, and not just as consisting of formal systems. When pre-formal mathematical thinking is included in our account, we see that Tarskian truth is in fact not an expansion at all. I claim that what proof is to formal mathematics, truth is to pre-formal thinking, and the Tarskian account of semantical truth mirrors this relation accurately. However, the introduction of pre-formal mathematics is vulnerable to the deflationist counterargument that, while existing in practice, pre-formal thinking could still be philosophically superfluous if it does not refer to anything objective.
Against this, I argue that all truly deflationist philosophical theories lead to the arbitrariness of mathematics. In all other philosophical accounts of mathematics there is room for pre-formal mathematics to have a reference, and the expansion to Tarskian truth can be made naturally. Hence, if we reject the arbitrariness of mathematics, I argue in this work, we must accept the substantiality of truth. Related subjects such as neo-Fregeanism will also be covered, and shown not to change the need for Tarskian truth. The only remaining route for the deflationist is to change the underlying logic so that our formal languages can include their own truth predicates, which Tarski showed to be impossible for classical first-order languages. With such logics we would have no need to expand the formal systems, and the above argument would fail. Of the alternative approaches, in this work I focus mostly on the Independence Friendly (IF) logic of Jaakko Hintikka and Gabriel Sandu. Hintikka has claimed that an IF language can include its own adequate truth predicate. I argue that while this is indeed the case, we cannot recognize the truth predicate as such within the same IF language, and the need for Tarskian truth remains. In addition to IF logic, second-order logic and Saul Kripke's approach using Kleenean logic will also be shown to fail in a similar fashion.
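The formal apparatus at issue in this abstract can be summarised schematically; these are the standard textbook formulations, not the thesis's own notation:

```latex
% The Goedel sentence $G$ is provably equivalent to the claim of its
% own unprovability in the system:
G \leftrightarrow \neg \mathrm{Prov}(\ulcorner G \urcorner).
% Tarski's T-schema, which an adequate truth predicate must satisfy
% for every sentence $\varphi$ of the language:
T(\ulcorner \varphi \urcorner) \leftrightarrow \varphi .
% Tarski's undefinability theorem: no consistent theory extending
% arithmetic can define a predicate $T$ satisfying every instance of
% this schema for its own classical first-order language.
```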
Abstract:
The aim of this study is to examine the relationship of the Roman villa to its environment. The villa was an important feature of the countryside intended both for agricultural production and for leisure. Manuals of Roman agriculture give instructions on how to select a location for an estate. The ideal location was a moderate slope facing east or south in a healthy area and good neighborhood, near good water resources and fertile soils. A road or a navigable river or the sea was needed for transportation of produce. A market for selling the produce, a town or a village, should have been nearby. The research area is the surroundings of the city of Rome, a key area for the development of the villa. The materials used consist of archaeological settlement sites, literary and epigraphical evidence as well as environmental data. The sites include all settlement sites from the 7th century BC to 5th century AD to examine changes in the tradition of site selection. Geographical Information Systems were used to analyze the data. Six aspects of location were examined: geology, soils, water resources, terrain, visibility/viewability and relationship to roads and habitation centers. Geology was important for finding building materials and the large villas from the 2nd century BC onwards are close to sources of building stones. Fertile soils were sought even in the period of the densest settlement. The area is rich in water, both rainfall and groundwater, and finding a water supply was fairly easy. A certain kind of terrain was sought over very long periods: a small spur or ridge shoulder facing preferably south with an open area in front of the site. The most popular villa resorts are located on the slopes visible from almost the entire Roman region. A visible villa served the social and political aspirations of the owner, whereas being in the villa created a sense of privacy. The area has a very dense road network ensuring good connectivity from almost anywhere in the region. 
The best visibility/viewability, dense settlement and most burials by roads coincide, creating a good neighborhood. The locations featuring the most qualities cover nearly a quarter of the area and more than half of the settlement sites are located in them. The ideal location was based on centuries of practical experience and rationalized by the literary tradition.
Abstract:
This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies on models and simulations in philosophy of science, so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. This study maintains a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed in this study are: 1) How are models constructed and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool and why is this process problematic? The core argument is that mediating models as investigative instruments (cf. Morgan and Morrison 1999) take questions as a starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer the questions that initiated the model building. This account develops further the "epistemology of simulation" (cf. Winsberg 2003) by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain.
The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework to study the changes in the degree of interdisciplinarity, the tools and research practices developed to support the collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in models as interdisciplinary objects, the third research problem seeks to answer the question of how we might characterise these objects, what is typical of them, and what kinds of changes happen in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in opposition to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.
Abstract:
The subject of this doctoral thesis is the analysis and interpretation of instrumental pieces composed by Einojuhani Rautavaara (b. 1928) that have been given angelic titles: Archangel Michael Fighting the Antichrist from the suite Icons (1955)/Before the Icons (2006), Angels and Visitations (1978), the Double Bass Concerto Angel of Dusk (1980), Playgrounds for Angels (1981) and the Seventh Symphony Angel of Light (1994). The aim of the work is to find those musical elements common to these pieces that distinguish them from Rautavaara's other works, and to determine whether they could be thought of as a series. I show that behind the common elements and titles stands the same extramusical idea, the figure of an angel, which the composer has described in his commentaries. The thesis is divided into three parts. Since all of the compositions possess titles that refer to the spiritual symbol of an angel, the first part offers a theoretical background to demonstrate the significant role played by angels in various religions and beliefs, and the means by which music has attempted to represent this symbol throughout history. This background also traces Rautavaara's aesthetic attitude as a spiritual composer whose output can be studied with reference to his extramusical interests, including literature, psychology, painting, philosophy and myths. The second part focuses on the analysis of the instrumental compositions with angelic titles, without giving consideration to their commentaries and titles. The analyses concentrate in particular on those musical features that distinguish these pieces from Rautavaara's other compositions. In the third part these musical features are interpreted as symbols of the angel through comparison with vocal and instrumental pieces which contain references to the character of an angel, structures of mythical narration, special musical expressions, use of instruments and aspects of brightness.
Finally, I explore the composer's interpretative codes, drawing on Rilke's cycle of ten poems, the Duino Elegies, and Jung's theory of archetypes, and analyze the instrumental pieces with angelic titles in the light of the theory of musical ekphrasis.
Abstract:
This study is an inquiry into three related topics in Aristotle’s psychology: the perception of seeing, the perception of past perception, and the perception of sleeping. Over the past decades, Aristotle’s account of the perception of perception has been studied in numerous articles and chapters of books. However, there is no monograph that attempts to give a comprehensive analysis of this account and to assess its relation and significance to Aristotle’s psychological theory in general as well as to other theories pertaining to the topics (e.g. theories of consciousness), be they ancient, medieval, modern, or contemporary. This study intends to fill this gap and to further the research into Aristotle’s philosophy and into the philosophy of mind. The present study is based on an accurate analysis of the sources, on their Platonic background, and on later interpretations within the commentary tradition up to the present. From a methodological point of view, this study represents systematically orientated research into the history of philosophy, in which special attention is paid to the philosophical problems inherent in the sources, to the distinctions drawn, and to the arguments put forward as well as to their philosophical assessment. In addition to contributing many new findings concerning the topics under discussion, this study shows that Aristotle’s account of the perception of perception substantially differs from many later theories of consciousness. This study also suggests that Aristotle be regarded as a consistent direct realist, not only in respect of sense perception, but also in respect of memory.