28 results for arguments against
in Helda - Digital Repository of University of Helsinki
Abstract:
The purpose of this study is to analyze and develop various forms of abduction as a means of conceptualizing processes of discovery. Abduction was originally presented by Charles S. Peirce (1839-1914) as a "weak", third main mode of inference -- besides deduction and induction -- one which, he proposed, is closely related to many kinds of cognitive processes, such as instincts, perception, practices and mediated activity in general. Both abduction and discovery are controversial issues in philosophy of science. It is often claimed that discovery cannot be a proper subject area for conceptual analysis and, accordingly, that abduction cannot serve as a "logic of discovery". I argue, however, that abduction provides essential means for understanding processes of discovery, although it cannot yield a manual or algorithm for making discoveries. In the first part of the study, I briefly present how the main trend in philosophy of science has, for a long time, been critical towards a systematic account of discovery. Various models have, however, been suggested. I outline a short history of abduction: first Peirce's evolving forms of his theory, and then later developments. Although abduction has not been a major area of research until quite recently, I review some critiques of it and look at the ways it has been analyzed, developed and used in various fields of research. Peirce's own writings and later developments, I argue, leave room for various subsequent interpretations of abduction. The second part of the study consists of six research articles. First I treat "classical" arguments against abduction as a logic of discovery. I show that these arguments can be countered by developing the strategic aspects of abductive inference. Nowadays the term 'abduction' is often used as a synonym for the Inference to the Best Explanation (IBE) model. I argue, however, that it is useful to distinguish between IBE ("Harmanian abduction") and "Hansonian abduction", the latter concentrating on analyzing processes of discovery. The distinctions between loveliness and likeliness, and between potential and actual explanations, are more fruitful within Hansonian abduction. I clarify the nature of abduction by using Peirce's distinction between three areas of "semeiotic": grammar, critic, and methodeutic. Grammar (emphasizing "Firstnesses" and iconicity) and methodeutic (i.e., a processual approach), in particular, give new means for understanding abduction. Peirce himself held the controversial view that new abductive ideas are products of an instinct and an inference at the same time. I maintain that it is beneficial to make a clear distinction between abductive inference and abductive instinct, on the basis of which both can be developed further. Besides these, I analyze abduction as a part of distributed cognition, which emphasizes long-term interaction with the material, social and cultural environment as a source of abductive ideas. This approach suggests a "trialogical" model in which inquirers are fundamentally connected both to other inquirers and to the objects of inquiry. As for the classical Meno paradox about discovery, I show that abduction provides more than one answer. As my main example of abductive methodology, I analyze the process of Ignaz Semmelweis' research on childbed fever. A central basis for abduction is the claim that discovery is not a sequence of events governed only by processes of chance.
Abduction treats those processes which both constrain and instigate the search for new ideas, ranging from the use of clues at the start of discovery to considerations like elegance and 'loveliness'. The study then continues a Peircean-Hansonian research programme by developing abduction as a way of analyzing processes of discovery.
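For orientation, the schematic form in which Peirce himself cast abductive inference, the well-known formulation from his 1903 Harvard lectures (not quoted in the abstract itself), can be set out as follows:

% Peirce's schema of abduction (1903 Harvard Lectures, commonly cited as CP 5.189);
% reproduced here only as a reference point for the abstract above.
\begin{quote}
  The surprising fact, $C$, is observed; \\
  but if $A$ were true, $C$ would be a matter of course; \\
  hence, there is reason to suspect that $A$ is true.
\end{quote}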
Abstract:
This thesis is an assessment of the hoax hypothesis, mainly propagated in Stephen C. Carlson's 2005 monograph "The Gospel Hoax: Morton Smith's Invention of Secret Mark", which suggests that professor Morton Smith (1915-1991) forged Clement of Alexandria's letter to Theodore. Smith claimed to have discovered this letter as an 18th-century copy in the monastery of Mar Saba in 1958. The Introduction narrates Morton Smith's discovery story and traces the manuscript's whereabouts up to its apparent disappearance in 1990, followed by a brief history of scholarship on the MS and some methodological considerations. Chapters 2 and 3 deal with the arguments for the hoax (mainly by Stephen C. Carlson) and against it (mainly by Scott G. Brown). Chapter 2 looks at the MS in its physical aspects, and chapter 3 assesses its subject matter. I conclude that some of the details fit reasonably well with the hoax hypothesis, but on the whole the arguments against it are more persuasive. Carlson's use of QDE analysis (Questioned Document Examination), in particular, has many problems. Comparing the handwriting of Clement's letter to Morton Smith's handwriting, I conclude that there are some "repeated differences" between them, suggesting that Smith is not the writer of the disputed letter. Clement's letter to Theodore most likely derives from antiquity, though the exact details of its character are not discussed at length in this thesis. In Chapter 4 I take a special look at Stephen C. Carlson's arguments which propose that Morton Smith hid clues to his identity in the MS and the materials surrounding it. Comparing these alleged clues to known pseudoscientific works, I conclude that Carlson here utilizes methods normally reserved for building a conspiracy theory; thus Carlson's hoax hypothesis has serious methodological flaws with respect to these hidden clues. I construct a model of these questionable methods, titled "a boisterous pseudohistorical method", that contains three parts: 1) beginning with a question that already implicitly contains the answer, 2) considering that anything will do as evidence for the conspiracy theory, and 3) abandoning probability and thinking that literally everything is connected. I propose that Stephen C. Carlson utilizes these pseudoscientific methods in his unearthing of Morton Smith's "clues". Chapter 5 looks briefly at the literary genre I title the "textual puzzle thriller". Because even biblical scholarship follows the signs of the times, I propose that Carlson's hoax hypothesis has its literary equivalents in fiction, in titles like Dan Brown's "The Da Vinci Code", and in academic works, in titles like John Dart's "Decoding Mark". All of these are interested in solving textual puzzles, even though the methodological choices are not acceptable for scholarship. Thus the hoax hypothesis as a whole is either unpersuasive or plain bad science.
Abstract:
One of the most fundamental questions in the philosophy of mathematics concerns the relation between truth and formal proof. The position according to which the two concepts are the same is called deflationism, and the opposing viewpoint substantialism. In an important result of mathematical logic, Kurt Gödel proved in his first incompleteness theorem that all consistent formal systems containing arithmetic include sentences that can neither be proved nor disproved within that system. However, such undecidable Gödel sentences can be established to be true once we expand the formal system with Alfred Tarski's semantical theory of truth, as shown by Stewart Shapiro and Jeffrey Ketland in their semantical arguments for the substantiality of truth. According to them, in Gödel sentences we have an explicit case of true but unprovable sentences, and hence deflationism is refuted. Against that, Neil Tennant has shown that instead of Tarskian truth we can expand the formal system with a soundness principle, according to which all provable sentences are assertable, and the assertability of Gödel sentences follows. This way, the relevant question is not whether we can establish the truth of Gödel sentences, but whether Tarskian truth is a more plausible expansion than a soundness principle. In this work I will argue that this problem is best approached once we think of mathematics as the full human phenomenon, and not just as consisting of formal systems. When pre-formal mathematical thinking is included in our account, we see that Tarskian truth is in fact not an expansion at all. I claim that what proof is to formal mathematics, truth is to pre-formal thinking, and the Tarskian account of semantical truth mirrors this relation accurately. However, the introduction of pre-formal mathematics is vulnerable to the deflationist counterargument that while existing in practice, pre-formal thinking could still be philosophically superfluous if it does not refer to anything objective. Against this, I argue that all truly deflationist philosophical theories lead to the arbitrariness of mathematics. In all other philosophical accounts of mathematics there is room for a reference of pre-formal mathematics, and the expansion with Tarskian truth can be made naturally. Hence, if we reject the arbitrariness of mathematics, I argue in this work, we must accept the substantiality of truth. Related subjects such as neo-Fregeanism will also be covered, and shown not to change the need for Tarskian truth. The only remaining route for the deflationist is to change the underlying logic so that our formal languages can include their own truth predicates, which Tarski showed to be impossible for classical first-order languages. With such logics we would have no need to expand the formal systems, and the above argument would fail. Of the alternative approaches, in this work I focus mostly on the Independence Friendly (IF) logic of Jaakko Hintikka and Gabriel Sandu. Hintikka has claimed that an IF language can include its own adequate truth predicate. I argue that while this is indeed the case, we cannot recognize the truth predicate as such within the same IF language, and the need for Tarskian truth remains. In addition to IF logic, second-order logic and Saul Kripke's approach using Kleenean logic will also be shown to fail in a similar fashion.
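As a point of reference for the logical machinery this abstract relies on, the standard textbook statements of the two notions in play, a Gödel sentence and Tarski's truth schema, can be sketched as follows (background only, not material drawn from the thesis itself):

% First incompleteness theorem (standard form): for any consistent,
% recursively axiomatized theory $T$ containing enough arithmetic there is
% a sentence $G_T$ that $T$ neither proves nor refutes:
\[ T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T . \]
% Tarski's T-schema, the material adequacy condition on a truth predicate:
\[ \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi . \]
% By Tarski's undefinability theorem, no predicate definable within the same
% classical first-order language satisfies this schema for all of its own
% sentences, which is why adding a Tarskian truth predicate counts as a
% genuine expansion of the formal system.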
Abstract:
In this paper I offer a counterexample to the so-called vagueness argument against restricted composition. This will be done along the lines of a recent suggestion by Trenton Merricks, namely by challenging the claim that there cannot be a sharp cut-off point in a composition sequence. It will be suggested that causal powers which emerge when composition occurs can serve as an indicator of such sharp cut-off points. The main example will be the case of a heap. It seems that heaps might provide a very plausible counterexample to the vagueness argument if we accept the idea that four grains of sand is the least number required to compose a heap, a case that has been supported by W. D. Hart. My purpose here is not to put forward a new theory of composition; I only wish to refute the vagueness argument and point out that we should be wary of arguments of its form.
Abstract:
Colorectal cancer (CRC) is a major health concern and demands long-term efforts in developing strategies for screening and prevention. CRC has become a preventable disease as a consequence of a better understanding of colorectal carcinogenesis. However, current therapy is unsatisfactory and necessitates the exploration of other approaches for the prevention and treatment of cancer. Plant-based products have been recognized as preventive with regard to the development of colon cancer. Therefore, the potential chemopreventive use and mechanism of action of Lebanese natural products were evaluated. Towards this aim, the antitumor activity of Onopordum cynarocephalum and Centaurea ainetensis was studied using in vitro and in vivo models. In vitro, both crude extracts were non-cytotoxic to normal intestinal cells and inhibited the proliferation of colon cancer cells in a dose-dependent manner. In vivo, both crude extracts reduced the number of tumors by an average of 65% at weeks 20 (adenoma stage) and 30 (adenocarcinoma stage). The activity of the C. ainetensis extract was attributed to Salograviolide A, a guaianolide-type sesquiterpene lactone, which was isolated and identified through bio-guided fractionation. The mechanism of action of thymoquinone (TQ), the active component of Nigella sativa, was established in colon cancer cells using in vitro models. By the use of N-acetyl cysteine, a radical scavenger, the direct involvement of reactive oxygen species in TQ-induced apoptosis was established. The analytical detection of TQ from spiked serum and its protein binding were evaluated. The average recovery of TQ from spiked serum subjected to several extraction procedures was 2.5%, proving the inability of conventional methods to analyze TQ from serum. This has been explained by the extensive binding (>98%) of TQ to serum and to major serum components such as bovine serum albumin (BSA) and alpha-1-acid glycoprotein (AGP). Using mass spectrometry analysis, TQ was confirmed to bind covalently to the free cysteines at positions 34 and 147 of the amino acid sequences of BSA and AGP, respectively. The results of this work make available for future development new plants with anti-cancer activities and enhance the understanding of the pharmaceutical properties of TQ, a prerequisite for its future clinical development.
Abstract:
This study concentrates on the contested concept of pastiche in literary studies. It offers the first detailed examination of the history of the concept from its origins in the seventeenth century to the present, showing how pastiche emerged as a critical concept in interaction with the emerging conception of authorial originality and the copyright laws protecting it. One of the key results of this investigation is the contextualisation of the postmodern debate on pastiche. Even though postmodern critics often emphasise the radical novelty of pastiche, they in fact resuscitate older positions and arguments without necessarily reflecting on their historical conditions. This historical background is then used to analyse the distinction between the primarily French conception of pastiche as the imitation of style and the postmodern notion of it as the compilation of different elements. The latter's vagueness and inclusiveness detracts from its value as a critical concept. The study thus concentrates on the notion of stylistic pastiche, challenging the widespread prejudice that it is merely an indication of lack of talent. Because it is multiply based on repetition, pastiche is in fact a highly ambiguous or double-edged practice that calls into question the distinction between repetition and original, thereby undermining the received notion of individual unique authorship as a fundamental aesthetic value. Pastiche does not, however, constitute a radical upheaval of the basic assumptions on which the present institution of literature relies, since, in order to mark its difference, pastiche always refers to a source outside itself against which its difference is measured. Finally, the theoretical analysis of pastiche is applied to literary works. The pastiches written by Marcel Proust demonstrate how pastiche can become an integral part of a writer's poetics: imitation of style is shown to provide Proust with a way of exploring the role of style as a connecting point between inner vision and reality. The pastiches of the Sherlock Holmes stories by Michael Dibdin, Nicholas Meyer and the duo Adrian Conan Doyle and John Dickson Carr illustrate the functions of pastiche within a genre, detective fiction, that is itself fundamentally repetitive. A.S. Byatt's Possession and D.M. Thomas's Charlotte use Victorian pastiches to investigate the conditions of literary creation in the age of postmodern suspicion of creativity and individuality. The study thus argues that the concept of pastiche has valuable insights to offer to literary criticism and theory, and that literary pastiches, though often dismissed in reviews and criticism, are a particularly interesting object of study precisely because of their characteristic ambiguity.
Abstract:
Aptitude-based student selection: A study concerning the admission processes of some technically oriented healthcare degree programmes in Finland (Orthotics and Prosthetics, Dental Technology and Optometry). The data studied consisted of convenience samples of preadmission information and the results of the admission processes of three technically oriented healthcare degree programmes (Orthotics and Prosthetics, Dental Technology and Optometry) in Finland during the years 1977-1986 and 2003. The number of subjects tested and interviewed in the first samples was 191, 615 and 606, and in the second 67, 64 and 89, respectively. The questions of the six studies were: I. How were different kinds of preadmission data related to each other? II. Which were the major determinants of the admission decisions? III. Did the graduated students and those who dropped out differ from each other? IV. Was it possible to predict how well students would perform in the programmes? V. How was the student selection executed in the year 2003? VI. Should clinical vs. statistical prediction, or both, be used? (Some remarks are presented on Meehl's argument: "Always, we might as well face it, the shadow of the statistician hovers in the background; always the actuary will have the final word.") The main results of the study were as follows: Ability tests, dexterity tests and judgements of personality traits (communication skills, initiative, stress tolerance and motivation) provided unique, non-redundant information about the applicants. Available demographic variables did not bias the judgements of personality traits. In all three programme settings, four-factor solutions (personality, reasoning, gender-technical and age-vocational, with factor scores) could be extracted by the Maximum Likelihood method with graphical Varimax rotation. The personality factor dominated the final aptitude judgements and very strongly affected the selection decisions. There were no clear differences between graduated students and those who had dropped out in regard to the four factors. In addition, the factor scores did not predict how well the students performed in the programmes. Meehl's argument on the uncertainty of clinical prediction was supported by the results, which on the other hand did not provide any relevant data for rules on statistical prediction. No clear arguments for or against aptitude-based student selection were presented. However, the structure of the aptitude measures and their impact on the admission process are now better known. The concept of "personal aptitude" is not necessarily included in the values and preferences of those in charge of organizing the schooling. Thus, the most well-founded and cost-effective way to execute student selection is obviously to rely on e.g. the grade point averages of the matriculation examination and/or written entrance exams. This procedure, according to the present study, would result in a student group with a quite different makeup (60%) from the group selected on the basis of aptitude tests. For the recruiting organizations, instead, "personal aptitude" may be a matter of great importance. The employers, of course, decide on personnel selection. The psychologists, if consulted, are responsible for the proper use of psychological measures.
Abstract:
Candida yeast species are widespread opportunistic microbes, which usually remain harmless unless the systemic or local defense system of the host becomes compromised. When they adhere to a fertile substrate, such as a moist, warm, protein-rich human mucosal membrane or a biomaterial surface, they become activated and start to grow pseudohyphae and true hyphae. Their growth is intricately guided by their ability to detect surface defects (providing secure hiding; thigmotropism) and nutrients (a source of energy; chemotropism). The hypothesis of this work was that the body mobilizes both non-specific and specific host defense against invading candidal cells and that these interactions involve resident epithelial cells, rapidly responding non-specific protector neutrophils and mast cells, as well as the antigen-presenting and responding dendritic cell, lymphocyte and plasma cell system. It is supposed that Candida albicans, as a result of Darwinian pressure, has developed or is utilizing strategies to evade these host defense reactions, e.g. by adhering to biomaterial surfaces and biofilms. The aim of the study was to assess the host defense by focusing on key molecules of the anti-candidal defense which are also more or less characteristic of the main cellular players in candida-host cell interactions. As a model for candida-host interaction, sections of chronic hyperplastic candidosis were used and compared with sections of non-infected leukoplakia and healthy tissue. In this thesis work, neutrophil-derived anti-candidal α-defensin was found in the epithelium, not only diffusely throughout the epithelium but also as a strong α-defensin-rich superficial front, probably able to slow down or prevent penetration of candida into the epithelium. The neutrophil represents the main host defence cell in the epithelium, to which it can rapidly transmigrate from the circulation and where it forms organized multicellular units known as microabscesses (study I). Neutrophil-chemotactic interleukin-8 (IL-8) and its receptor (IL-8R) were studied and were surprisingly also found in the candidal cells, probably helping the candida to keep away from IL-8- and neutrophil-rich danger zones (study IV). Both leukocytes and resident epithelial cells contained TLR2, TLR4 and TLR6 receptors able to recognize candidal structures via receptors similar to the Toll of the fruit fly. It seems that candida can avoid host defence via stimulation of the candida-permissive TLR2 instead of the candida-injurious TLR4 (study V). TLRs also provide the danger signal to the immune system, without which it will not be activated to respond specifically against candidal antigens. Indeed, diseased sites contained receptor activator of nuclear factor kappa B ligand (RANKL; study II), which is important for the antigen-capturing, processing and presenting dendritic cells and for T lymphocyte activation (study III). Chronic hyperplastic candidosis provides a disease model that is very useful for studying local and systemic host factors which under normal circumstances restrain C. albicans to a harmless commensal state, but the failure of which in e.g. HIV infection, cancer and aging may lead to chronic infection.
Abstract:
Rituximab, a monoclonal antibody against the B-cell-specific CD20 antigen, is used for the treatment of non-Hodgkin lymphomas (NHL) and chronic lymphocytic leukemia. In combination with chemotherapeutics, rituximab has remarkably improved the outcome of NHL patients, but a wide variation in the lengths of remissions remains and the outcome of individual patients is difficult to predict. This thesis has searched for an explanation by studying the effector mechanisms of rituximab and by comparing gene expression in lymphoma tissue samples of patients with long- and short-term survival. This work demonstrated that activation of the complement (C) system is in vitro a more efficient effector mechanism of rituximab than cellular mechanisms or apoptosis. Activation of the C system was also shown in vivo during rituximab treatment. However, intravenously administered rituximab could not enter the cerebrospinal fluid, and neither C activation nor removal of lymphoma cells was observed in the central nervous system. In vitro cytotoxicity assays showed that rituximab-induced cell killing could be markedly improved with simultaneous neutralization of the C regulatory proteins CD46 (membrane cofactor protein), CD55 (decay-accelerating factor) and CD59 (protectin). In a retrospective study of follicular lymphoma (FL) patients, low lymphoma tissue mRNA expression of CD59 and CD55 was associated with a good prognosis, and in a prospective flow cytometry study high expression of CD20 relative to CD55 correlated with longer progression-free survival. Gene expression profile analysis revealed that the expression of certain genes, often related to the cell cycle, signal transduction or immune response, correlates with the clinical outcome of FL patients. Emphasizing the role of the tumor microenvironment, the best differentiating genes, Smad1 and EphA1, were demonstrated to be mainly expressed in the non-malignant cells of tumors. In conclusion, this thesis shows that activation of the C system is a clinically important effector mechanism of rituximab and that microenvironmental factors in tumors and the expression of C regulatory proteins markedly affect the efficacy of immunochemotherapy. These data can be used to identify more accurately the patients to whom immunochemotherapy is given. They may also be beneficial in the development of rituximab-containing and other monoclonal antibody therapies against cancer.
Abstract:
Marketing of goods under geographical names has always been common. Aims to prevent abuse have given rise to separate forms of legal protection for geographical indications (GIs) both nationally and internationally. The European Community (EC) has also gradually enacted its own legal regime to protect geographical indications. The legal protection of GIs has traditionally been based on the idea that geographical origin endows a product with exclusive qualities and characteristics. In today's world we are able to replicate almost any product anywhere, including its qualities and characteristics. One would think that this would preclude protection for most geographical names, yet the number of geographical indications seems to be rising. GIs are no longer what they used to be. In the EC it is no longer required that a product be endowed with exclusive characteristics by its geographical origin, as long as consumers associate the product with a certain geographical origin. This departure from the traditional protection of GIs is based on the premise that a geographical name extends beyond and exists apart from the product and therefore deserves protection itself. The thesis tries to clearly articulate the underlying reasons, justifications, principles and policies behind the protection of GIs in the EC and then scrutinise the scope and shape of the GI system in the light of its own justifications. The essential questions it attempts to answer are: (1) What is the basis and criteria for granting GI rights? (2) What is the scope of protection afforded to GIs? and (3) Are these both justified in the light of the functions and policies underlying the granting and protecting of GIs? Despite the differences, the actual functions of GIs are in many ways identical to those of trade marks. Geographical indications have a limited role as source and quality indicators in allowing consumers to make informed and efficient choices in the marketplace. In the EC this role is undermined by allowing ample room and discretion for uses that are arbitrary. Nevertheless, generic GIs are unable to play this role. The traditional basis for justifying legal protection seems implausible in most cases. Qualities and characteristics are more likely to be related to transportable skill and manufacturing methods than to the actual geographical location of production. Geographical indications are also incapable of protecting culture from market-induced changes. Protection against genericness, against any misuse, imitation and evocation, as well as against exploiting the reputation of a GI, seems to be there to protect the GI itself. Expanding or strengthening the already existing GI protection or using it to protect generic GIs cannot be justified with arguments on terroir or culture. The conclusion of the writer is that GIs themselves merit protection only in extremely rare cases and that usually only the source and origin function of GIs should be protected. The approach should not be any different from the one taken in trade mark law. GI protection should not be used as a means to monopolise names. At the end of the day, the scope of GI protection is nevertheless a policy issue.
Abstract:
What is a miracle and what can we know about miracles? A discussion of miracles in anglophone philosophy of religion literature since the late 1960s. The aim of this study is to systematically describe and philosophically examine the anglophone discussion on the subject of miracles since the latter half of the 1960s. The study focuses on two salient questions: firstly, what I will term the conceptual-ontological question of the extent to which we can understand miracles and, secondly, the epistemological question of what we can know about miracles. My main purpose in this study is to examine the various viewpoints that have been submitted in relation to these questions, how they have been argued and on what presuppositions these arguments have been based. In conducting the study, the most salient dimension of the various discussions was found to relate to epistemological questions. In this regard, there was a notable confrontation between those scholars who accept miracles and those who are sceptical of them. On the conceptual-ontological side I recognised several different ways of expressing the concept of 'miracle'. I systematised the discussion by demonstrating the philosophical boundaries between these various opinions. The first and main boundary was related to ontological knowledge. On one side of this boundary I placed the views which were based on realism and objectivism. The proponents of this view assumed that miraculousness is a real property of a miraculous event regardless of how we can perceive it. On the other side I put the views which tried to define miraculousness in terms of subjectivity, contextuality and epistemicity. Another essential boundary which shed light on the conceptual-ontological discussion was drawn in relation to two main views of nature. The realistic-particularistic view regards nature as a certain part of reality. The adherents of this presupposition postulate a supernatural sphere alongside nature. Alternatively, the nominalist-universalist view understands nature without this kind of division: nature is understood as the entire and infinite universe, the whole of reality. Other, less important boundaries which shed light on the conceptual-ontological discussion were noted in relation to views regarding the laws of nature, for example. I recognised that the most important differences between the epistemological approaches lay in the differing views of justification, rationality, truth and science. The epistemological discussion was divided into two sides, distinguished by their differing assumptions in relation to the need for evidence. Adherents of the first (and noticeably smaller) group did not see any epistemological need to reach a universal and common opinion about miracles. I discovered that these kinds of views, which I called non-objectivist, had subjectivist and so-called collectivist views of justification and a contextualist view of rationality. The second (and larger) group was mainly interested in discerning the grounds upon which to establish an objective and conclusive common view in relation to the epistemology of miracles. I called this kind of discussion an objectivist discussion and this kind of approach an evidentialist approach. Most of the evidentialists tried to defend miracles and the others attempted to offer evidence against miracles. On both sides, there were many variations in emphasis and assumption over how they saw the possibilities of proving their own view.
The common characteristic in all forms of evidentialism was a commitment to an objectivist notion of rationality and a universalistic notion of justification. Most evidentialists put their confidence in science in one way or another. Only a couple of philosophers represented the most moderate version of evidentialism; they tried to remove themselves from the apparent controversy and contextualised the different opinions in order to make some critical comments on them. I called this kind of approach a contextualising form of evidentialism. In the final part of the epistemological chapter, I examined the discussion about the evidential value of miracles, but nothing substantially new was discovered concerning the epistemological views of the authors.
Abstract:
This study presents a systematic analysis of biochemist Michael Behe's thinking. Behe is a prominent defender of the Intelligent Design Movement, which has gained influence particularly in the United States but also elsewhere. At the core of his thinking is the idea of intelligent design, according to which the order of the cosmos and of living things is the handiwork of a non-human intelligence. This "design argument" had previously been popular in the tradition of natural theology. Behe attempts to base his argument on the findings of 20th-century biology, however. Biochemistry has revealed that cells, formerly thought to be simple, in fact contain complex structures, for instance the bacterial flagellum, which are reminiscent of the machines built by humans. According to Behe these can be believably explained only by referring to intelligent design, not by invoking Darwinian natural laws. My analysis aims to understand Behe's thought on intelligent design, to bring forward its connections to intellectual history and worldviews, and to study whether Behe has formulated his argument so as to avoid common criticisms directed against design arguments. I use a large amount of literature and refer to diverse writers participating in the intelligent design debate. The results of the analysis are as follows. Firstly, Behe manages to avoid a large number of classical criticisms against the design argument, and new criticisms have to be developed to meet his argument. Secondly, positions on intelligent design appear to be linked to larger philosophical and religious worldviews.
Abstract:
Democratic Legitimacy and the Politics of Rights is a study in normative political theory, based on a comparative analysis of contemporary democratic theories, classified roughly as conventional liberal, deliberative democratic and radical democratic. Its focus is on the conceptual relationship between alternative sources of democratic legitimacy: democratic inclusion and liberal rights. The relationship between rights and democracy is studied through the following questions: are rights to be seen as external constraints on democracy or as objects of democratic decision-making processes? Are individual rights threatened by public participation in politics; do constitutionally protected rights limit the inclusiveness of democratic processes? Are liberal values such as individuality, autonomy and liberty, and democratic values such as equality, inclusion and popular sovereignty, mutually conflictual or supportive? Analyzing the feminist critique of liberal discourse, the dissertation also raises the question of Enlightenment ideals in current political debates: are the universal norms of liberal democracy inherently dependent on the rationalist grand narratives of modernity and incompatible with the ideal of diversity? Part I of the thesis introduces the sources of democratic legitimacy as presented in the alternative democratic models. Part II analyses how the relationship between rights and democracy is theorized in them. Part III contains arguments by feminists and radical democrats against the tenets of universalist liberal democratic models and responds to that critique by partly endorsing, partly rejecting it. The central argument promoted in the thesis is that while the deconstruction of modern rationalism indicates that rights are political constructions rather than externally given moral constraints on politics, this insight does not delegitimize the politics of universal rights as an inherent part of democratic institutions. The research indicates that democracy and universal individual rights are mutually interdependent rather than oppositional, and that democracy is more dependent on an unconditional protection of universal individual rights when it is conceived as inclusive, participatory and plural, as opposed to robust majoritarian rule. The central concepts are: liberalism, democracy, legitimacy, deliberation, inclusion, equality, diversity, conflict, public sphere, rights, individualism, universalism and contextuality. The authors discussed include John Rawls, Jürgen Habermas, Seyla Benhabib, Iris Young, Chantal Mouffe and Stephen Holmes. The research focuses on contemporary political theory, but the more classical work of John Stuart Mill, Benjamin Constant, Isaiah Berlin and Hannah Arendt is also included.