60 results for Arguments
Abstract:
Aptitude-based student selection: a study of the admission processes of three technically oriented healthcare degree programmes in Finland (Orthotics and Prosthetics, Dental Technology and Optometry). The data studied consisted of convenience samples of preadmission information and the results of the admission processes of the three programmes during the years 1977-1986 and in 2003. The number of subjects tested and interviewed was 191, 615 and 606 in the first samples, and 67, 64 and 89 in the second, respectively. The questions of the six studies were: I. How were different kinds of preadmission data related to each other? II. Which were the major determinants of the admission decisions? III. Did the graduated students and those who dropped out differ from each other? IV. Was it possible to predict how well students would perform in the programmes? V. How was the student selection executed in the year 2003? VI. Should clinical prediction, statistical prediction, or both be used? (Some remarks are presented on Meehl's argument: "Always, we might as well face it, the shadow of the statistician hovers in the background; always the actuary will have the final word.") The main results of the study were as follows: ability tests, dexterity tests and judgements of personality traits (communication skills, initiative, stress tolerance and motivation) provided unique, non-redundant information about the applicants. Available demographic variables did not bias the judgements of personality traits. In all three programme settings, four-factor solutions (personality, reasoning, gender-technical and age-vocational, with factor scores) could be extracted by the Maximum Likelihood method with graphical Varimax rotation. The personality factor dominated the final aptitude judgements and very strongly affected the selection decisions.
There were no clear differences between graduated students and those who had dropped out with regard to the four factors. In addition, the factor scores did not predict how well the students performed in the programmes. Meehl's argument on the uncertainty of clinical prediction was supported by the results, which, on the other hand, did not provide any relevant data for rules on statistical prediction. No clear arguments for or against aptitude-based student selection were presented. However, the structure of the aptitude measures and their impact on the admission process are now better known. The concept of "personal aptitude" is not necessarily included in the values and preferences of those in charge of organizing the schooling. Thus, the most well-founded and cost-effective way to execute student selection is obviously to rely on, for example, the grade point averages of the matriculation examination and/or written entrance exams. According to the present study, this procedure would result in a student group whose makeup differs considerably (by 60%) from that of the group selected on the basis of aptitude tests. For the recruiting organizations, by contrast, "personal aptitude" may be a matter of great importance. The employers, of course, decide on personnel selection. The psychologists, if consulted, are responsible for the proper use of psychological measures.
Abstract:
The university's position in society has changed during the last few decades, from the traditional university to the result-based university. Result-based management is considered a steering mechanism. The context of this study is the period when the New Salary System was introduced. In the New Salary System, salary is based on a performance appraisal made by the supervisor. The purpose of the study was to understand the discussion of the New Salary System and how this discussion should be interpreted. The research task had two parts. In the first part the objective was to identify how academic work was conceptualised. In the second part I analysed how writers related to the New Salary System and how this was interpreted in relation to their representation of academic work. The research material consisted of weblogs from the year 2005. The weblogs were located on the internet and were freely accessible. They were written mostly by employees of Finnish universities. Besides the salary system, the writers discussed the university and academic work. Two different ways of talking about academic work were found in the research material: in the first, academic work was grounded in community, and in the second, in individuality. When community was emphasised, writers also discussed science and research and academic traditions such as peer review. When individuality was emphasised, writers discussed individual performance and the importance of salary according to one's performance. The analysis shows that the New Salary System was both opposed and supported. Opposition was based on arguments for the traditional university; peer review, truth, the academic profession, the academic community and the university's autonomy were the most important arguments. Supporters used arguments such as the need to make an individual's performance visible and to break the existing power structures.
Abstract:
The present study addressed the epistemology of teachers' practical knowledge. Drawing on the literature, teachers' practical knowledge is defined as all of a teacher's cognitions (e.g., beliefs, values, motives, procedural knowing, and declarative knowledge) that guide the practice of teaching. The reasoning that lies behind teachers' practical knowledge is addressed to gain insight into its epistemic nature. I studied the practical knowledge of six class teachers who teach in the metropolitan region of Helsinki. Relying on the assumptions of phenomenographic inquiry, I collected and analyzed the data. The analysis proceeded in two stages: the first involved an abductive procedure and the second an inductive procedure for interpretation, through which the system of categories was developed. Finally, a quantitative analysis was nested into the qualitative findings to study the patterns of the teachers' reasoning. The results indicated that teachers justified their practical knowledge on the basis of morality and efficiency of action; efficiency of action was found to be presented in two different ways: authentic efficiency and naïve efficiency. The epistemic weight of morality was embedded in what I call "moral care". The core intention of teachers in moral care was the commitment they felt to the "whole character" of students. From this perspective, the "dignity" and moral character of the students should not be replaced by any other "instrumental price". "Caring pedagogy" was the epistemic value of teachers' reasoning in authentic efficiency. The central idea in caring pedagogy was teachers' intention to improve the "intellectual properties" of "all or most" of the students using "flexible" and "diverse" pedagogies. However, "regulating pedagogy" was the epistemic condition of practice in the cases corresponding to naïve efficiency.
Teachers argued that effective practical knowledge should regulate and manage classroom activities, but the targets of the practical knowledge were mainly other "issues" or a certain percentage of the students. In these cases, the teachers' arguments were mainly based on the notion of "what worked" regardless of reflection on "what did not work". Drawing from the theoretical background and the data, teachers' practical knowledge qualifies as "praxial knowledge" when they used the epistemic conditions of "caring pedagogy" and "moral care". It has, however, a "practicable" epistemic status when teachers used the epistemic condition of regulating pedagogy. As such, praxial knowledge with the dimensions of caring pedagogy and moral care represents the "normative" perspective on teachers' practical knowledge, and thus reflects a higher epistemic status than "practicable" knowledge, which represents a "descriptive" perception of teachers' practical knowledge and teaching.
Abstract:
The purpose of this study was to evaluate intensity, productivity and efficiency in Finnish agriculture and to show the implications for N and P fertiliser management. Environmental concerns relating to agricultural production have been, and still are, the focus of arguments about policies that affect agriculture. These policies constrain production while demand for agricultural products such as food, fibre and energy continuously increases. The importance of increasing productivity is therefore a great challenge to agriculture. Over the last decades producers have experienced several large changes in the production environment, such as the policy reform when Finland joined the EU in 1995. Further market changes occurred with the EU enlargement to neighbouring countries in 2005 and with the decoupling of supports over the 2006-2007 period. Decreasing prices, a decreasing number of farmers and decreased profitability in agricultural production have resulted from these changes and constraints, and from technological development. It was known that accession to the EU in 1995 would herald changes in agriculture. Of special interest was how sudden changes in commodity prices, especially cereal prices, which decreased by 60%, would influence agricultural production. Knowledge of the properties of the production function increased in importance as a consequence of the price changes. Research on economic instruments to regulate production was carried out and combined with earlier studies in paper V. In paper I the objective was to compare two different technologies, conventional farming and organic farming, and to determine differences in productivity and technical efficiency. In addition, input-specific, or environmental, efficiencies were analysed. The heterogeneity of agricultural soils and its implications were analysed in article II. In study III the determinants of technical inefficiency were analysed.
The aspects and possible effects of instability in policies due to a partial decoupling of production factors and products were studied in paper IV. Consequently, the connection between technical efficiency based on turnover and technical efficiency based on sales returns was analysed in this study. Simple economic instruments such as fertiliser taxes have a direct effect on fertiliser consumption and indirectly increase the value of organic fertilisers. However, fertiliser taxes do not fully address the N and P management problems and are therefore not suitable for nutrient management improvements in general. The productivity of organic farms is lower on average than that of conventional farms, and the difference increases when looking at selling returns only. The organic sector needs more research and development on productivity. Livestock density in organic farming increases productivity; however, there is an upper limit to livestock densities on organic farms, and therefore nutrients on organic farms are also limited. Soil factors affect phosphorus and nitrogen efficiency. Soils such as sand and silt have lower input-specific overall efficiency for the nutrients N and P, and special attention is needed for management on these soils. Clay soils and soils with moderate clay content have higher efficiency. Soil heterogeneity is a cause of unavoidable inefficiency in agriculture.
Abstract:
Marketing of goods under geographical names has always been common. Aims to prevent abuse have given rise to separate forms of legal protection for geographical indications (GIs), both nationally and internationally. The European Community (EC) has also gradually enacted its own legal regime to protect geographical indications. The legal protection of GIs has traditionally been based on the idea that geographical origin endows a product with exclusive qualities and characteristics. In today's world we are able to replicate almost any product anywhere, including its qualities and characteristics. One would think that this would preclude protection for most geographical names, yet the number of geographical indications seems to be rising. GIs are no longer what they used to be. In the EC it is no longer required that a product be endowed with exclusive characteristics by its geographical origin, as long as consumers associate the product with a certain geographical origin. This departure from the traditional protection of GIs is based on the premise that a geographical name extends beyond and exists apart from the product and therefore deserves protection in itself. The thesis tries to articulate clearly the underlying reasons, justifications, principles and policies behind the protection of GIs in the EC, and then to scrutinise the scope and shape of the GI system in the light of its own justifications. The essential questions it attempts to answer are: (1) What is the basis and what are the criteria for granting GI rights? (2) What is the scope of protection afforded to GIs? and (3) Are both of these justified in the light of the functions and policies underlying the granting and protecting of GIs? Despite the differences, the actual functions of GIs are in many ways identical to those of trade marks. Geographical indications have a limited role as source and quality indicators in allowing consumers to make informed and efficient choices in the marketplace.
In the EC this role is undermined by allowing ample room and discretion for uses that are arbitrary. Generic GIs, moreover, are unable to play this role at all. The traditional basis for justifying legal protection seems implausible in most cases: qualities and characteristics are more likely to be related to transportable skill and manufacturing methods than to the actual geographical location of production. Geographical indications are also incapable of protecting culture from market-induced changes. Protection against genericness, against any misuse, imitation and evocation, as well as against exploiting the reputation of a GI, seems to be there to protect the GI itself. Expanding or strengthening the already existing GI protection, or using it to protect generic GIs, cannot be justified with arguments about terroir or culture. The conclusion of the writer is that GIs themselves merit protection only in extremely rare cases, and usually only the source and origin function of GIs should be protected. The approach should not be any different from the one taken in trade mark law. GI protection should not be used as a means to monopolise names. At the end of the day, the scope of GI protection is nevertheless a policy issue.
Abstract:
The aim of the present study is to analyze Confucian understandings of the Christian doctrine of salvation in order to find the basic problems in the Confucian-Christian dialogue. I will approach the task via a systematic theological analysis of four issues in order to limit the thesis to an appropriate size. They are analyzed in three chapters as follows: 1. The Confucian concept concerning the existence of God. Here I discuss mainly the issue of assimilation of the Christian concept of God to the concepts of Sovereign on High (Shangdi) and Heaven (Tian) in Confucianism. 2. The Confucian understanding of the object of salvation and its status in Christianity. 3. The Confucian understanding of the means of salvation in Christianity. Before beginning this analysis it is necessary to clarify the vast variety of controversies, arguments, ideas, opinions and comments expressed in the name of Confucianism; thus, clear distinctions among different schools of Confucianism are given in chapter 2. In the last chapter I will discuss the results of my research in this study by pointing out the basic problems that will appear in the analysis. The results of the present study provide conclusions in three related areas: the tacit differences in the ways of thinking between Confucians and Christians, the basic problems of the Confucian-Christian dialogue, and the affirmative elements in the dialogue. In addition to a summary, a bibliography and an index, there are also eight appendices, where I have introduced important background information for readers to understand the present study.
Abstract:
What is a miracle and what can we know about miracles? A discussion of miracles in anglophone philosophy of religion literature since the late 1960s. The aim of this study is to systematically describe and philosophically examine the anglophone discussion on the subject of miracles since the latter half of the 1960s. The study focuses on two salient questions: firstly, what I will term the conceptual-ontological question of the extent to which we can understand miracles and, secondly, the epistemological question of what we can know about miracles. My main purpose in this study is to examine the various viewpoints that have been submitted in relation to these questions, how they have been argued and on what presuppositions these arguments have been based. In conducting the study, the most salient dimension of the various discussions was found to relate to epistemological questions. In this regard, there was a notable confrontation between those scholars who accept miracles and those who are sceptical of them. On the conceptual-ontological side I recognised several different ways of expressing the concept of "miracle". I systematised the discussion by demonstrating the philosophical boundaries between these various opinions. The first and main boundary was related to ontological knowledge. On one side of this boundary I placed the views which were based on realism and objectivism. The proponents of this view assumed that miraculousness is a real property of a miraculous event regardless of how we can perceive it. On the other side I put the views which tried to define miraculousness in terms of subjectivity, contextuality and epistemicity. Another essential boundary which shed light on the conceptual-ontological discussion was drawn in relation to two main views of nature. The realistic-particularistic view regards nature as a certain part of reality. The adherents of this presupposition postulate a supernatural sphere alongside nature.
Alternatively, the nominalist-universalist view understands nature without this kind of division. Nature is understood as the entire and infinite universe; the whole of reality. Other, less important boundaries which shed light on the conceptual-ontological discussion were noted in relation to views regarding the laws of nature, for example. I recognised that the most important differences between the epistemological approaches were in the different views of justification, rationality, truth and science. The epistemological discussion was divided into two sides, distinguished by their differing assumptions in relation to the need for evidence. Adherents of the first (and noticeably smaller) group did not see any epistemological need to reach a universal and common opinion about miracles. I discovered that these kinds of views, which I called non-objectivist, had subjectivist and so-called collectivist views of justification and a contextualist view of rationality. The second (and larger) group was mainly interested in discerning the grounds upon which to establish an objective and conclusive common view in relation to the epistemology of miracles. I called this kind of discussion an objectivist discussion and this kind of approach an evidentialist approach. Most of the evidentialists tried to defend miracles and the others attempted to offer evidence against miracles. Amongst both sides, there were many different variations according to emphasis and assumption over how they saw the possibilities to prove their own view. The common characteristic in all forms of evidentialism was a commitment to an objectivist notion of rationality and a universalistic notion of justification. Most evidentialists put their confidence in science in one way or another. 
Only a couple of philosophers represented the most moderate version of evidentialism; they tried to remove themselves from the apparent controversy and contextualised the different opinions in order to make some critical comments on them. I called this kind of approach a contextualising form of evidentialism. In the final part of the epistemological chapter, I examined the discussion about the evidential value of miracles, but nothing substantially new was discovered concerning the epistemological views of the authors.
Abstract:
This work combines the cognitive theory of folk-theoretical thought with the classical Aristotelian theory of artistic proof in rhetoric. The first half of the work discusses the common ground shared by the elements of artistic proof (logos, pathos, ethos) and the elements of folk-theoretical thought (naïve physics, folk biology, folk psychology, naïve sociology). Combining rhetoric with the cognitive theory of folk-theoretical thought creates a new point of view for argumentation analysis. The logos of an argument can be understood as the inferential relations established between the different parts of an argument. Consequently, within this study the analysis of logos is to be viewed as the analysis of the inferential folk-theoretical elements that make the suggested factual states-of-things appear plausible within given argumentative structures. The pathos of an argumentative structure can be understood as determining the quality of the argumentation in question in the sense that emotive elements play a great part in what can be called a distinction between good and deceptive rhetoric. In the context of this study the analysis of pathos is to be viewed as the analysis of the emotive content of argumentative structures and of whether they aim at facilitating surface- or deep cognitive elaboration of the suggested matters. The ethos of an argumentative structure means both the speaker-presentation and audience-construct that can be discerned within a body of argumentation. In the context of this study, the analysis of ethos is to be understood as the analysis of mutually manifest cognitive environments in the context of argumentation. The theory is used to analyse Catholic Internet discussion concerning cloning. The discussion is divided into six themes: Human Dignity, Sacred Family, Exploitation / Dehumanisation, Playing God, Monsters and Horror Scenarios and Ensoulment. 
Each theme is analysed for both the rhetorical and the cognitive elements that can be seen creating persuasive force within the argumentative structures presented. It is apparent that the Catholic voices on the Internet extensively oppose cloning. The voices utilise rhetoric that is aggressive and pejorative more often than not. Furthermore, deceptive rhetoric (in the sense presented above) plays a great part in argumentative structures of the Catholic voices. The theory of folk-theoretical thought can be seen as a useful tool for analysing the possible reasons why the Catholic speakers think about cloning and choose to present cloning in their argumentation as they do. The logos utilized in the argumentative structures presented can usually be viewed as based on folk-theoretical inference concerning biology and psychology. The structures of pathos utilized generally appear to aim at generating fear appeal in the assumed audiences, often incorporating counter-intuitive elements. The ethos utilised in the arguments generally revolves around Christian mythology and issues of social responsibility. These structures can also be viewed from the point of view of folk psychology and naïve sociological assumptions.
Abstract:
According to some scientists, it is not useful to integrate ethics into research practices. Their claim is that only unethical persons have ethical problems, and that we must therefore accept ethical misbehaviour as a phenomenon typical of human society. In the present study, the argument that the moral personality of scientists explains ethical problems in science is questioned; in addition, the focus is shifted from individuals to the level of the research environment. The question asked is whether the research environment somehow contributes to research ethics violations. To answer this question, the focus was turned towards the norms of the research environment. The aim of the study was to investigate whether or not these norms are consistent with the norms of research ethics, so that it would be possible to evaluate whether the research environment supports scientists in their task of meeting the ethical standards of scientific research. In the study, the research environment was examined in three parts. The first deals with society, especially Finnish society, as a research environment. The second deals with the autonomous science institution as a research environment, while the third deals with scientific society (working according to scientific criteria) as a research environment. The method of conceptual analysis was used: various normative arguments were analysed, the primary assumptions behind them were recognised, and the acceptability of the normative claims was evaluated according to their consistency. The results of the study do not support the claim that ethical violations in science can be satisfactorily explained by referring only to the personal qualities of scientists. The research environment can limit the freedom to follow the ethical principles of science; it can prevent scientists from handling ethical problems openly and from integrating ethical norms effectively into research practices.
The norms of the research environment are often implicit but nevertheless influence scientific practices. Further, the results indicate that the handling of ethical questions should be a part of scientific training.
Avioliiton teologia Englannin kirkossa ja Suomen evankelis-luterilaisessa kirkossa vuosina 1963-2006
Abstract:
The theology of marriage in the Church of England (CofE) and in the Evangelical Lutheran Church of Finland (ELCF) 1963–2006. The method of the study is a systematic analysis of the sources. In the CofE, marriage stems from creation, but it is also sacramental, grounded in the theology of love and redemption. Man and woman have a connection between them that is a mystical union in character, because of the one between Christ and the Church; therefore every marriage is sacramental. The purposes of marriage have been expressed in a different order than earlier: a caring relationship and sexuality are set before childbirth as the causes of marriage. The remedial cause of marriage has also been moved to the background and cannot be found in the recent wedding formulas. A personal relationship, and marriage as a school of faith and love, have a central place in the theology of marriage. The theology of love unites the love of God and marriage. In the CofE the understanding of divorce and cohabiting has changed, too. Cohabiting can now be understood as a stage towards marriage. Divorce has come to be understood as a fact that must be accepted after an irretrievable breakdown of marriage; thus the church must concentrate on pastoral care after divorce. Similarly, the ELCF also maintains that the order of creation is the origin of marriage as a lifelong institution. This is also an argument for the solemnization of marriage in the church. Faith and grace are not needed for real marriage, because marriage is the culmination of reason and natural law. Society defines marriage, and the church gives its blessing to married couples if so requested. Luther's view of marriage differs from this, because he saw marriage as a school of love and faith, as the CofE does, and he saw faith as essential to enable the fulfilment of natural law. Marriage in the ELCF is mostly a matter of natural ethics. An ideal form of life is sought through the Golden Rule.
This interpretation of marriage means that it does not presuppose Christian education for children to follow. The doctrine of the two kingdoms is definitely essential as background; it has, however, been impugned by scholars as a permanent foundation of marriage. There is a difference between the marriage formulas and the other sources concerning the purposes of marriage in the ELCF: the formulas do not include sexuality, childbirth, or children and their education as purposes of marriage. The formulas also include less theological vocabulary than those of the CofE. In the CofE the liturgy indicates the doctrine, whereas in the Lutheran churches there is no need to express the doctrine in the wedding formulas; this has resulted in less theology of marriage in the formulas. The theology of Luther is no longer a ruling principle in the theology of marriage. The process of continuing change in society refines the terms for marriage more than the theological arguments do.
Abstract:
This study presents a systematic analysis of the thinking of biochemist Michael Behe. Behe is a prominent defender of the Intelligent Design movement, which has gained influence particularly in the United States, but also elsewhere. At the core of his thinking is the idea of intelligent design, according to which the order of the cosmos and of living things is the handiwork of a non-human intelligence. This "design argument" had previously been popular in the tradition of natural theology. Behe, however, attempts to base his argument on the findings of 20th-century biology. Biochemistry has revealed that cells, formerly thought to be simple, in fact contain complex structures, for instance the bacterial flagellum, which are reminiscent of machines built by humans. According to Behe these can be believably explained only by referring to intelligent design, not by invoking Darwinian natural laws. My analysis aims to understand Behe's thought on intelligent design, to bring forward its connections to intellectual history and worldviews, and to study whether Behe has formulated his argument so as to avoid common criticisms directed against design arguments. I use a large amount of literature and refer to diverse writers participating in the intelligent design debate. The results of the analysis are as follows. First, Behe manages to avoid a large number of classical criticisms of the design argument, and new criticisms have to be developed to meet his argument. Secondly, positions on intelligent design appear to be linked to larger philosophical and religious worldviews.
Abstract:
This thesis is an assessment of the hoax hypothesis, mainly propagated in Stephen C. Carlson's 2005 monograph "The Gospel Hoax: Morton Smith's Invention of Secret Mark", which suggests that Professor Morton Smith (1915-1991) forged Clement of Alexandria's letter to Theodore. Smith claimed to have discovered this letter as an 18th-century copy in the monastery of Mar Saba in 1958. The Introduction narrates Morton Smith's discovery story and traces the manuscript's whereabouts up to its apparent disappearance in 1990, followed by a brief history of scholarship on the MS and some methodological considerations. Chapters 2 and 3 deal with the arguments for the hoax (mainly by Stephen C. Carlson) and against it (mainly by Scott G. Brown). Chapter 2 looks at the MS in its physical aspects, and chapter 3 assesses its subject matter. I conclude that some of the details fit reasonably well with the hoax hypothesis, but on the whole the arguments against it are more persuasive. In particular, Carlson's use of QDE (Questioned Document Examination) analysis has many problems. Comparing the handwriting of Clement's letter to Morton Smith's handwriting, I conclude that there are some "repeated differences" between them, suggesting that Smith is not the writer of the disputed letter. Clement's letter to Theodore most likely derives from antiquity, though the exact details of its character are not discussed at length in this thesis. In Chapter 4 I take a special look at Stephen C. Carlson's arguments proposing that Morton Smith hid clues to his identity in the MS and the materials surrounding it. Comparing these alleged clues to known pseudoscientific works, I conclude that Carlson here utilizes methods normally reserved for building a conspiracy theory; thus Carlson's hoax hypothesis has serious methodological flaws with respect to these hidden clues.
I construct a model of these questionable methods, titled "a boisterous pseudohistorical method", that contains three parts: 1) beginning with a question that implicitly contains the answer from the start, 2) considering that anything will do as evidence for the conspiracy theory, and 3) abandoning probability and thinking that literally everything is connected. I propose that Stephen C. Carlson utilizes these pseudoscientific methods in his unearthing of Morton Smith's "clues". Chapter 5 looks briefly at the literary genre I title the "textual-puzzle thriller". Because even biblical scholarship follows the signs of the times, I propose that Carlson's hoax hypothesis has its literary equivalents in fiction in titles like Dan Brown's "The Da Vinci Code" and in academic works in titles like John Dart's "Decoding Mark". All of these are interested in solving textual puzzles, even though the methodological choices are not acceptable in scholarship. Thus the hoax hypothesis as a whole is either unpersuasive or plain bad science.
Abstract:
The aim of this research was to study how European churches contributed to the shaping of the Constitutional Treaty during the work of the Convention on the Future of Europe through the public discussion forum established by the Convention for this specific purpose in the years 2002-2003. In particular, this study sought to uncover the areas of interest brought up by the churches in their contributions, the objectives they pursued, and the approaches and arguments they employed to reach those objectives. The data for this study comprised all official submissions by European churches and church alliances to the Forum, totalling 21 contributions. A central criterion for inclusion was that the organization can reasonably be assumed to represent the official position of one or more Christian churches within the European Union before the 2004 enlargement. The contributing churches and organizations represent the vast majority of Christians in Europe. The data was analyzed primarily using qualitative content analysis. The research approach was a combination of abductive and inductive inference. Based on the analysis, a two-fold theoretical framework was adopted, focusing on theories of public religion, secularization and deprivatization of religion, and of legitimation and collective identity. The main areas of interest found in the contributions of the churches were the value foundation of the European Union, which they demand should coherently permeate all policies and actions of the EU, and the social dimension of Europe, which they argue must be given equal status to the political and economic dimensions. In both areas the churches claim significant experience and expertise, which they want to see recognized in the Constitutional Treaty through a formally guaranteed status for churches and religious communities in the EU.
In their contributions the churches show a strong determination to secure a significant role for both religion and religious communities in the public life of Europe. As for the role of religion, they point to its potential as a motivating and cohesive force in society and as a building block for a collective European identity, which is still missing. The churches also pursue a substantial public role for themselves beyond the spiritual dimension, extending into the secular areas of the social, political and economic dimensions. The arguments in support of such a role are grounded in their interest and expertise in spiritual and other fundamental values and in their broad involvement in providing social services. In this context the churches use expressions inclusive of all religions and convictions, albeit clearly advocating the primacy of Europe's Christian heritage. Based on their historical role, their social involvement and their spiritual mission, they use the public debate on the Constitutional Treaty to gain formal legitimacy for the public status of religion and religious communities, both nationally and on a European level, through appropriate provisions in the constitutional text. In return they offer the European Union ways of improving its own legitimacy by reducing the democratic and ideological deficit of the EU and advancing the development of a collective European identity.
Abstract:
Democratic Legitimacy and the Politics of Rights is a study in normative political theory, based on a comparative analysis of contemporary democratic theories, classified roughly as conventional liberal, deliberative democratic and radical democratic. Its focus is on the conceptual relationship between alternative sources of democratic legitimacy: democratic inclusion and liberal rights. The relationship between rights and democracy is studied through the following questions: Are rights to be seen as external constraints on democracy or as objects of democratic decision-making processes? Are individual rights threatened by public participation in politics? Do constitutionally protected rights limit the inclusiveness of democratic processes? Are liberal values such as individuality, autonomy and liberty, and democratic values such as equality, inclusion and popular sovereignty, mutually conflicting or supportive? Analyzing the feminist critique of liberal discourse, the dissertation also raises a question about Enlightenment ideals in current political debates: are the universal norms of liberal democracy inherently dependent on the rationalist grand narratives of modernity and incompatible with the ideal of diversity? Part I of the thesis introduces the sources of democratic legitimacy as presented in the alternative democratic models. Part II analyses how the relationship between rights and democracy is theorized in them. Part III presents arguments by feminists and radical democrats against the tenets of universalist liberal democratic models and responds to that critique by partly endorsing and partly rejecting it. The central argument of the thesis is that while the deconstruction of modern rationalism indicates that rights are political constructions rather than externally given moral constraints on politics, this insight does not delegitimize the politics of universal rights as an inherent part of democratic institutions.
The research indicates that democracy and universal individual rights are mutually interdependent rather than oppositional, and that democracy is more dependent on the unconditional protection of universal individual rights when it is conceived as inclusive, participatory and plural rather than as robust majoritarian rule. The central concepts are liberalism, democracy, legitimacy, deliberation, inclusion, equality, diversity, conflict, public sphere, rights, individualism, universalism and contextuality. The authors discussed include John Rawls, Jürgen Habermas, Seyla Benhabib, Iris Young, Chantal Mouffe and Stephen Holmes. The research focuses on contemporary political theory, but the more classical work of John Stuart Mill, Benjamin Constant, Isaiah Berlin and Hannah Arendt is also included.
Abstract:
This thesis is a collection of three essays on Bangladeshi microcredit. One essay examines the effect of microcredit on the cost of crime. The other two analyze the functioning mechanisms of microcredit programs, i.e. credit allocation rules and credit recovery policy. Essay 1 studies the demand for microcredit and its allocation rules. Microcredit is claimed to be the most effective means of supplying credit to the poorest of the poor in rural Bangladesh, yet this claim has not previously been examined among the households who demand microcredit. The results of this essay show that educated households are more likely to demand microcredit and that demand does not differ by sex. The results also show that microcredit programs follow different credit allocation rules for male and female applicants; education is an essential characteristic that credit programs consider for both sexes in allocating credit. Essay 2 focuses on establishing a link between microcredit and the incidence of rural crime in Bangladesh. The basic hypothesis is that microcredit programs hold the group jointly responsible for repayment, which gives group members an incentive to protect each other from criminal gangs in order to safeguard their own economic interests. The key finding of this essay is that the average cost of crime for non-borrowers is higher than that for borrowers; in particular, a 10% increase in credit reduces the costs of crime by 4.2%. The third essay analyzes the reasons for the high repayment rates of Bangladeshi microcredit programs. The existing literature argues that credit applicants are able to screen out high-risk applicants at the group formation stage using their superior local information. In addition, due to the joint liability mechanism of the programs, group members monitor each other's economic activities to ensure minimal misuse of credit.
These arguments in the literature rest on the assumption that once the credit is provided, credit programs have no further role in ensuring that repayments are honored by the group. In contrast, using survey data, this essay documents that credit programs additionally use organizational pressures, such as humiliation and harassment of non-payers, to recover unpaid installments. The results also show that the group mechanisms do not have a significant effect on recovering defaulted dues.