62 results for Assumptions
Abstract:
Designed by the Media: The Media Publicity of Design in the Finnish Economic Press. The meaning of design has increased in consumer societies. Design is the subject of debate, and the number of media discussions has also increased steadily. The role of industrial design in particular has been emphasised. In this study I examine the media publicity of design in the Finnish economic press from the late 1980s to the beginning of the 2000s. The research question is connected to media representations: how is design represented in the Finnish economic press? In other words, what are the central topics of design in the economic press, and to what issues are the media debates connected? The often-repeated claim that design discussions take place only on the cultural pages of the daily press, or in cultural contexts, is changing. Design is also linked to consumer culture and consumers' everyday practices. The research material has been collected from the Finnish economic press. The qualitative sample consists of articles from Kauppalehti, Taloussanomat and several economic papers published by the Talentum Corporation. The approach of the research is explorative, descriptive and hermeneutic. This means that the economic press articles are used to explore how design is represented in the media, and the characteristics of design represented in the media are described in detail. The research is based on the interpretive tradition of studying textual materials; its background assumptions are thus grounded in hermeneutics. Erving Goffman's frame analysis is applied in analysing the economic press materials. The frames interpreted from the articles depict the media publicity of design in the Finnish economic press. The research opens up a multidimensional picture of design in the economic press. The analysis resulted in five frames that describe design from various points of view. 
In the personal frame, designers are described in private settings and through their personal experiences. The second frame relates to design work: in the frame of mastery of the profession, the designers' work is interpreted broadly. Design is considered from the aspects of controlling personal know-how, co-operation and the overall design process. The third frame is connected to the actual substance of the economic press. In the frame of economy and market, design is linked to international competitiveness, companies' competitive advantage and benefit creation for consumers. The fourth frame is connected to the actors promoting design on a societal level. In the communal frame, the economic press describes design policy, design research and education, and other actors that actively develop design in societal networks. The last frame is linked to the traditions of design and, above all, to the examination of cultural transition. In the frame of culture, the traditions of design are emphasised. Design is also connected to industrial culture and, furthermore, to the themes of consumer culture. It can be argued that the frames construct the media publicity of design from various points of view. The frames describe situations, action and the actors of design. The interpreted media frames make it possible to understand the relation between design actions and culture. Thus, the media has a crucial role in representing and recreating meanings related to design. The publicity of design is characterised by five focal themes: personification, professionalisation, commercialisation, communalisation and a transition of cultural focus from the traditions of design to industrial culture and consumer culture. Based on my interpretation, these themes are guided by the mediatisation of design: the design phenomenon is increasingly defined on the basis of media representations in public discourses. 
The design culture outlined in this research connects socially constructed and structurally organised action. Socially constructed action in design is connected to experiences, social recreation and the collective development of design. Structurally, design is described as professional know-how, as a process and as an economically profitable activity in society. The events described by the media affect the way in which people experience the world, the meanings they connect to the events around them and their life in the world. By affecting experiences, the media indirectly affects human actions. People have become accustomed to reading media representations on a daily basis, but they are not used to reading and interpreting the various meanings that are incorporated in media texts.
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where the parametric assumptions of standard statistical models would be too rigid, a more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning also applies under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example. 
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
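The connection stated above, that maximum likelihood inference coincides with the posterior mode under a flat prior, can be illustrated numerically. The sketch below is a minimal illustration under an assumed normal model with known variance; the model and data are invented for illustration, not taken from the thesis.

```python
import numpy as np

# Simulated data from a normal model with known unit variance.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=200)

def log_likelihood(mu, x):
    """Log-likelihood of the mean parameter mu (variance fixed at 1),
    up to an additive constant."""
    return -0.5 * np.sum((x - mu) ** 2)

# Under a flat prior, log-posterior = log-likelihood + constant, so the
# posterior mode and the MLE are the same point. For this model both
# equal the sample mean; a grid search recovers it.
grid = np.linspace(0.0, 4.0, 4001)
ll = np.array([log_likelihood(m, data) for m in grid])
mle = grid[np.argmax(ll)]
print(abs(mle - data.mean()))  # near zero, up to the grid resolution
```

For this conjugate-free illustration a grid search suffices; in the hierarchical settings the abstract describes, the posterior would instead be explored with sampling or optimization methods.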
Abstract:
The topic of this dissertation is the geometric and isometric theory of Banach spaces. This work is motivated by the well-known Banach-Mazur rotation problem, which asks whether each transitive separable Banach space is isometrically a Hilbert space. A Banach space X is said to be transitive if the isometry group of X acts transitively on the unit sphere of X. In fact, some symmetry conditions weaker than transitivity are studied in the dissertation. One such condition is an almost isometric version of transitivity. Another investigated condition is convex-transitivity, which requires that the closed convex hull of the orbit of any point of the unit sphere under the rotation group is the whole unit ball. Following the tradition developed around the rotation problem, some contemporary problems are studied. Namely, we attempt to characterize Hilbert spaces by using convex-transitivity together with the existence of a 1-dimensional bicontractive projection on the space, and some mild geometric assumptions. The convex-transitivity of some vector-valued function spaces is studied as well. The thesis also touches on the convex-transitivity of Banach lattices and related geometric questions.
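The two symmetry conditions described in prose above can be stated formally. The notation below is standard; the exact formulations used in the dissertation may differ slightly.

```latex
Let $X$ be a Banach space with unit sphere $S_X$, closed unit ball $B_X$,
and group of surjective linear isometries $\mathcal{G}(X)$. Then:
\begin{itemize}
  \item $X$ is \emph{transitive} if for all $x, y \in S_X$ there exists
        $T \in \mathcal{G}(X)$ with $Tx = y$;
  \item $X$ is \emph{convex-transitive} if for every $x \in S_X$,
        $\overline{\mathrm{conv}}\,\{Tx : T \in \mathcal{G}(X)\} = B_X$.
\end{itemize}
```

Transitivity clearly implies convex-transitivity, since the orbit of any point of the sphere is then the whole sphere, whose closed convex hull is the ball.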
Abstract:
Frictions are factors that hinder trading of securities in financial markets. Typical frictions include limited market depth, transaction costs, lack of infinite divisibility of securities, and taxes. Conventional models used in mathematical finance often gloss over these issues, which affect almost all financial markets, by arguing that the impact of frictions is negligible and, consequently, that frictionless models are valid approximations. This dissertation consists of three research papers related to the study of the validity of such approximations in two distinct modeling problems. Models of price dynamics based on diffusion processes, i.e., continuous strong Markov processes, are widely used in the frictionless scenario. The first paper establishes that diffusion models can indeed be understood as approximations of price dynamics in markets with frictions. This is achieved by introducing an agent-based model of a financial market where finitely many agents trade a financial security, the price of which evolves according to the price impacts generated by trades. It is shown that, if the number of agents is large, then under certain assumptions the price process of the security, which is a pure-jump process, can be approximated by a one-dimensional diffusion process. In a slightly extended model, in which agents may exhibit herd behavior, the approximating diffusion model turns out to be a stochastic volatility model. Finally, it is shown that when the agents' tendency to herd is strong, logarithmic returns in the approximating stochastic volatility model are heavy-tailed. The remaining papers are related to no-arbitrage criteria and superhedging in continuous-time option pricing models under small-transaction-cost asymptotics. 
Guasoni, Rásonyi, and Schachermayer have recently shown that, in such a setting, any financial security admits no arbitrage opportunities and there exist no feasible superhedging strategies for European call and put options written on it, as long as its price process is continuous and has the so-called conditional full support (CFS) property. Motivated by this result, CFS is established for certain stochastic integrals and a subclass of Brownian semistationary processes in the two papers. As a consequence, a wide range of possibly non-Markovian local and stochastic volatility models have the CFS property.
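As a rough illustration of the kind of stochastic volatility dynamics mentioned above, the following sketch simulates a generic square-root volatility model with the Euler-Maruyama scheme. The model form and all parameter values are illustrative assumptions, not the models or calibrations studied in the papers.

```python
import numpy as np

# A generic stochastic volatility model (Heston-style, for illustration):
#   dS = S * sqrt(V) dW1,   dV = kappa * (theta - V) dt + xi * sqrt(V) dW2
rng = np.random.default_rng(1)
T, n = 1.0, 1000
dt = T / n
kappa, theta, xi = 2.0, 0.04, 0.3   # mean reversion, long-run variance, vol-of-vol
S, V = 100.0, 0.04                  # initial price and variance

path = [S]
for _ in range(n):
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)  # independent increments
    S += S * np.sqrt(max(V, 0.0)) * dW1
    V += kappa * (theta - V) * dt + xi * np.sqrt(max(V, 0.0)) * dW2
    path.append(S)

path = np.array(path)
log_returns = np.diff(np.log(path))  # the quantity whose tails the paper studies
```

Note the simulated path is continuous-in-the-limit by construction; the agent-based pure-jump model of the first paper is what such a diffusion is shown to approximate.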
Abstract:
In this thesis we study a few games related to non-wellfounded and stationary sets. Games have turned out to be an important tool in mathematical logic, ranging from semantic games defining the truth of a sentence in a given logic to, for example, games on real numbers whose determinacy has important effects on the consistency of certain large cardinal assumptions. The equality of non-wellfounded sets can be determined by a so-called bisimulation game, already used to identify processes in theoretical computer science and possible-world models for modal logic. Here we present a game to classify non-wellfounded sets according to their branching structure. We also describe a way to approximate non-wellfounded sets with hereditarily finite wellfounded sets; the framework used to do this is domain theory. Moving back to classical wellfounded set theory, we also study games on stationary sets. In the Banach-Mazur game, also called the ideal game, the players play a descending sequence of stationary sets, and the second player tries to keep their intersection stationary. The game is connected to the precipitousness of the corresponding ideal. In the pressing-down game, the first player plays regressive functions defined on stationary sets, and the second player responds with a stationary set where the function is constant, trying to keep the intersection stationary. This game has applications in model theory to the determinacy of the Ehrenfeucht-Fraïssé game. We show that it is consistent that these games are not equivalent.
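The bisimulation idea mentioned above, in its finite process-identification form, can be sketched as a naive greatest-fixpoint computation. The function and example below are hypothetical illustrations on finite, unlabelled transition systems; the thesis works with non-wellfounded sets, where the game-theoretic formulation is needed.

```python
def bisimilar(states, trans, s, t):
    """Return True iff states s and t are bisimilar in the finite system
    `trans`, a dict mapping each state to the set of its successors.
    Starts from the full relation and removes pairs until a fixpoint."""
    rel = {(a, b) for a in states for b in states}
    changed = True
    while changed:
        changed = False
        for (a, b) in sorted(rel):
            # forth: every move of a can be matched by some move of b
            forth = all(any((a2, b2) in rel for b2 in trans.get(b, ()))
                        for a2 in trans.get(a, ()))
            # back: every move of b can be matched by some move of a
            back = all(any((a2, b2) in rel for a2 in trans.get(a, ()))
                       for b2 in trans.get(b, ()))
            if not (forth and back):
                rel.discard((a, b))
                changed = True
    return (s, t) in rel

# A two-state loop and a one-state loop unfold to the same behaviour,
# while a stuck state does not:
trans = {"a": {"b"}, "b": {"a"}, "c": {"c"}, "d": set()}
states = set(trans)
print(bisimilar(states, trans, "a", "c"))  # True: both can step forever
print(bisimilar(states, trans, "a", "d"))  # False: d has no successors
```

In game terms, the second player wins the bisimulation game on a pair exactly when that pair survives in the final relation.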
Abstract:
In recent years, XML has been widely adopted as a universal format for structured data. A variety of XML-based systems have emerged, most prominently SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This popularity is helped by the excellent support for XML processing in many programming languages and by the variety of XML-based technologies for the more complex needs of applications. Concurrently with this rise of XML, there has also been a qualitative expansion of the Internet's scope: mobile devices are becoming capable enough to be full-fledged members of various distributed systems. Such devices are battery-powered, their network connections are based on wireless technologies, and their processing capabilities are typically much lower than those of stationary computers. This dissertation presents work performed to reconcile these two developments. XML, as a highly redundant text-based format, is not obviously suitable for mobile devices that need to avoid extraneous processing and communication. Furthermore, the protocols and systems commonly used in XML messaging are often designed for fixed networks and may make assumptions that do not hold in wireless environments. This work identifies four areas of improvement in XML messaging systems: the programming interfaces to the system itself and to XML processing, the serialization format used for the messages, and the protocol used to transmit the messages. We show a complete system that improves the overall performance of XML messaging through consideration of these areas. The work is centered on actually implementing the proposals in a form usable on real mobile devices. The experimentation is performed on actual devices and real networks using the messaging system implemented as a part of this work. 
The experimentation is extensive and, due to using several different devices, also provides a glimpse of what the performance of these systems may look like in the future.
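The redundancy of textual XML noted above is easy to demonstrate: repeated element and attribute names make serialized messages far larger than their information content. The snippet below is a hypothetical illustration using only the Python standard library; the element names are invented and generic compression merely stands in for the purpose-built compact serialization formats the dissertation is concerned with.

```python
import xml.etree.ElementTree as ET
import zlib

# Build a small, repetitive message of the kind a messaging system might send.
root = ET.Element("message")
for i in range(50):
    item = ET.SubElement(root, "item", attrib={"id": str(i)})
    item.text = "payload"

serialized = ET.tostring(root, encoding="utf-8")  # textual XML bytes
compressed = zlib.compress(serialized)            # proxy for a compact encoding

# The large gap between the two sizes reflects the redundancy of the
# textual format: tag names and attribute syntax repeat for every item.
print(len(serialized), len(compressed))
```

On a constrained wireless link, that gap translates directly into wasted transmission energy and time, which motivates alternative serialization formats.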
Abstract:
The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and through alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them. Thus, the relative abundances of different labelling patterns contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes utilizing measured constraints on the relative abundances of labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. There are two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns. 
In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites. Thus, mathematically involved non-linear optimization methods that can get stuck in local optima can be avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes from the 13C labelling measurements as possible, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by flow analysis of metabolite fragments in the network. New linear constraints on the fluxes are then derived from the propagated data by applying the techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
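The steady-state balance constraints described above can be written as S v = 0, where S is the stoichiometric matrix (rows: internal metabolites, columns: reactions) and v is the flux vector. The toy network below is invented for illustration: two parallel pathways produce B from A, and the balance constraints alone cannot separate their individual fluxes, which is exactly the indeterminacy that 13C labelling data is used to resolve.

```python
import numpy as np

# Toy network: v1 produces A; v2 and v3 are parallel pathways A -> B;
# v4 consumes B.
#                v1   v2   v3   v4
S = np.array([
    [1.0, -1.0, -1.0,  0.0],   # balance of metabolite A
    [0.0,  1.0,  1.0, -1.0],   # balance of metabolite B
])

# The null space of S spans all steady-state flux vectors: its dimension
# counts the degrees of freedom the balance constraints leave open.
_, sing, Vt = np.linalg.svd(S)
rank = int(np.sum(sing > 1e-10))
null_space = Vt[rank:].T          # columns form a basis of {v : S v = 0}

print(null_space.shape[1])  # 2 free directions: total flux, and the v2/v3 split
```

Each additional independent linear constraint obtained from labelling measurements removes one of these free directions, which is the sense in which the direct approach narrows down the flux solution space.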
Abstract:
Wireless access is expected to play a crucial role in the future of the Internet. The demands of the wireless environment are not always compatible with the assumptions that were made in the era of wired links. At the same time, new services are being invented that take advantage of advances in many areas of technology. These services include the delivery of mass media such as television and radio, Internet phone calls, and video conferencing. The network must be able to deliver these services with acceptable performance and quality to the end user. This thesis presents an experimental study measuring the performance of bulk-data TCP transfers, streaming audio flows, and HTTP transfers which compete for the limited bandwidth of a GPRS/UMTS-like wireless link. The wireless link characteristics are modelled with a wireless network emulator. We analyze how different competing workload types behave with regular TCP and how active queue management, Differentiated Services (DiffServ), and a combination of TCP enhancements affect the performance and the quality of service. We test four link types, including an error-free link and links with different Automatic Repeat reQuest (ARQ) persistency. The analysis consists of comparing the resulting performance in different configurations based on defined metrics. We observed that DiffServ and Random Early Detection (RED) with Explicit Congestion Notification (ECN) are useful, and in some conditions necessary, for quality of service and fairness, because without DiffServ and RED a long queuing delay and congestion-related packet losses cause problems. However, we observed situations where there is still room for significant improvement if the link level is aware of the quality of service. Only a very error-prone link diminishes the benefits to nil. The combination of TCP enhancements improves performance. These enhancements include an initial window of four segments, Control Block Interdependence (CBI) and Forward RTO recovery (F-RTO). 
The initial window of four helps a later-starting TCP flow to start faster but generates congestion under some conditions. CBI prevents slow-start overshoot and balances slow start in the presence of error drops, and F-RTO successfully reduces unnecessary retransmissions.
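The RED mechanism evaluated above can be sketched briefly: the router tracks an exponentially weighted average of the queue length and marks (with ECN) or drops packets with a probability that grows between two thresholds. The thresholds and weight below are textbook-style illustrative values, not the emulator settings used in the thesis.

```python
def red_mark_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    """Classic RED marking/drop probability as a function of the
    average queue length (simplified: no count-based spreading)."""
    if avg_queue < min_th:
        return 0.0          # queue short: never mark
    if avg_queue >= max_th:
        return 1.0          # queue long: always mark/drop
    # linear ramp between the thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def update_avg(avg, instantaneous, weight=0.002):
    """Exponentially weighted moving average of the queue length,
    updated on each packet arrival."""
    return (1 - weight) * avg + weight * instantaneous

print(red_mark_probability(4.0))   # below min threshold
print(red_mark_probability(10.0))  # halfway between thresholds
print(red_mark_probability(20.0))  # above max threshold
```

Because marking begins before the queue is full, TCP senders with ECN are signalled early and back off without packet loss, which is why RED with ECN reduces both queuing delay and congestion-related losses in the experiments.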
Abstract:
This study examines philosophically the main theories and methodological assumptions of the field known as the cognitive science of religion (CSR). The study makes a philosophically informed reconstruction of the methodological principles of the CSR, indicates problems with them, and examines possible solutions to these problems. The study focuses on several different CSR writers, namely Scott Atran, Justin Barrett, Pascal Boyer and Dan Sperber. CSR theorising is done at the intersection of the cognitive sciences, anthropology and evolutionary psychology. This multidisciplinary nature makes CSR a fertile ground for philosophical considerations coming from the philosophy of psychology, philosophy of mind and philosophy of science. The study begins by spelling out the methodological assumptions and auxiliary theories of CSR writers, situating these theories and assumptions in the nexus of existing approaches to religion. The distinctive feature of CSR is its emphasis on information processing: CSR writers claim that the contemporary cognitive sciences can inform anthropological theorising about the human mind and offer tools for producing causal explanations. Further, they claim to explain the prevalence and persistence of religion by the cognitive systems that undergird religious thinking. I also examine the core theoretical contributions of the field, focusing mainly on (1) the “minimal counter-intuitiveness” hypothesis and (2) the different ways in which supernatural agent representations activate our cognitive systems. Generally speaking, CSR writers argue for the naturalness of religion: religious ideas and practices are widespread and pervasive because human cognition operates in such a way that religious ideas are easy to acquire and transmit. The study raises two philosophical problems, namely the “problem of scope” and the “problem of religious relevance”. 
The problem of scope is created by the insistence of several critics of the CSR that CSR explanations are mostly irrelevant for explaining religion, whereas most CSR writers themselves hold that cognitive explanations can answer most of our questions about religion. I argue that the problem of scope is created by differences in explanation-begging questions: the former group is interested in explaining different things than the latter group. I propose that we should not stick too rigidly to one set of methodological assumptions, but rather acknowledge that different assumptions might help us to answer different questions about religion. Instead of adhering to some robust metaphysics, as some strongly naturalistic writers argue, we should adopt a pragmatic and explanatorily pluralist approach which would allow different kinds of methodological presuppositions in the study of religion, provided that they attempt to answer different kinds of why-questions, since religion appears to be a multi-faceted phenomenon that spans a variety of fields of the special sciences. The problem of religious relevance is created by the insistence of some writers that CSR theories show religious beliefs to be false or irrational, whereas others invoke CSR theories to defend certain religious ideas. The problem is interesting because it reveals the more general philosophical assumptions of those who make such interpretations. CSR theories can be (and have been) interpreted in terms of three different philosophical frameworks: strict naturalism, broad naturalism and theism. I argue that CSR theories can be interpreted inside all three frameworks without doing violence to the theories, and that these frameworks give different kinds of results regarding the religious relevance of CSR theories.
Abstract:
What is a miracle and what can we know about miracles? A discussion of miracles in anglophone philosophy of religion literature since the late 1960s. The aim of this study is to systematically describe and philosophically examine the anglophone discussion on the subject of miracles since the latter half of the 1960s. The study focuses on two salient questions: firstly, what I will term the conceptual-ontological question of the extent to which we can understand miracles and, secondly, the epistemological question of what we can know about miracles. My main purpose in this study is to examine the various viewpoints that have been submitted in relation to these questions, how they have been argued, and on what presuppositions these arguments have been based. In conducting the study, the most salient dimension of the various discussions was found to relate to epistemological questions. In this regard, there was a notable confrontation between those scholars who accept miracles and those who are sceptical of them. On the conceptual-ontological side I recognised several different ways of expressing the concept of 'miracle'. I systematised the discussion by demonstrating the philosophical boundaries between these various opinions. The first and main boundary was related to ontological knowledge. On one side of this boundary I placed the views which were based on realism and objectivism. The proponents of this view assumed that miraculousness is a real property of a miraculous event regardless of how we can perceive it. On the other side I put the views which tried to define miraculousness in terms of subjectivity, contextuality and epistemicity. Another essential boundary which shed light on the conceptual-ontological discussion was drawn in relation to two main views of nature. The realistic-particularistic view regards nature as a certain part of reality. The adherents of this presupposition postulate a supernatural sphere alongside nature. 
Alternatively, the nominalist-universalist view understands nature without this kind of division: nature is understood as the entire and infinite universe, the whole of reality. Other, less important boundaries which shed light on the conceptual-ontological discussion were noted in relation to views regarding the laws of nature, for example. I recognised that the most important differences between the epistemological approaches lay in differing views of justification, rationality, truth and science. The epistemological discussion was divided into two sides, distinguished by their differing assumptions in relation to the need for evidence. Adherents of the first (and noticeably smaller) group did not see any epistemological need to reach a universal and common opinion about miracles. I discovered that these kinds of views, which I called non-objectivist, had subjectivist and so-called collectivist views of justification and a contextualist view of rationality. The second (and larger) group was mainly interested in discerning the grounds upon which to establish an objective and conclusive common view in relation to the epistemology of miracles. I called this kind of discussion an objectivist discussion and this kind of approach an evidentialist approach. Most of the evidentialists tried to defend miracles, and the others attempted to offer evidence against miracles. On both sides there were many variations, differing in emphasis and in their assumptions about the possibility of proving their own view. The common characteristic in all forms of evidentialism was a commitment to an objectivist notion of rationality and a universalistic notion of justification. Most evidentialists put their confidence in science in one way or another. 
Only a couple of philosophers represented the most moderate version of evidentialism; they tried to remove themselves from the apparent controversy and contextualised the different opinions in order to make some critical comments on them. I called this kind of approach a contextualising form of evidentialism. In the final part of the epistemological chapter, I examined the discussion about the evidential value of miracles, but nothing substantially new was discovered concerning the epistemological views of the authors.
Abstract:
The purpose of this dissertation is to analyze and explicate the ideological content, which is often implicit, in the health care rationing discussion. The phrase "ideological content" refers to viewpoints and assumptions expressed in the rationing discussion that may be widespread and accepted, but without clear evidential support. The study method is philosophical text analysis. The study begins by exploring the literature from the 1970s that affects the present-day rationing discussion. Since ideological contents may have different emphases in the realm of health care, three representative cases were studied. The first was a case study of the first and best-known rationing experiment in the American state of Oregon, namely, an experimental rationing plan within the public health program Medicaid, which is designed to provide care for the poor and underprivileged. The second was a study of the only national-level public priority setting that has been conducted in New Zealand. The third examined the Finnish Care Guarantee plan introduced in March 2005. The findings show that several problematic and scientifically mostly unproven concepts have remained largely uncontested in the debate about public health care rationing. Some of these notions originated decades ago in studies that relied on outdated data or research paradigms. The problematic ideological contents have also been carried from one publication into another, thereby affecting the rationing debate. The study suggests that before any new public health care rationing experiments are undertaken, these ideological factors should be properly examined, especially in order to avoid repetitious research and perhaps erroneous rationing decisions.
Abstract:
This work combines the cognitive theory of folk-theoretical thought with the classical Aristotelian theory of artistic proof in rhetoric. The first half of the work discusses the common ground shared by the elements of artistic proof (logos, pathos, ethos) and the elements of folk-theoretical thought (naïve physics, folk biology, folk psychology, naïve sociology). Combining rhetoric with the cognitive theory of folk-theoretical thought creates a new point of view for argumentation analysis. The logos of an argument can be understood as the inferential relations established between the different parts of an argument. Consequently, within this study the analysis of logos is to be viewed as the analysis of the inferential folk-theoretical elements that make the suggested factual states of things appear plausible within given argumentative structures. The pathos of an argumentative structure can be understood as determining the quality of the argumentation in question, in the sense that emotive elements play a great part in what can be called the distinction between good and deceptive rhetoric. In the context of this study the analysis of pathos is to be viewed as the analysis of the emotive content of argumentative structures and of whether they aim at facilitating surface-level or deep cognitive elaboration of the suggested matters. The ethos of an argumentative structure means both the speaker-presentation and the audience-construct that can be discerned within a body of argumentation. In the context of this study, the analysis of ethos is to be understood as the analysis of mutually manifest cognitive environments in the context of argumentation. The theory is used to analyse Catholic Internet discussion concerning cloning. The discussion is divided into six themes: Human Dignity, Sacred Family, Exploitation / Dehumanisation, Playing God, Monsters and Horror Scenarios, and Ensoulment. 
Each theme is analysed for both the rhetorical and the cognitive elements that can be seen to create persuasive force within the argumentative structures presented. It is apparent that the Catholic voices on the Internet extensively oppose cloning. More often than not, these voices utilise rhetoric that is aggressive and pejorative. Furthermore, deceptive rhetoric (in the sense presented above) plays a great part in the argumentative structures of the Catholic voices. The theory of folk-theoretical thought can be seen as a useful tool for analysing the possible reasons why the Catholic speakers think about cloning, and choose to present cloning in their argumentation, as they do. The logos utilised in the argumentative structures presented can usually be viewed as based on folk-theoretical inference concerning biology and psychology. The structures of pathos utilised generally appear to aim at generating fear appeals in the assumed audiences, often incorporating counter-intuitive elements. The ethos utilised in the arguments generally revolves around Christian mythology and issues of social responsibility. These structures can also be viewed from the point of view of folk psychology and naïve sociological assumptions.
Abstract:
According to some scientists, it is not useful to integrate ethics into research practices. Their claim is that only unethical persons have ethical problems, and that we must therefore accept ethical misbehaviour as a phenomenon typical of human society. The present study questions the argument that the moral personality of scientists explains ethical problems in science; in addition, it shifts the focus from individuals to the level of the research environment. The question asked is whether the research environment somehow contributes to research ethics violations. To answer this question, the focus was turned towards the norms of the research environment. The aim of the study was to investigate whether or not these norms are consistent with the norms of research ethics, in order to evaluate whether the research environment supports scientists in their task of meeting the ethical standards of scientific research. In the study, the research environment was examined in three parts. The first deals with society, especially Finnish society, as a research environment. The second deals with the autonomous science institution as a research environment, while the third deals with scientific society (working according to scientific criteria) as a research environment. The method of conceptual analysis was used: various normative arguments were analysed, the primary assumptions behind them were identified, and the acceptability of the normative claims was evaluated according to their consistency. The results of the study do not support the claim that ethical violations in science can be satisfactorily explained by referring only to the personal qualities of scientists. The research environment can limit the freedom to follow the ethical principles of science; it can prevent scientists from handling ethical problems openly and from integrating ethical norms effectively into research practices.
The norms of the research environment are often implicit but nevertheless influence scientific practices. Further, the results indicate that the handling of ethical questions should be a part of scientific training.
Abstract:
In this research, the cooperation between Finnish municipalities and Evangelical Lutheran parishes is studied from the standpoint of institutional interaction. The most essential theoretical background for the study is the differentiation thesis of secularization theory. Cooperation is examined from the viewpoints of both organizations using the functional approach. Furthermore, market theory and other theories are applied in order to place the studied phenomenon in the wider context of the sociology of religion. Sacralization in modern society, and its relationship with the differentiation thesis of secularization theory, form the theoretical foci. In addition, alongside a descriptive examination of cooperation, the normative sides of the phenomenon are discussed. The survey was conducted among all municipalities and parishes in continental Finland. The questionnaires were sent to all municipal managers of youth work and afternoon activities and to all managers of child, youth and social work in the parishes. The response rate was 73.9% for the municipalities and 69.5% for the parishes. In addition, two qualitative data sets were utilized. The aim of the study is to scrutinize what kinds of limitations of differentiation can be caused by interaction between the secular and the religious. Solving this problem requires an empirical study of sacralization in the modern context. For this purpose, the survey was carried out to determine the effects of the religious on the secular and the impact of the secular on the religious. In the articles of the study, the following relationships are discussed: the positions of municipalities and parishes in relation to the state and civil society; cooperation in relation to differentiation; sacralization in relation to the differentiation thesis; and cooperation in relation to pluralism.
The results of the study highlighted the significance of the cooperation, which, contrary to secularization theory, was connected to religious sacralization. In the municipalities, acceptance of the appearance of religion in cooperation, and of the parishes' support for municipal functions, was high. Religious cooperation was more active than secular cooperation within all fields. This was also true between fields: religiously orientated child work was more active than the societally orientated social work of the church. Religious cooperation in modern fields of activity underlined sacralization. However, the acceptance of sacralization was weaker in cities than in rural areas. Positive relationships between the welfare function of municipalities and the religious function of parishes emphasized the incompleteness of differentiation and the importance of sacralization. The relationship of the function of municipalities with the parishes was neither negative nor neutral. Thus, in the most active fields, that is, child work and the traditional social work of the church, the orientation of the parishes in cooperation supported the functions of both organizations. In the more passive fields, that is, youth work and the societal social work of the church, the parishes were orientated towards supporting the municipal function. The orientation of municipalities towards religion underlined the perception that the religious function is necessary for cooperation. However, the official character of cooperation supported accommodation to the requirements of societal pluralism. According to the results, sacralization can be effective at the institutional level as well. The religious effect of voluntary cooperation means that religious sacralization can also readjust to modern society. At the same time, the results of the study stressed the importance of institutional autonomy. Thus, the public sector has a central role in successful cooperation.
The conditions of cooperation are weakened if there is no official support for cooperation or no adjustment to the individual rights of modern society. The results called into question the one-directional assumptions of the secularization paradigm and of the modernization theory behind it. In these assumptions, religion, representing the traditional, is seen to give way to the modern, especially at the institutional level. The lack of an interactional view was identified as a central weakness of the secularization paradigm. In the theoretical approach created in the study, an interactional view between religious and secular institutions was made possible by limiting the core of the differentiation thesis to autonomy. The counterforces of differentiation are despecialization and sacralization. These changes to secularization theory bring about new interactivity at the institutional level. In addition to the interactional approach, that is, the secularization-and-sacralization theory created as a synthesis of the study, interaction between the religious and the secular is discussed from the standpoint of multiple modernities. The spiritual welfare role of religion is seen as a potential supporter of secular institutions. Religion is set theoretically amongst other ideologies and agents that can create communal bonds in modern society. Key words: cooperation, municipalities, parishes, sacralization, secularization, modernization, multiple modernities, differentiation, interaction, democracy, secularism, pluralism, civil society
Abstract:
This work presents a modified theoretical approach to the study of centre-periphery relations in the Russian Federation. In the widely accepted scientific discourse, the Russian federal system under the Yeltsin Administration (1991-2000) was asymmetrical, largely owing to the varying amounts of structural autonomy distributed among the federation's 89 constituent units. While such work has improved our understanding of which political and socio-economic structures contributed to federal asymmetry, the associated large-N studies have underemphasised the role played by actor agency in reshaping Russian federal institutions. The main task of this thesis is to reintroduce and re-emphasise the importance of actor agency as a major contributing element of institutional change in the Russian federal system. By focusing on the strategic agency of regional elites simultaneously within regional and federal contexts, the thesis adopts the position that political, ethnic and socio-economic structural factors alone cannot fully determine the extent to which regional leaders were successful in their pursuit of economic and political pay-offs from the institutionally weakened federal centre. Furthermore, this work hypothesises that under conditions of federal institutional uncertainty, it is the ability of regional leaders to simultaneously interpret various mutable structural conditions and translate them into plausible strategies that accounts for the regions' ability to extract variable amounts of economic and political pay-offs from the Russian federal system. The thesis finds that while the hypothesis is accurate in its theoretical assumptions, several key conclusions open paths for further inquiry beyond the initial research question.
First, without reliable information or stable institutions to guide their actions, both regional and federal elites were forced into ad hoc decision-making in order to maintain their core strategic focus: political survival. Second, instead of attributing asymmetry exclusively to either actor agency or structural factors, the empirical data show that agency and structures interact symbiotically in the process of strategy formulation, which accounts for the sub-optimal nature of several of the actions taken in the adopted cases. Third, as actor agency and structural factors mutate over time, so, too, do the perceived pay-offs from elite competition. In the case of the Russian federal system, the stronger the federal centre became, the less likely it was that regional leaders could extract the high degree of economic and political pay-offs that they had clamoured for earlier in the Yeltsin period. Finally, traditional approaches to the study of federal systems, which focus on institutions as measures of federalism, are not fully applicable in the Russian case precisely because the institutions themselves were a secondary point of contention between competing elites. Institutional equilibria between the regions and Moscow were struck only when highly personalised elite preferences were satisfied. The Russian federal system is therefore the product of short-term institutional solutions suited to elite survival strategies developed under conditions of economic, political and social uncertainty.