21 results for implicit theories
in Helda - Digital Repository of the University of Helsinki
Abstract:
This study examines both theoretically and empirically how well the theories of Norman Holland, David Bleich, Wolfgang Iser and Stanley Fish can explain readers' interpretations of literary texts. The theoretical analysis concentrates on their views on language from the point of view of Wittgenstein's Philosophical Investigations. This analysis shows that many of the assumptions related to language in these theories are problematic. The empirical data show that readers often form very similar interpretations. Thus the study challenges the common assumption that literary interpretations tend to be idiosyncratic. The empirical data consist of freely worded written answers to questions on three short stories. The interpretations were made by 27 Finnish university students. Some of the questions addressed issues that were discussed in large parts of the texts; others referred to issues that were mentioned only in passing or implied. The short stories were "The Witch à la Mode" by D. H. Lawrence, "Rain in the Heart" by Peter Taylor and "The Hitchhiking Game" by Milan Kundera. According to Fish, readers create both the formal features of a text and their interpretation of it according to an interpretive strategy. People who agree form an interpretive community. However, a typical answer usually contains ideas repeated by several readers as well as observations not mentioned by anyone else. It is therefore very difficult to determine which readers belong to the same interpretive community. Moreover, readers with opposing opinions often seem to pay attention to the same textual features and even acknowledge the possibility of an opposing interpretation; they do not, therefore, seem to create the formal features of the text in different ways. Iser suggests that an interpretation emerges from the interaction between the text and the reader when the reader determines the implications of the text and in this way fills the "gaps" in the text.
Iser believes that the text guides the reader, but as he also believes that meaning is on a level beyond words, he cannot explain how the text directs the reader. The similarity in the interpretations, and the fact that agreement is strongest on issues that are discussed broadly in the text, do however support his assumption that readers are guided by the text. In Bleich's view, all interpretations have personal motives and each person has an idiosyncratic language system. The situation in which a person learns a word determines the most important meaning it has for that person. In order to uncover the personal etymologies of words, Bleich asks his readers to associate freely on the basis of a text and note down all the personal memories and feelings that the reading experience evokes. Bleich's theory of the idiosyncratic language system seems to rely on a misconceived notion of the role that ostensive definitions have in language use. The readers' responses show that spontaneous associations to personal life seem to colour the readers' interpretations, but such instances are rather rare. According to Holland, an interpretation reflects the reader's identity theme. Language use is regulated by shared rules, but everyone follows the rules in his or her own way. Words mean different things to different people. The problem with this view is that if there is any basis for language use, it seems to be the shared way of following linguistic rules. Wittgenstein suggests that our understanding of words is related to the shared ways of using words and to our understanding of human behaviour. This view seems to give better grounds for understanding similarities and differences in literary interpretations than the theories of Holland, Bleich, Fish and Iser.
Abstract:
Music as the Art of Anxiety: A Philosophical Approach to the Existential-Ontological Meaning of Music. The present research studies music as an art of anxiety from the points of view of both Martin Heidegger's thought and phenomenological philosophy in general. In the Heideggerian perspective, anxiety is understood as a fundamental mode of being (Grundbefindlichkeit) in human existence. Taken as an existential-ontological concept, anxiety is conceived philosophically, not psychologically. The central research questions are: What is the relationship between music and existential-ontological anxiety? In what way can music be considered an art of anxiety? In thinking of music as a channel and manifestation of anxiety, what makes it a special case? What are the possible applications of phenomenology and Heideggerian thought in musicology? The main aim of the research is to develop a theory of music as an art of existential-ontological anxiety and to apply this theory to musicologically relevant phenomena. Furthermore, the research contributes to contemporary musicological debates and research as it aims to outline the phenomenological study of music as a field of its own; the development of a specific methodology is implicit in these aims. The main subject of the study, a theory of music as an art of anxiety, integrates Heideggerian and phenomenological philosophies with critical and cultural theories concerning violence, social sacrifice and mimetic desire (René Girard); music, noise and society (Jacques Attali); and the affect-based "charme" of music (Vladimir Jankélévitch). Thus, in addition to the subjective mood (Stimmung) of emptiness and meaninglessness, the philosophical concept of anxiety also refers to a state of disorder and chaos in general; for instance, to noise in the realm of sound and to total (social) violence at the level of society.
In this study, music is approached as conveying the existentially crucial human compulsion for signifying, i.e., organizing chaos. In music, this happens primarily at the immediate level of experience, i.e. in affectivity, and also in relation to all of the aforementioned dimensions (sound, society, consciousness, and so on). Thus, music's existential-ontological meaning in human existence, Dasein, lies in its ability to reveal different orders of existence as such. Indeed, this is what makes music the art of anxiety: more precisely, music can be existentially significant at the level of moods. The study proceeds from outlining the relevance of phenomenology and Heidegger's philosophy for musicology to the philosophical development of a theory of music as the art of anxiety. The theory is developed further through the study of three selected musical phenomena: the concept of a musical work, guitar smashing in the performance tradition of rock music, and Erik Bergman's orchestral work Colori ed improvvisazioni. The first example illustrates the level of the individual human subject in music as the art of anxiety, as a means of signifying chaos, while the second example focuses on the collective need to channel violence socio-culturally. The third example, being music-analytical, studies contemporary music's ability to mirror the structures of anxiety at the level of a specific musical text. The selected examples illustrate that, in addition to its philosophical orientation, the research also contributes to music analysis, popular music studies, and the cultural-critical study of music. Key words: music, anxiety, phenomenology, Martin Heidegger, ontology, guitar smashing, Erik Bergman, musical work, affectivity, Stimmung, René Girard
Abstract:
This study concentrates on the contested concept of pastiche in literary studies. It offers the first detailed examination of the history of the concept from its origins in the seventeenth century to the present, showing how pastiche emerged as a critical concept in interaction with the emerging conception of authorial originality and the copyright laws protecting it. One of the key results of this investigation is the contextualisation of the postmodern debate on pastiche. Even though postmodern critics often emphasise the radical novelty of pastiche, they in fact resuscitate older positions and arguments without necessarily reflecting on their historical conditions. This historical background is then used to analyse the distinction between the primarily French conception of pastiche as the imitation of style and the postmodern notion of it as the compilation of different elements. The latter's vagueness and inclusiveness detract from its value as a critical concept. The study thus concentrates on the notion of stylistic pastiche, challenging the widespread prejudice that it is merely an indication of lack of talent. Because it is multiply based on repetition, pastiche is in fact a highly ambiguous or double-edged practice that calls into question the distinction between repetition and original, thereby undermining the received notion of individual unique authorship as a fundamental aesthetic value. Pastiche does not, however, constitute a radical upheaval of the basic assumptions on which the present institution of literature relies, since, in order to mark its difference, pastiche always refers to a source outside itself against which its difference is measured. Finally, the theoretical analysis of pastiche is applied to literary works.
The pastiches written by Marcel Proust demonstrate how pastiche can become an integral part of a writer's poetics: imitation of style is shown to provide Proust with a way of exploring the role of style as a connecting point between inner vision and reality. The pastiches of the Sherlock Holmes stories by Michael Dibdin, Nicholas Meyer and the duo Adrian Conan Doyle and John Dickson Carr illustrate the functions of pastiche within a genre, detective fiction, that is itself fundamentally repetitive. A. S. Byatt's Possession and D. M. Thomas's Charlotte use Victorian pastiches to investigate the conditions of literary creation in an age of postmodern suspicion of creativity and individuality. The study thus argues that the concept of pastiche has valuable insights to offer to literary criticism and theory, and that literary pastiches, though often dismissed in reviews and criticism, are a particularly interesting object of study precisely because of their characteristic ambiguity.
Abstract:
Design embraces several disciplines dedicated to the production of artifacts and services. These disciplines are quite independent, and only recently has psychological interest focused on them. Nowadays, the psychological theories of design, also called the design cognition literature, describe the design process from the information processing viewpoint. These models co-exist with normative standards of how designs should be crafted, and in many places there are concrete discrepancies between the two, in a way that resembles the difference between actual and ideal decision-making. This study aimed to explore one such possible discrepancy, related to problem decomposition. Decomposition is a standard component of human problem-solving models and is also included in the normative models of design. The idea of decomposition is to focus on a single aspect of the problem at a time. Despite its significance, the nature of decomposition in conceptual design is poorly understood and has only been investigated in a preliminary way. This study addressed the status of decomposition in the conceptual design of products using protocol analysis. Previous empirical investigations have argued that there is implicit and explicit decomposition, but have not provided a theoretical basis for the two. Therefore, the current research began by reviewing the problem-solving and design literature and then composing a cognitive model of the solution search of conceptual design. The result is a synthetic view which describes recognition and decomposition as the basic schemata for conceptual design. A psychological experiment was conducted to explore decomposition. In the test, sixteen (N=16) senior students of mechanical engineering created concepts for two alternative tasks. The concurrent think-aloud method and protocol analysis were used to study decomposition.
The results showed that despite the emphasis on decomposition in formal education, only a few designers (N=3) used decomposition explicitly and spontaneously in the presented tasks, although the designers in general applied a top-down control strategy. Instead, judging from their use of structured strategies, the designers consistently relied on implicit decomposition. These results confirm the initial observations found in the literature, but they also suggest that decomposition should be investigated further. In the future, the benefits and possibilities of explicit decomposition should be considered along with the cognitive mechanisms behind decomposition. After that, the current results could be reinterpreted.
Abstract:
Purpose This study approached craft from the standpoint of phenomenological philosophy, interpreting craft through Maurice Merleau-Ponty's phenomenology of the body. The main focus was the physical phase of the craft process, in which a product is made from material. The aim was to interpret corporality in craft. There is no previous research in craft science focusing on the lived body. Physical, bodily making is inalienable in craft, yet it remains unarticulated. Recent discussion has concentrated on craft as a "whole", which emphasizes the design part of the process, so that craft becomes conceptualized through the theories of art and design. The axiomatic yet silenced basis of craft, corporality, deserves to be examined as well. That is why this study answers the questions: how does craft manifest itself in the light of the phenomenology of the body, and what is corporality in craft? Methods In this study I cultivated a phenomenological attitude and turned my exploring eye on craft "in itself". In addition, I refrained from mere making and positioned myself as an observer of the occurrence of craft in order to describe it verbally. I studied Maurice Merleau-Ponty's phenomenology of the body through his principal work (2002) and earlier interpretations of it. The interpretation and understanding of the textual data were based on Gadamer's hermeneutics, and the four-part composition of the study followed Koski's (1995) version of the Gadamerian process of textual interpretation. Conclusions Within the framework of bodily phenomenology, craft is to be contemplated as a mutual relationship between the maker and the world that materializes in bodily making. At the moment of making, the human being becomes one with his or her craft, and the connection between the maker, the material and the equipment appears as communication. An operational dimension was distinctive in the intentionality of craft, which operates in many ways, including in craft products.
The synesthesia and synergy of craft were emphasized, and through them craft as a bodily practice came to life. The moment of making appeared as a situation generating time and space, in which throwing oneself into making may give the maker an experience of rising above the dualism of mind and body. The conception of the implicit nature of craft knowledge was strengthened. In the light of this interpretation it was possible to conceptualize craft as a performance and making "in itself" as a work of art. In that case craft appeared as bodily expression, which as an experience approaches art without ultimately being art. The concept of the aesthetic was likewise situated in making. The bodily and phenomenological viewpoint on craft provided material for critically examining the concept of "whole craft" (kokonainen käsityö) and offered a different kind of understanding of craft as making.
Abstract:
Our present-day understanding of the fundamental constituents of matter and their interactions is based on the Standard Model of particle physics, which relies on quantum gauge field theories. On the other hand, the large-scale dynamical behaviour of spacetime is understood via Einstein's general theory of relativity. The merging of these two complementary aspects of nature, quantum and gravity, is one of the greatest goals of modern fundamental physics. Achieving it would help us understand the short-distance structure of spacetime, thus shedding light on the events in the singular states of general relativity, such as black holes and the Big Bang, where our current models of nature break down. The formulation of quantum field theories in noncommutative spacetime is an attempt to realize the idea of nonlocality at short distances, which our present understanding of these different aspects of nature suggests, and consequently to find testable hints of the underlying quantum behaviour of spacetime. The formulation of noncommutative theories encounters various unprecedented problems, which derive from their peculiar inherent nonlocality. Arguably the most serious of these is the so-called UV/IR mixing, which makes the derivation of observable predictions especially hard by causing new, troublesome divergences to which our previously well-developed renormalization methods for quantum field theories do not apply. In the thesis I review the basic mathematical concepts of noncommutative spacetime, different formulations of quantum field theories in this context, and the theoretical understanding of UV/IR mixing. In particular, I put forward new results, to be published, which show that quantum electrodynamics in noncommutative spacetime defined via the Seiberg-Witten map also suffers from UV/IR mixing. Finally, I review some of the most promising ways to overcome the problem. The final solution remains a challenge for the future.
Abstract:
Democratic Legitimacy and the Politics of Rights is a study in normative political theory, based on a comparative analysis of contemporary democratic theories, classified roughly as conventional liberal, deliberative democratic and radical democratic. Its focus is on the conceptual relationship between alternative sources of democratic legitimacy: democratic inclusion and liberal rights. The relationship between rights and democracy is studied through the following questions: Are rights to be seen as external constraints on democracy or as objects of democratic decision-making processes? Are individual rights threatened by public participation in politics; do constitutionally protected rights limit the inclusiveness of democratic processes? Are liberal values such as individuality, autonomy and liberty, and democratic values such as equality, inclusion and popular sovereignty, mutually conflicting or supportive? Analyzing the feminist critique of liberal discourse, the dissertation also raises the question of Enlightenment ideals in current political debates: are the universal norms of liberal democracy inherently dependent on the rationalist grand narratives of modernity and incompatible with the ideal of diversity? Part I of the thesis introduces the sources of democratic legitimacy as presented in the alternative democratic models. Part II analyses how the relationship between rights and democracy is theorized in them. Part III presents arguments by feminists and radical democrats against the tenets of universalist liberal democratic models and responds to that critique by partly endorsing, partly rejecting it. The central argument of the thesis is that while the deconstruction of modern rationalism indicates that rights are political constructions, as opposed to externally given moral constraints on politics, this insight does not delegitimize the politics of universal rights as an inherent part of democratic institutions.
The research indicates that democracy and universal individual rights are mutually interdependent rather than oppositional, and that democracy is more dependent on the unconditional protection of universal individual rights when it is conceived as inclusive, participatory and plural, as opposed to robust majoritarian rule. The central concepts are: liberalism, democracy, legitimacy, deliberation, inclusion, equality, diversity, conflict, public sphere, rights, individualism, universalism and contextuality. The authors discussed include John Rawls, Jürgen Habermas, Seyla Benhabib, Iris Young, Chantal Mouffe and Stephen Holmes. The research focuses on contemporary political theory, but the more classical work of John S. Mill, Benjamin Constant, Isaiah Berlin and Hannah Arendt is also included.
Abstract:
Einstein's general relativity is a classical theory of gravitation: it is a postulate on the coupling between the four-dimensional, continuous spacetime and the matter fields in the universe, and it yields their dynamical evolution. It is believed that general relativity must be replaced by a quantum theory of gravity at least at the extremely high energies of the early universe and in regions of strong spacetime curvature, such as black holes. Various attempts to quantize gravity, including conceptually new models such as string theory, have suggested that modifications to general relativity might show up even at lower energy scales. On the other hand, the late-time acceleration of the expansion of the universe, known as the dark energy problem, might also originate from new gravitational physics. Thus, although there has so far been no direct experimental evidence contradicting general relativity - on the contrary, it has passed a variety of observational tests - it is a question worth asking why the effective theory of gravity should be of the exact form of general relativity. If general relativity is modified, how do the predictions of the theory change? Furthermore, how far can we go with the changes before we are faced with contradictions with the experiments? Along with the changes, could there be new phenomena which we could measure to find hints of the form of the quantum theory of gravity? This thesis is on a class of modified gravity theories called f(R) models, and in particular on the effects of changing the theory of gravity on stellar solutions. It is discussed how experimental constraints from measurements in the Solar System restrict the form of f(R) theories. Moreover, it is shown that models which do not differ from general relativity at the weak-field scale of the Solar System can produce very different predictions for dense stars such as neutron stars.
Due to the nature of f(R) models, the role of the independent connection of spacetime is emphasized throughout the thesis.
Abstract:
Arguments arising from quantum mechanics and gravitation theory, as well as from string theory, indicate that the description of space-time as a continuous manifold is not adequate at very short distances. An important candidate for the description of space-time at such scales is provided by noncommutative space-time, where the coordinates are promoted to noncommuting operators. Thus, the study of quantum field theory in noncommutative space-time provides an interesting interface where ordinary field-theoretic tools can be used to study the properties of quantum spacetime. The three original publications in this thesis encompass various aspects of the still-developing area of noncommutative quantum field theory, ranging from fundamental concepts to model building. One of the key features of noncommutative space-time is the apparent loss of Lorentz invariance, which has been addressed in different ways in the literature. One recently developed approach is to eliminate the Lorentz-violating effects by integrating over the parameter of noncommutativity. Fundamental properties of such theories are investigated in this thesis. Another issue addressed is model building, which is difficult in the noncommutative setting due to severe restrictions on the possible gauge symmetries imposed by the noncommutativity of the space-time. Possible ways to relieve these restrictions are investigated and applied, and a noncommutative version of the Minimal Supersymmetric Standard Model is presented. While putting the results obtained in the three original publications into their proper context, the introductory part of this thesis aims to provide an overview of the present situation in the field.
Local numerical modelling of magnetoconvection and turbulence - implications for mean-field theories
Abstract:
During the last decades mean-field models, in which large-scale magnetic fields and differential rotation arise due to the interaction of rotation and small-scale turbulence, have been enormously successful in reproducing many of the observed features of the Sun. In the meantime, new observational techniques, most prominently helioseismology, have yielded invaluable information about the interior of the Sun. This new information, however, imposes strict conditions on mean-field models. Moreover, most present mean-field models depend on knowledge of the small-scale turbulent effects that give rise to the large-scale phenomena, and in many of them these effects are prescribed in an ad hoc fashion for lack of such knowledge. With large enough computers it would be possible to solve the MHD equations numerically under stellar conditions. However, the problem is too large by several orders of magnitude for present-day and any foreseeable computers. In our view, a combination of mean-field modelling and local 3D calculations is a more fruitful approach. The large-scale structures are well described by global mean-field models, provided that the small-scale turbulent effects are adequately parameterized. The latter can be achieved by performing local calculations, which allow a much higher spatial resolution than can be achieved in direct global calculations. In the present dissertation three aspects of mean-field theories and models of stars are studied. Firstly, the basic assumptions of different mean-field theories are tested with calculations of isotropic turbulence and of hydrodynamic, as well as magnetohydrodynamic, convection. Secondly, even if mean-field theory is unable to give the required transport coefficients from first principles, it is in some cases possible to compute these coefficients from 3D numerical models in a parameter range that can be considered to describe the main physical effects in an adequately realistic manner.
In the present study, the Reynolds stresses and turbulent heat transport responsible for the generation of differential rotation were determined, alongside the mixing length relations describing convection in stellar structure models. Furthermore, the alpha-effect and magnetic pumping due to turbulent convection in the rapid rotation regime were studied. The third aim of the present study is to apply the local results in mean-field models, a task we begin to undertake by applying the results concerning the alpha-effect and turbulent pumping in mean-field models describing the solar dynamo.
Abstract:
Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, in which some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high-precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by performing lattice simulations in EQCD. We measure both flavor singlet (diagonal) and non-singlet (off-diagonal) quark number susceptibilities. The finite chemical potential results are obtained using analytic continuation. The diagonal susceptibility approaches the perturbative result above 20 T_c, but below that temperature we observe significant deviations. The results agree well with 4d lattice data down to temperatures of 2 T_c.
Abstract:
The interaction of environmental experts has been studied on the agora (the marketplace of antiquity), a public space where markets, politics, science and society meet. The study belongs to the field of social-scientific environmental research, but it also draws on futures studies. The work has been motivated by the author's multidisciplinary background as both a social scientist and a natural scientist. How and why is interaction between different experts challenging and meaningful, for example in preventing the loss of forest biodiversity? Key concepts are expertise, interaction, the reliability of knowledge and context-dependence. The dissertation consists of four stories about expertise. The first (Chapter 2) is based on interviews on Finnish and German bio- and social scientists' conceptions of nature and the environment. The research problem concerns the "cultural differences" between natural and social scientists in Finland and Germany in conceptualizing nature and the environment. The conclusion is that perceived nature, the surrounding environment and the human-modified living environment recognize no clear disciplinary or national borders. This chapter serves as a springboard past these constructions to the challenges of interaction. The second story (Chapter 3) is based on interviews about the interaction of Finnish forest biodiversity experts. The starting point of the research problem is the loss of forest biodiversity and the interaction situations, including political ones, that follow from it. How does context affect the interaction of different experts, and what follows from this? The main result of the analysis is the need for, and the resource offered by, implicit, strongly context-dependent expert knowledge in preventing the loss of forest biodiversity. The third story about expertise (Chapter 4) is based on observations made in the Forest Protection Committee for Southern Finland (Metso). The researcher has thus herself been an observer on a kind of agora.
The research problem concerns "talking past one another", the reliability of knowledge and the acceptability of implicit knowledge. The conclusion is that expertise is strongly bound to the context of time and place, and that a common language (cf. transdisciplinarity) must be found in order to reach a common goal. Important tools for successful interaction include a strong shared goal, interaction that concerns the people present rather than institutions, and a strong role for a facilitator as interpreter and mediator. The fourth story (Chapter 5) takes the agora to the concrete level. This chapter develops an empathetic-walk method, in which the facilitator (the researcher) leads representatives of administration, politics, residents and a consultant in the Espoo city centre to sense and interpret the social space, functionality and experiential quality of the area. The problem is that sensory expertise goes unused as a tool in urban planning, including as a tool for expert interaction. The development of the method is at an early stage, but already this case shows that shared space and shared sensory experiences on a concrete walk open interaction to new dimensions, in which implicit expertise is given its place and through which the resulting decisions and measures can also be influenced. The conclusions (Chapter 6) emphasize the significance of implicit expertise. Successful interactive work among different experts, for example when solving and reflecting on various environmental problems and phenomena, requires interaction skills. The study concludes by recommending, for example, open-minded initiatives on the agora. Expertise is not one single thing; only expertise in particular matters is possible. The agora is in constant motion, and precisely therein lies a resource for meeting future challenges at various interfaces. Key words: expertise, interaction, knowledge, context, agora
Abstract:
This study examines different ways in which the concept of media pluralism has been theorized and used in contemporary media policy debates. Access to a broad range of different political views and cultural expressions is often regarded as a self-evident value in both theoretical and political debates on media and democracy. Opinions on the meaning and nature of media pluralism as a theoretical, political or empirical concept, however, are many, and it can easily be adjusted to different political purposes. The study aims to analyse the ambiguities surrounding the concept of media pluralism in two ways: by deconstructing its normative roots from the perspective of democratic theory, and by examining its different uses, definitions and underlying rationalities in current European media policy debates. The first part of the study examines the values and assumptions behind the notion of media pluralism in the context of different theories of democracy and the public sphere. The second part then analyses and assesses the deployment of the concept in contemporary European policy debates on media ownership and public service media. Finally, the study critically evaluates various attempts to create empirical indicators for measuring media pluralism and discusses their normative implications and underlying rationalities. The analysis of contemporary policy debates indicates that the notion of media pluralism has been too readily reduced to an empty catchphrase or conflated with consumer choice and market competition. In this narrow technocratic logic, pluralism is often unreflectively associated with quantitative data in a way that leaves unexamined key questions about social and political values, democracy, and citizenship. The basic argument advanced in the study is that media pluralism needs to be rescued from its depoliticized uses and re-imagined more broadly as a normative value that refers to the distribution of communicative power in the public sphere. 
Instead of something that could simply be measured through the number of media outlets available, the study argues that media pluralism should be understood in terms of its ability to challenge inequalities in communicative power and create a more democratic public sphere.
Abstract:
A visual world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information persisted as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent; rather, it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.