17 results for collective memory work
in Aston University Research Archive
Abstract:
The focus of this article is the process of doing memory-work research. We tell the story of our experience of what it was like to use this approach. We were enthused to work collectively on a "discovery" project to explore a method with which we were unfamiliar. We hoped to build working relationships based on mutual respect and the desire to focus on methodology and its place in our psychological understanding. The empirical activities highlighted methodological and experiential challenges, which tested our adherence to the social constructionist premise of Haug's original description of memory work. Combined with practical difficulties of living across Europe, writing and analyzing the memories became contentious. We found ourselves having to address a number of tensions emanating from the work and our approach to it. We discuss some of these tensions alongside examples that illustrate the research process and the ways we negotiated the collective nature of the memory-work approach. © 2012 Copyright Taylor and Francis Group, LLC.
Abstract:
This article analyzes the role of expert witness testimony in the trials of social movement actors, discussing the trial of the "Kingsnorth Six" in Britain and the trials of activists currently mobilising against airport construction at Notre Dame des Landes in western France. Though the study of expert testimony has so far overwhelmingly concentrated on fact-finding and admissibility, the cases here reveal the importance of expert testimony not simply in terms of legal argument, but in "moral" or political terms, as it reflects and constitutes movement cognitive praxis. In the so-called climate change defence presented by the Kingsnorth Six, I argue that expert testimony attained a "negotiation of proximity," connecting different types of contributory expertise to link the scales and registers of climate science with those of everyday understanding and meaning. Expert testimony in the trials of activists in France, however, whilst ostensibly able to develop similar bridging narratives, has instead been used to construct resistance to the airport siting as already proximate, material, and embedded. To explain this, I argue that attention to the symbolic, as well as instrumental, functions of expert testimony reveals the crucial role that collective memory plays in the construction of both knowledge and grievance in these cases. Collective memory is both a constraint on and catalyst for mobilisation, defining the boundaries of the sayable. Testimony in trials both reflects and reproduces these elements and is a vital explanatory tool for understanding the narrativisation and communication of movement identities and objectives. © 2013 The Author. Law & Policy © 2013 The University of Denver/Colorado Seminary.
Abstract:
A review of the available literature suggests that social identification exists at the interface between individual and collective identity work. This poster proposes that it is the interaction between these two processes that leads a person to define themselves in terms of their membership of a particular social group. The poster suggests that identity work undertaken by the group (or ‘the creation of identities as widely understood signs with a set of rules and conventions for their use’, Schwalbe & Mason-Schrock, 1996, p.115) can be used by a person to inform their own individual identity work and, from this, to gauge the extent of alignment between their identity and the perceived identity of the group. In stable or internally structured groups, collective identity work may simply take the form of the communication and preservation of dominant collective identities. However, in unstable, new or transitional groups, the interaction between individual and collective identity work may be more dynamic, as both collective and individual identities are simultaneously codified, enacted and refined. To develop an understanding of social identification that is applicable in both stable and transitional social groups, it is useful to consider recent proposals that identification may occur cyclically as a series of discrete episodes (Ashforth, Harrison & Corley, 2008). This poster draws on the literature to present these suggestions in greater detail, outlining propositions for social identification that are relevant to transient as well as stable identity formation, supported by suggestions of how episodes of social identification may lead to a person identifying with a group.
Abstract:
This article considers young people’s socialization into mnemonic communities in 14 European countries. It argues that such socialization is an intersubjective and selective process that, to a great degree, depends on the particular social environment that conditions the discourses on pasts available to young people. Drawing on memory studies, it recognizes memory as a valid alternative to the institutionalized past (history) but envisages the two as inextricably connected. Given this, it identifies several strategies adopted by young people in order to socialize understandings of the past. While these strategies vary, some reveal receptivity to populist and far right ideologies. Our study demonstrates how internalization of political heritage via mnemonic socialization within families is conditioned by both the national political agenda and socio-economic situation experienced across Europe.
Abstract:
One of the key challenges that organizations face when trying to integrate knowledge across different functions is the need to overcome knowledge boundaries between team members. In cross-functional teams, these boundaries, associated with the different knowledge backgrounds of people from various disciplines, create communication problems, requiring team members to engage in complex cognitive processes when integrating knowledge toward a joint outcome. This research investigates the impact of syntactic, semantic, and pragmatic knowledge boundaries on a team’s ability to develop a transactive memory system (TMS)—a collective memory system for knowledge coordination in groups. Results from our survey show that syntactic and pragmatic knowledge boundaries negatively affect TMS development. These findings extend TMS theory beyond the information-processing view, which treats knowledge as an object that can be stored and retrieved, to the interpretive and practice-based views of knowledge, which recognize that knowledge (in particular specialized knowledge) is localized, situated, and embedded in practice.
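To make the kind of relationship hypothesized here concrete, the sketch below shows how team-level survey measures of this sort are commonly analysed: boundary scores entered as predictors of TMS development in an ordinary least-squares regression. This is an illustrative stand-in rather than the authors' analysis; all variable names and values are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical team-level survey composites (e.g. averaged Likert-scale items).
df = pd.DataFrame({
    "tms_development":    [4.1, 3.2, 3.8, 2.9, 4.5, 3.1],
    "syntactic_boundary": [2.0, 3.5, 2.8, 4.0, 1.5, 3.9],
    "semantic_boundary":  [2.5, 3.0, 2.2, 3.8, 1.9, 3.3],
    "pragmatic_boundary": [1.8, 3.9, 2.5, 4.2, 1.4, 3.6],
})

# OLS with TMS development as the outcome and the three boundary types as predictors;
# negative coefficients on the syntactic and pragmatic terms would mirror the reported findings.
model = smf.ols(
    "tms_development ~ syntactic_boundary + semantic_boundary + pragmatic_boundary",
    data=df,
).fit()
print(model.summary())
```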
Abstract:
Research on organizational spaces has not considered the importance of collective memory for the process of investing meaning in corporate architecture. Employing an archival ethnography approach, this article traces how practices of organizational remembering emerge as a way to shape the meanings associated with architectural designs. While the role of monuments and museums is well established in studies of collective memory, this research extends the concept of spatiality to the practices of organizational remembering that focus on a wider selection of corporate architecture. By analyzing the historical shift from colonial to modernist architecture for banks and retailers in Ghana and Nigeria in the 1950s and 1960s on the basis of documents and photographs from three different companies, this article shows how archival sources can be used to untangle the ways in which companies seek to ascribe meaning to their architectural output. Buildings allude to the past and the future in a range of complex ways that can be interpreted more fully by reference to the archival sources and the historical context of their creation. Social remembering has the potential to explain why and how buildings have meaning, while archival ethnography offers a new research approach to investigate changing organizational practices.
Abstract:
Recent functional magnetic resonance imaging (fMRI) investigations of the interaction between cognition and reward processing have found that the lateral prefrontal cortex (PFC) areas are preferentially activated to both increasing cognitive demand and reward level. Conversely, ventromedial PFC (VMPFC) areas show decreased activation to the same conditions, indicating a possible reciprocal relationship between cognitive and emotional processing regions. We report an fMRI study of a rewarded working memory task, in which we further explore how the relationship between reward and cognitive processing is mediated. We not only assess the integrity of reciprocal neural connections between the lateral PFC and VMPFC brain regions in different experimental contexts but also test whether additional cortical and subcortical regions influence this relationship. Psychophysiological interaction analyses were used as a measure of functional connectivity in order to characterize the influence of both cognitive and motivational variables on connectivity between the lateral PFC and the VMPFC. Psychophysiological interactions revealed negative functional connectivity between the lateral PFC and the VMPFC in the context of high memory load, and high memory load in tandem with a highly motivating context, but not in the context of reward alone. Physiophysiological interactions further indicated that the dorsal anterior cingulate and the caudate nucleus modulate this pathway. These findings provide evidence for a dynamic interplay between lateral PFC and VMPFC regions and are consistent with an emotional gating role for the VMPFC during cognitively demanding tasks. Our findings also support neuropsychological theories of mood disorders, which have long emphasized a dysfunctional relationship between emotion/motivational and cognitive processes in depression.
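For readers unfamiliar with psychophysiological interaction (PPI) analysis, the sketch below illustrates its general logic: a seed region's time series is multiplied by a task-condition regressor, and the interaction term is entered into a regression alongside the main effects, so that its coefficient indexes context-dependent connectivity. This is a generic, synthetic-data illustration of the technique, not the authors' pipeline; all arrays and labels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

# Synthetic inputs: seed-region BOLD time series (e.g. lateral PFC) and a
# psychological variable coding task context (high vs. low memory load).
seed_ts = rng.standard_normal(n_scans)            # physiological regressor
condition = np.repeat([1.0, -1.0], n_scans // 2)  # psychological regressor

# PPI term: element-wise product of the mean-centred seed signal and the condition.
ppi = (seed_ts - seed_ts.mean()) * condition

# Design matrix: intercept, both main effects, and the interaction term.
X = np.column_stack([np.ones(n_scans), seed_ts, condition, ppi])

# Target time series to be explained (synthetic stand-in for a VMPFC signal).
y = rng.standard_normal(n_scans)

# Ordinary least squares; the coefficient on `ppi` estimates the change in
# coupling between contexts (a negative value would correspond to the negative
# lateral PFC-VMPFC connectivity reported under high memory load).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "seed", "condition", "ppi"], beta)))
```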
Abstract:
Exporting is one of the main ways in which organizations internationalize. With the more turbulent, heterogeneous, sophisticated and less familiar export environment, the organizational learning ability of the exporting organization may become its only source of sustainable competitive advantage. However, achieving a competitive level of learning is not easy. Companies must be able to find ways to improve their learning capability by enhancing the different aspects of the learning process. One of these is export memory. Building from an export information processing framework, this research focuses in particular on the quality of export memory, its determinants, its subsequent use in decision-making, and its ultimate relationship with export performance. Within export memory use, four dimensions have been discovered: instrumental, conceptual, legitimizing and manipulating. Results of the study, based on data from a mail survey with 354 responses, reveal that the development of export memory quality is positively related to the quality of export information acquisition, the quality of export information interpretation, export coordination, and the integration of the information into the organizational system. Several company and environmental factors have also been examined in terms of their relationship with export memory use. The two factors found to be significantly related to the extent of export memory use are the quality of export information acquisition and export memory quality. The results reveal that export memory quality is positively related to the extent of export memory use, which in turn was found to be positively related to export performance. Furthermore, results of the study show that only one aspect of export memory use significantly affects export performance: the extent of export memory use. This finding could mean that no particular type of export memory use is favored, since the choice of the type of use is situation specific. Additional results reveal that environmental turbulence and export memory overload have moderating effects on the relationship between export memory use and export performance.
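The moderating effects mentioned at the end of the abstract correspond to interaction terms in a regression model. As a purely illustrative sketch, not the thesis's actual analysis, the moderated relationship could be specified as follows; all variable names and data are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level survey data (names and values are illustrative only).
df = pd.DataFrame({
    "export_performance": [3.9, 2.8, 4.2, 3.1, 4.6, 2.5, 3.7, 3.3],
    "memory_use":         [3.5, 2.2, 4.0, 2.9, 4.4, 2.0, 3.6, 3.0],
    "env_turbulence":     [2.1, 3.8, 1.9, 3.2, 1.5, 4.1, 2.6, 3.0],
    "memory_overload":    [1.8, 3.5, 2.0, 3.1, 1.4, 3.9, 2.3, 2.8],
})

# Moderated regression: the interaction terms test whether turbulence and overload
# change the strength of the memory use -> performance relationship.
model = smf.ols(
    "export_performance ~ memory_use * env_turbulence + memory_use * memory_overload",
    data=df,
).fit()
print(model.params)
```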
Abstract:
A critical review of the auditory selective attention literature is presented, with particular reference to methodological issues arising from the asymmetrical hemispheric representation of language in the context of the dominant research technique, dichotic shadowing. Subsequently, the concept of cerebral localization is introduced, and the experimental literature is discussed with reference to models of laterality effects in speech and audition. The review indicated the importance of hemispheric asymmetries insofar as they might influence the results of dichotic shadowing tasks. It is suggested that there is a potential overlap between models of selective attention and hemispheric differences. In Experiment I, a key experiment in auditory selective attention is replicated, and by exercising control over possible laterality effects some of the conflicting results of earlier studies were reconciled. The three subsequent experiments, II, III and IV, are concerned with the recall of verbally shadowed inputs. A highly significant and consistent effect of ear of arrival upon the serial position of items recalled is reported. Experiment V is directed towards an analysis of the effect that the processing of unattended inputs has upon the serial position of attended items that are recalled. A significant effect of the type of unattended material upon the recall of attended items was found to be influenced by the ear of arrival of inputs. In Experiment VI, differences between the two ears as attended and unattended input channels were clarified. Two main conclusions were drawn from this work. First, that the dichotic shadowing technique cannot control attention. Instead, the task of processing both channels of dichotic inputs is unevenly shared between the hemispheres as a function of the ear shadowed. Consequently, evidence for the processing of unattended information is considered in terms of constraints imposed by asymmetries in the functional organization of language, not in terms of a limited processing capacity model. The second conclusion is that laterality differences can be effectively examined using the dichotic shadowing technique; a new model of laterality differences is proposed and discussed.
Abstract:
This study is concerned with several proposals concerning multiprocessor systems and with the various possible methods of evaluating such proposals. After a discussion of the advantages and disadvantages of several performance evaluation tools, the author decides that simulation is the only tool powerful enough to develop a model which would be of practical use in the design, comparison and extension of systems. The main aims of the simulation package developed as part of this study are cost effectiveness, ease of use and generality. The methodology on which the simulation package is based is described in detail. The fundamental principles are that model design should reflect actual systems design, that measuring procedures should be carried out alongside design, that models should be well documented and easily adaptable, and that models should be dynamic. The simulation package itself is modular, and in this way reflects current design trends. This approach also aids documentation and ensures that the model is easily adaptable. It contains a skeleton structure and a library of segments which can be added to, or directly swapped with, segments of the skeleton structure to form a model which fits a user's requirements. The study also contains the results of some experimental work carried out using the model, the first part of which tests the model's capabilities by simulating a large operating system, the ICL George 3 system; the second part deals with general questions and some of the many proposals concerning multiprocessor systems.
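The "skeleton structure plus library of swappable segments" design can be pictured with a small event-driven sketch: a core simulator provides the clock and event queue, while interchangeable segment classes plug into it. This is a loose modern analogue for illustration only; the names, structure and behaviour are assumptions, not the package described in the thesis.

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable

# Skeleton: a minimal discrete-event simulator (clock + event queue).
@dataclass(order=True)
class Event:
    time: float
    action: Callable[["Simulator"], None] = field(compare=False)

class Simulator:
    def __init__(self):
        self.clock = 0.0
        self.queue: list[Event] = []
        self.log: list[str] = []

    def schedule(self, delay: float, action: Callable[["Simulator"], None]):
        heapq.heappush(self.queue, Event(self.clock + delay, action))

    def run(self, until: float):
        while self.queue and self.queue[0].time <= until:
            event = heapq.heappop(self.queue)
            self.clock = event.time
            event.action(self)

# Library segment: a processor model that plugs into the skeleton via `schedule`;
# other segments (memory, I/O) could be swapped in the same way.
class Processor:
    def __init__(self, name: str, service_time: float):
        self.name, self.service_time = name, service_time

    def submit_job(self, sim: Simulator, job_id: int):
        sim.log.append(f"t={sim.clock:.1f}: {self.name} starts job {job_id}")
        sim.schedule(
            self.service_time,
            lambda s: s.log.append(f"t={s.clock:.1f}: {self.name} ends job {job_id}"),
        )

# Usage: two processor segments sharing one skeleton.
sim = Simulator()
cpu_a, cpu_b = Processor("CPU-A", 2.0), Processor("CPU-B", 3.0)
sim.schedule(0.0, lambda s: cpu_a.submit_job(s, 1))
sim.schedule(1.0, lambda s: cpu_b.submit_job(s, 2))
sim.run(until=10.0)
print("\n".join(sim.log))
```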
Abstract:
The aim of this project was to develop the education work of an environmental pressure group. The research devised and implemented a project to produce multi-media teaching packs on the urban environment. Whilst this involved understanding environmental education, it was necessary to research beyond this to include the various structural and dynamic constraints on change in the field. This presented a number of methodological difficulties, from the resolution of which a model of the research process involved in this project has been developed. It is argued that research oriented towards practical change requires the insights of an experienced practitioner to be combined with the rigours of controlled systematic enquiry. Together these function as a model-building process encompassing intuition, induction and deduction. Model testing is carried out through repeated intervention in the field; thus an interplay between researcher and client ensues such that the project develops in a mutually acceptable direction. In practice, this development will be both unpredictable and erratic. Although the conclusions reached here are based on a single case study, they address general methodological issues likely to be encountered in different field settings concerned with different practical problems.
Une étude générique du metteur en scène au théâtre : son émergence et son rôle moderne et contemporain (A generic study of the theatre director: its emergence and its modern and contemporary role)
Abstract:
The theatre director (metteur en scène in French) is a relatively new figure in theatre practice. It was not until the 1820s that the term 'mise en scène' gained currency. The term 'director' was not in general use until the 1880s. The emergence and the role of the director have been considered from a variety of perspectives, whether through the history of theatre (Allevy, Jomaron, Sarrazac, Viala, Biet and Triau); the history of directing (Chinoy and Cole, Boll, Veinstein, Roubine); semiotic approaches to directing (Whitmore, Miller, Pavis); the semiotics of performance (De Marinis); generic approaches to the mise en scène (Thomasseau, Banu); post-dramatic approaches to theatre (Lehmann); or approaches to performance process and the specifics of rehearsal methodology (Bradby and Williams, Giannachi and Luckhurst, Picon-Vallin, Styan). What the scholarly literature has not done so far is to map the parameters necessarily involved in the directing process, to incorporate an analysis of the emergence of the theatre director during the modern period, and to consider its impact on contemporary performance practice. Directing relates primarily to the making of the performance guided by a director, a single figure charged with the authority to make binding artistic decisions. Each director may have her/his own personal approach to the process of preparation prior to a show. This is exemplified by the variety of terms now used to describe the role and function of directing, from producer to facilitator or outside eye. However, it is essential at the outset to make two observations, each of which contributes to a justification for a generic analysis (as opposed to a genetic approach). Firstly, a director does not work alone, and cooperation with others is involved at all stages of the process. Secondly, beyond individual variation, the role of the director remains twofold. The first is to guide the actors (meneur de jeu, directeur d'acteurs, coach); the second is to make a visual representation in the performance space (set designer, stage designer, costume designer, lighting designer, scénographe). The increasing place of scenography has led contemporary theatre directors such as Wilson, Castellucci and Fabre to produce performances where the performance space becomes a semiotic dimension that displaces the primacy of the text. The play is not, therefore, the sole artistic vehicle for directing. This definition of directing obviously calls for a definition of what the making of the performance might be. The thesis defines the making of the performance as the activity of bringing about a social event, involving at least one performer, that provides visual and/or textual meaning in a performance space. This definition enables us to evaluate four consistent parameters throughout theatre history: first, the social aspect associated with the performance event; second, the devising process, which may be based on visual and/or textual elements; third, the presence of at least one performer in the show; fourth, the performance space (which is not simply related to the theatre stage). Although the thesis focuses primarily on theatre practice, such a definition blurs the boundaries between theatre and other collaborative artistic disciplines (cinema, opera, music and dance). These parameters illustrate the possibility of undertaking a generic analysis of directing, and resonate with the historical, political and artistic dimensions considered.
Such a generic perspective on the role of the director addresses three significant questions: an historical question: how/why has the director emerged?; a sociopolitical question: how/why was the director a catalyst for the politicisation of theatre, subsequently contributing to the rise of State-funded theatre policy?; and an artistic one: how/why has the director changed theatre practice and theory in the twentieth century? Directing for the theatre as an artistic activity is a historically situated phenomenon. It would seem only natural from a contemporary perspective to associate the activity of directing with the function of the director. This is relativised, however, by the question of how the performance was produced before the modern period. The thesis demonstrates that the rise of the director is a progressive and historical phenomenon (Dort) rather than a mere invention (Viala, Sarrazac). A chronological analysis of the making of the performance throughout theatre history is the most useful way to open the study. In order to understand the emergence of the director, the research methodology assesses the interconnection of the four parameters above throughout four main periods of theatre history: the beginning of the Renaissance (meneur de jeu), the classical age (actor-manager and stage designer-manager), the modern period (director) and the contemporary period (director-facilitator, performer). This allows us properly to appraise the progressive emergence of the director, as well as to make an analysis of her/his modern and contemporary role. The first chapter argues that the physical separation between the performance space and its audience, which appeared in the early fifteenth century, has been a crucial feature in the scenographic, aesthetic, political and social organisation of the performance. At the end of the Middle Ages, French farces which raised socio-political issues (see Bakhtin) made a clear division on a single outdoor stage (tréteau) between the actors and the spectators, while religious plays (drame liturgique, mystère) were mostly performed in various outdoor, open multi-spaces. As long as the performance was liturgical or religious, and therefore confined within an acceptable framework, it was allowed. At the time, the French ecclesiastical and civil authorities tried, on several occasions, to prohibit staged performances. As a result, practitioners developed non-official indoor spaces, the Théâtre de la Trinité (1398) being the first French indoor theatre recognized by scholars. This self-exclusion from the open public space involved practitioners (e.g. Les Confrères de la Passion) breaking the accepted rules, in terms of themes but also through individual input into a secular performance rather than the repetition of commonly known religious canvases. These developments heralded the authorised theatres that began to emerge from the mid-sixteenth century, which in some cases were subsidised in their construction. The construction of authorised indoor theatres, associated with the development of printing, led to a considerable increase in the production of dramatic texts for the stage. Profoundly affecting the reception of the dramatic text by the audience, the distance between the stage and the auditorium accompanied the changing relationship between practitioners and spectators. This distance gave rise to a major development of the role of the actor and of the stage designer.
The second chapter looks at the significance of both the actor and the set designer in the devising process of the performance from the sixteenth century to the end of the nineteenth century. The actor underwent an important shift in function in this period, from the delivery of an unwritten text learned in the medieval oral tradition to the structured improvisation produced by the commedia dell'arte. In this new form of theatre, a chef de troupe or an experienced actor shaped the story, but the text existed only through the improvisation of the actors. The preparation of those performances was, moreover, centred on acting technique and the individual skills of the actor. From this point, there is clear evidence that acting began to be the subject of a number of studies in the mid-sixteenth century, and more significantly in the seventeenth century, in Italy and France. This is revealed through the implementation of a system of notes written by the playwright to the actors (stage directions) in a range of plays (Gérard de Vivier, Comédie de la Fidélité Nuptiale, 1577). The thesis also focuses on Leone de' Sommi (Quattro dialoghi, 1556 or 1565), who wrote about actors' techniques and introduced the meneur de jeu in Italy. The actor-manager (meneur de jeu), a professional actor whom scholars have compared to the director (see Strihan), trained the actors. Nothing, however, indicates that the actor-manager was directing the visual representation of the text in the performance space. From the end of the sixteenth century, the dramatic text began to dominate the process of the performance and led to an expansion of acting techniques, such as declamation. Stage designers came from outside the theatre tradition and played a decisive role in the staging of religious celebrations (e.g. Actes des Apôtres, 1536). In the sixteenth century, both the proscenium arch and the borders, incorporated in the architecture of the new indoor theatres (théâtre à l'italienne), contributed to creating all kinds of illusions on the stage, principally the revival of perspective. This chapter shows ongoing audience demands for more elaborate visual effects on the stage. This led, throughout the classical age, and even more so during the eighteenth century, to the stage design practitioner being granted a major role in the making of the performance (see Ciceri). The second chapter demonstrates that the guidance of the actors and the scenographic conception, which are the artistic components of the role of the director, appear to have developed independently from one another until the nineteenth century. The third chapter investigates the emergence of the director per se. The causes for this have been considered by a number of scholars, who have mainly identified two: the influence of Naturalism (illustrated by the Meiningen Company, Antoine, and Stanislavski) and the invention of electric lighting. The influence of the Naturalist movement on the emergence of the modern director in the late nineteenth century is often considered as a radical factor in the history of theatre practice. Naturalism undoubtedly contributed to changes in staging, costume and lighting design, and to a more rigorous commitment to the harmonisation and visualisation of the overall production of the play. Although the art of theatre was dependent on the dramatic text, scholars (Osborne) demonstrate that the Naturalist directors of the late nineteenth century did not strictly follow the playwright's indications written in the play.
On the other hand, the main characteristic of directing in Naturalism at that time depended on a comprehensive understanding of the scenography, which had to respond to the requirements of verisimilitude. Electric lighting contributed to this by allowing for the construction of a visual narrative on stage. However, it was a master technician, rather than an emergent director, who was responsible for key operational decisions over how to use this emerging technology in venues such as the new Bayreuth theatre in 1876. Electric lighting reflects a normal technological evolution and cannot be considered as one of the main causes of the emergence of the director. Two further causes of the emergence of the director, not considered in previous studies, are the invention of cinema and the Symbolist movement (Lugné-Poe, Meyerhold). Cinema had an important technological influence on the practitioners of the Naturalist movement. In order to achieve a photographic truth on the stage (tableau, image), Naturalist directors strove to decorate the stage with the detailed elements that would be expected to be found if the situation were happening in reality. Film production had an influence on the work of actors (Walter). The filmmaker took over a primary role in the making of the film, as the source of the script, the filming process and the editing of the film. This role influenced the conception that theatre directors had of their own work, and it is this concept that shaped the development of the theatre director. As for the Symbolist movement, the director's approach was to dematerialise the text of the playwright, trying to expose the spirit, movement, colour and rhythm of the text. Therefore, the Symbolists disengaged themselves from the material aspect of the production, and contributed to giving greater artistic autonomy to the role of the director. Although the emergence of the director finds its roots amongst the Naturalist practitioners (through a rigorous attempt to provide a strict visual interpretation of the text on stage), the Symbolist director heralded the modern perspective on the making of performance. The emergence of the director significantly changed theatre practice and theory. For instance, the rehearsal period became a clear work in progress, a platform for both developing practitioners' techniques and staging the show. This chapter explores and contrasts several practitioners' methods based on the two aspects proposed for the definition of the director (guidance of the actors and materialisation of a visual space). The fourth chapter argues that the role of the director became stronger, more prominent, and more hierarchical, through a more political and didactic approach to theatre, as exemplified by the cases of France and Germany at the end of the nineteenth century and through the First World War. This didactic perspective on theatre defines the notion of political theatre. Political theatre is often approached by the literature (Esslin, Willett) through a Marxist interpretation of the great German directors' productions (Reinhardt, Piscator, Brecht). These directors certainly had a great influence on many directors after the Second World War, such as Jean Vilar, Judith Malina, Jean-Louis Barrault, Roger Planchon, Augusto Boal, and others.
This chapter demonstrates, moreover, that the director was confirmed through both ontological and educational approaches to the process of making the performance, and consequently became a central and paternal figure in the organisational and structural processes practiced within her/his theatre company. In this way, the stance taken by the director influenced the State authorities in establishing theatrical policy. This is an entirely novel scholarly contribution to the study of the director. The German and French States were not indifferent to the development of political theatre. A network of public theatres was thus developed in the inter-war period, and more significantly after the Second World War. The fifth chapter shows how State theatre policy established its sources in the development of political theatre, and more specifically in the German theatre trade union movement (Volksbühne) and the great directors at the end of the nineteenth century. French political theatre was more influenced by playwrights and actors (Romain Rolland, Louise Michel, Louis Lumet, Emile Berny). French theatre policy was based primarily on theatre directors who decentralised their activities in France during both the inter-war period and the German occupation. After the Second World War, the government established, through directors, a strong network of public theatres. Directors became both the artistic director and the executive director of those institutionalised theatres. The institution was, however, seriously shaken by the social and political upheaval of 1968. It is the link between the State and the institution in which established directors were entangled that was challenged by the young emerging directors who rejected institutionalised responsibility in favour of the autonomy of the artist in the 1960s. This process is elucidated in chapter five. The final chapter defines the contemporary role of the director by contrasting the work of a number of significant young theatre practitioners in the 1960s, such as Peter Brook, Ariane Mnouchkine, The Living Theatre, Jerzy Grotowski, Augusto Boal and Eugenio Barba, all of whom decided early on to detach their companies from any form of public funding. This chapter also demonstrates how they promoted new forms of performance such as the performance of the self. First, these practitioners explored new performance spaces outside the traditional theatre building. Producing performances in a non-dedicated theatre place (warehouse, street, etc.) was a more frequent practice in the 1960s than before. However, the recent development of cybertheatre has, since the 1990s, questioned both the separation of the audience from the practitioners and the place of the director's role. Secondly, the role of the director has been multifaceted since the 1960s. On the one hand, those directors, despite all their different working methods, explored western and non-western acting techniques based on both personal input and collective creation. They challenged theatrical conventions of both the character and the process of making the performance. On the other hand, recent observations and studies distinguish the two main functions of the director, the acting coach and the scénographe, both having found new developments in cinema, television, and various other events. Thirdly, the contemporary director challenges the performance of the text. In this sense, Antonin Artaud was a visionary.
His theatre illustrates the need for the consideration of the totality of the text, as well as that of theatrical production. By contrasting the theories of Artaud, based on a non-dramatic form of theatre, with one of his plays (Le Jet de Sang), this chapter demonstrates how Artaud examined the process of making the performance as a performance. Live art and autobiographical performance, both taken as directing the self, reinforce this suggestion. Finally, since the 1990s, autobiographical performance, or the performance of the self, has been a growing practical and theoretical perspective in both performance studies and psychology-related studies. This relates to the premise that each individual is making a representation (through memory, interpretation, etc.) of her/his own life (performativity). This last section explores the links between the place of the director in contemporary theatre and performers in autobiographical practices. The role of the traditional actor is challenged through non-identification with the character in the play, while performers (such as Chris Burden, Ron Athey, Orlan, Franko B, Stelarc) have, likewise, explored their own story/life as a performance. The thesis demonstrates the validity of the four parameters (performer, performance space, devising process, social event) defining a generic approach to the director. A generic perspective on the role of the director would encompass: a historical dimension relative to the reasons for and stages of the 'emergence' of the director; a socio-political analysis concerning the relationship between the director, her/his institutionalisation, and the political realm; and the relationship between performance theory, practice and the contemporary role of the director. Such a generic approach is a new departure in theatre research and might resonate in the study of other collaborative artistic practices.
Abstract:
Adopting an institutional approach from organization studies, this paper explores the role of key actors in “purposeful governance for sustainability” (Smith, Voss et al. 2010: 444) through the case of smart metering in the UK. Institutions are enduring patterns in social life, reflected in identities, routines, rules, shared meanings and social relations, which enable, and constrain, the beliefs and behaviours of individual and collective actors within a field (Thornton and Ocasio 2008). Large-scale external initiatives designed to drive regime-level change prompt ‘institutional entrepreneurs’ to perform ‘institutional work’ – “purposive action aimed at creating, maintaining and disrupting institutions” (Lawrence and Suddaby, 2006). Organization scholars are giving increasing attention to ‘field-configuring events’ (FCEs), which provide social spaces for diverse organizational actors to come together to collectively shape socio-technical pathways (Lampel and Meyer 2008). Our starting point for this exploratory study is that FCEs can offer important insights into the dynamics, politics and governance of sustainability transitions. Methodologically, FCEs allow us to observe and “link field evolution at the macro-level with individual action at the micro-level” (Lampel and Meyer, 2008: 1025). We examine the work of actors during a series of smart metering industry forums over a three-year period (industry presentations [n= 77] and panel discussions [n= 16]). The findings reveal new insights into how institutional change unfolds, alongside technological transitions, in ways that are partial and aligned with the interests of powerful incumbents whose voices are frequently heard at FCEs. The paper offers three contributions. First, the study responds to calls for more research examining FCEs and the role they play in transforming institutional fields. Second, the emergent findings extend research on institutional work by advancing our understanding of a specific site of institutional work, namely a face-to-face inter-organizational arena. Finally, in line with the research agenda for innovation studies and sustainability transitions elaborated by Smith et al (2010), the paper illustrates how actors in a social system respond to, translate, and enact interventions designed to promote industrial transformation, ultimately shaping the sustainability transition pathway.
Abstract:
Missing in the organizational learning literature is an integrative framework that reflects the emotional as well as the cognitive dynamics involved. Here, we take a step in this direction by focusing in depth over time (five years) on a selected organization which manufactures electronic equipment for the office industry. Drawing on personal construct theory, we define organizational learning as the collective re-construal of meaning in the direction of strategically significant themes. We suggest that emotions arise as members reflect on progress or lack of progress in achieving organizational learning. Our evidence suggests that invalidation - where organizational learning fails to correspond with expectations - gives rise to anxiety and frustration, while validation - where organizational learning is aligned with or exceeds expectations - evokes comfort or excitement. Our work aims to capture the key emotions involved as organizational learning proceeds. © The Author(s) 2012.
Abstract:
This paper explores the role of transactive memory in enabling knowledge transfer between globally distributed teams. While the information systems literature has recently acknowledged the role transactive memory plays in improving knowledge processes and performance in colocated teams, little is known about its contribution to distributed teams. To contribute to filling this gap, knowledge-transfer challenges and processes between onsite and offshore teams were studied at TATA Consultancy Services. In particular, the paper describes the transfer of knowledge between onsite and offshore teams through encoding, storing and retrieving processes. An in-depth case study of globally distributed software development projects was carried out, and a qualitative, interpretive approach was adopted. The analysis of the case suggests that in order to overcome differences derived from the local contexts of the onsite and offshore teams (e.g. different work routines, methodologies and skills), some specific mechanisms supporting the development of codified and personalized ‘directories’ were introduced. These include the standardization of templates and methodologies across the remote sites as well as frequent teleconferencing sessions and occasional short visits. These mechanisms contributed to the development of the notion of ‘who knows what’ across onsite and offshore teams despite the challenges associated with globally distributed teams, and supported the transfer of knowledge between onsite and offshore teams. The paper concludes by offering theoretical and practical implications.
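The codified "who knows what" directories discussed above can be thought of as a simple mapping from topics to people, maintained through encoding, storing and retrieving operations. The sketch below illustrates that idea in miniature; it is an illustrative analogy only, and the names and operations are hypothetical rather than a description of the mechanisms used at TATA Consultancy Services.

```python
from collections import defaultdict

class TransactiveDirectory:
    """Minimal 'who knows what' directory: topic -> people believed to hold that expertise."""

    def __init__(self):
        self._directory: dict[str, set[str]] = defaultdict(set)

    def encode(self, person: str, topic: str) -> None:
        # Record that `person` is a source of knowledge on `topic`.
        self._directory[topic].add(person)

    def retrieve(self, topic: str) -> set[str]:
        # Return the people a team member would route a `topic` question to.
        return self._directory.get(topic, set())

# Usage: onsite and offshore members registering and looking up expertise.
directory = TransactiveDirectory()
directory.encode("Asha (offshore)", "billing module")
directory.encode("Tom (onsite)", "client requirements")
directory.encode("Priya (offshore)", "billing module")
print(directory.retrieve("billing module"))
```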