877 results for process and actions
Abstract:
In the early part of 2008, a major political upset was pulled off in the Southeast Asian nation of Malaysia when the ruling coalition, Barisan Nasional (National Front), lost its long-held parliamentary majority after the general elections. Given the astonishingly high profile of political bloggers and the relatively well established alternative online news sites within the nation, it was not surprising that many new media proponents saw the result as a major triumph of the medium. Through a brief account of the Hindraf (Hindu Rights Action Force) saga and the socio-political dissent nursed, in part, through new media in contemporary Malaysia, this paper seeks to lend context to the events that preceded and surrounded the election as an example of the relationship between media and citizenship in praxis. In so doing it argues that the political turnaround, if indeed it proves to be one, cannot be considered the consequence of new media alone. Rather, it contends that comprehensively assessing the implications of new media for citizenship requires taking into account the specific histories, conditions and actions (or lack thereof) of the various social actors involved.
Abstract:
This paper argues that management education needs to consider a trend in learning design which advances more creative learning through an alliance with art-based pedagogical processes. A shift is required from skills training to facilitating transformational learning through experiences that expand human potential, facilitated by artistic processes. In this paper the authors discuss the necessity for creativity and innovation in the workplace and the need to develop better leaders and managers. The inclusion of arts-based processes enhances artful behaviour, aesthetics and creativity within management and organisational behaviour, generating important implications for business innovation. This creative learning focus stems from an analysis of an arts-based intervention for management development. Entitled Management Jazz, the program was conducted over three years at a large Australian university. The paper reviews some of the salient literature in the field. It considers four stages of the learning process: capacity, artful event, increased capability, and application/action to produce product. One illustrative example of an arts-based learning process is provided from the Management Jazz program. Research findings indicate that artful learning opportunities enhance capacity for awareness of creativity in one’s self and in others. This capacity correlates positively with a perception that engaging in artful learning enhances the capability of managers in changing collaborative relationships and habitat constraint. The authors conclude that it is through engagement and creative alliance with the arts that management education can explore and discover artful approaches to building creativity and innovation. The illustration presented in this paper will be delivered as a brief workshop at the Fourth Art of Management Conference. The process of bricolage and articles at hand will be used to explore creative constraints and prototypes while generating group collaboration. The mini-workshop will conclude with discussion of the arts-based process and capability enhancement outcomes.
Abstract:
This paper, underpinned by a framework of autopoietic principles of creativity/innovation and leadership/governance, argues that open forms of creativity in the ‘arts’ provide opportunity for impact upon concepts of development, leadership and governance. The alliance of creativity and governance suggests that by examining various understandings of artistic experiences, readers may gain new understandings of the alliance, application and assessment of such experiences. This critical understanding would include assessing whether such experience supports people changing their aspirations as they become what they want to be. Such understanding may also suggest that different applications of the creative capacity of the ‘arts’ offer relevance in allegedly ‘non-creative’ areas of academe, particularly in areas of management, leadership and governance. This alliance also offers the possibility of new staff development programs that facilitate learning and the building of individual capacity, as well as facilitating congruent development processes and policy, particularly within academic organisational structures.
Abstract:
Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore present a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. This process divides the infrastructure management process over time into self-contained modules that are based on a particular set of activities, the information flows between which are defined by the interfaces and relationships between them. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, through using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and are fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, utility functions being proposed where there is risk, or a trade-off situation applies. Variability is considered important in the infrastructure life cycle, the approach used being based on analytical principles but incorporating randomness in variables where required.
The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met. Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
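As a rough illustration of the paired-comparison weighting step described above, the sketch below scores a set of hypothetical objectives by the number of pairwise comparisons each wins and normalises the win counts into weights. The objective names and preference votes are invented for illustration; the thesis's actual rationalisation and weighting procedure may differ.

```python
# Minimal sketch of paired-comparison weighting (hypothetical data).
objectives = ["safety", "cost", "service_life", "environment"]

# prefs[(a, b)] = 1 if objective a was preferred over b, else 0
prefs = {
    ("safety", "cost"): 1, ("safety", "service_life"): 1,
    ("safety", "environment"): 1, ("cost", "service_life"): 0,
    ("cost", "environment"): 1, ("service_life", "environment"): 1,
}

# Score each objective by the number of pairwise comparisons it wins.
wins = {o: 0 for o in objectives}
for (a, b), a_wins in prefs.items():
    wins[a if a_wins else b] += 1

total = sum(wins.values())
weights = {o: w / total for o, w in wins.items()}
print(weights)  # safety dominates; 'environment' wins no comparison here
```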
Abstract:
In the field of tissue engineering new polymers are needed to fabricate scaffolds with specific properties depending on the targeted tissue. This work aimed at designing and developing a 3D scaffold with variable mechanical strength, fully interconnected porous network, controllable hydrophilicity and degradability. For this, a desktop-robot-based melt-extrusion rapid prototyping technique was applied to a novel tri-block co-polymer, namely poly(ethylene glycol)-block-poly(ε-caprolactone)-block-poly(DL-lactide), PEG-PCL-P(DL)LA. This co-polymer was melted by electrical heating and directly extruded out using computer-controlled rapid prototyping by means of compressed purified air to build porous scaffolds. Various lay-down patterns (0/30/60/90/120/150°, 0/45/90/135°, 0/60/120° and 0/90°) were produced by using appropriate positioning of the robotic control system. Scanning electron microscopy and micro-computed tomography were used to show that the 3D scaffold architectures were honeycomb-like with completely interconnected and controlled channel characteristics. Compression tests were performed and the data obtained agreed well with the typical behavior of a porous material undergoing deformation. Preliminary cell response to the as-fabricated scaffolds has been studied with primary human fibroblasts. The results demonstrated the suitability of the process and the cell biocompatibility of the polymer, two important properties among the many required for effective clinical use and efficient tissue-engineering scaffolding.
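To make the lay-down patterns concrete, a sketch of how such a pattern translates into a per-layer raster angle for the dispensing path is shown below; the function and values are illustrative only, not the control code used in the work.

```python
# Illustrative only: map a lay-down pattern to per-layer raster angles.
def layer_angles(pattern, n_layers):
    """Angle (degrees) of the deposited strands in each successive layer."""
    return [pattern[i % len(pattern)] for i in range(n_layers)]

print(layer_angles([0, 60, 120], 7))      # [0, 60, 120, 0, 60, 120, 0]
print(layer_angles([0, 45, 90, 135], 5))  # [0, 45, 90, 135, 0]
```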
Abstract:
Objectives: This methodological paper reports on the development and validation of a work sampling instrument and data collection processes to conduct a national study of nurse practitioners’ work patterns. ---------- Design: Published work sampling instruments provided the basis for development and validation of a tool for use in a national study of nurse practitioner work activities across diverse contextual and clinical service models. Steps taken in the approach included design of a nurse practitioner-specific data collection tool and development of an innovative web-based program to train and establish inter-rater reliability of a team of data collectors who were geographically dispersed across metropolitan, rural and remote health care settings. ---------- Setting: The study is part of a large funded study into nurse practitioner service. The Australian Nurse Practitioner Study is a national study phased over three years and was designed to provide essential information for Australian health service planners, regulators and consumer groups on the profile, process and outcome of nurse practitioner service. ---------- Results: The outcome of this phase of the study is empirically tested instruments, processes and training materials for use in an international context by investigators interested in conducting a national study of nurse practitioner work practices. ---------- Conclusion: Development and preparation of a new approach to describing nurse practitioner practices using work sampling methods provides the groundwork for international collaboration in the evaluation of nurse practitioner service.
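At its core, work sampling estimates the proportion of randomly timed observations in which a given activity is recorded. A minimal sketch of that computation, with a normal-approximation confidence interval and invented numbers (the activity label and counts are not from the study):

```python
# Core work-sampling computation: proportion of observations recording an
# activity, with a 95% normal-approximation confidence interval.
import math

def proportion_ci(k, n, z=1.96):
    """CI for the proportion of n random observations that recorded the activity."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# e.g. 'direct care' recorded in 132 of 400 random observation moments
p, lo, hi = proportion_ci(132, 400)
print(f"direct care: {p:.2%} (95% CI {lo:.2%} to {hi:.2%})")
```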
Abstract:
Introduction: The purpose of this study was to assess the capacity of a written intervention, in this case a patient information brochure, to improve patient satisfaction during an Emergency Department (ED) visit. For the purpose of measuring the effect of the intervention, the ED journey was conceptualised as a series of distinct areas of service comprising waiting time, service by the triage nurse, care from doctors and nurses, and information giving. Background of study: Research into patient satisfaction has become a widespread activity endorsed by both governments and hospital administrations. The literature on ED patient satisfaction has consistently indicated three primary areas of patient dissatisfaction: waiting time, nursing care and communication. Recent developments in the literature on patient satisfaction studies, however, have highlighted the relationship between patients' expectations of a service encounter and their consequent assessment of the experience as dissatisfying or satisfying. Disconfirmation theory posits that the degree to which expectations are confirmed will affect subsequent levels of satisfaction. The conceptual framework utilised in this study is Coye's (2004) model of disconfirmation. Coye, while reiterating that satisfaction is a consequence of the degree to which expectations are either confirmed or disconfirmed, also posits that expectations can be modified by interventions. Coye's work conceptualises these interventions as intra-encounter experiences (cues) which function to adjust expectations. Coye suggests some cues are unintended and may have a negative impact, which also reinforces the value of planned cues intended to meet or exceed consumer expectations. Consequently the brochure can be characterised as a potentially positive cue, encouraging the patient to understand processes and orienting them in what can be a confronting environment. Only a limited number of studies have examined the effect of written interventions within an ED. No studies could be located which have tested the effect of ED interventions using a conceptual framework that relates satisfaction with services to the degree to which expectations are confirmed or disconfirmed. Method: Two studies were conducted. Study One used qualitative methods to explore patients' expectations of the ED from the perspective of both patients and health care professionals. Study One was used in part to direct the development of the intervention (brochure) in Study Two. The brochure was an intervention designed to modify patients' expectations, thus increasing their satisfaction with the provision of ED service. As there were no existing tools to measure ED patients' expectations and satisfaction, a new tool was also developed based on the findings of Study One and the literature. Study Two used a non-randomised, quasi-experimental approach with a non-equivalent, post-test-only comparison group design to investigate the effect of the patient education brochure (Stommel and Wills, 2004). The brochure was disseminated to one of two study groups (the intervention group). The effect of the brochure was assessed by comparing the data obtained from the intervention and control groups, which consisted of 150 participants each. It was expected that any differences in the relevant domains selected for examination would indicate the effect of the brochure both on expectation and potentially satisfaction.
Results: Study One revealed several areas of common ground between patients and nurses in terms of relevant content for the written intervention, including the need for information on the triage system and waiting times. Areas of difference were also found, with patients emphasising communication issues, whereas focus group members expressed concern that patients were often unable to assimilate verbal information. The findings suggested the potential utility of written material to reinforce verbal communication, particularly in terms of the triage process and other ED protocols. This material was synthesised within the final version of the written intervention. Overall the results of Study Two indicated no significant differences between the two groups. A significant number of participants in the intervention group did, however, report that the brochure had changed their expectations. The effect of the brochure may have been obscured by a lack of parity between the two groups, as the control group presented with statistically significantly higher levels of acuity and experienced significantly shorter waiting times. In terms of disconfirmation theory this would suggest expectations that had been met or exceeded. The results confirmed the correlation of expectations with satisfaction. Several domains also indicated age as a significant predictor, with older patients tending to score higher satisfaction results. Other significant predictors of satisfaction established were waiting time and care from nurses, reinforcing the combination of efficient service and positive interpersonal experiences as being valued by patients. Conclusions: Information presented in written form appears to benefit a significant number of ED users in terms of orientation and explaining systems and procedures. The degree to which these effects may interact with other dimensions of satisfaction, however, is likely to be limited. Waiting time and interpersonal behaviours from staff also provide influential cues in determining satisfaction. Written material is likely to be one element in a series of coordinated strategies to improve patient satisfaction during periods of peak demand.
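The between-group comparison described above could, in outline, be run as a nonparametric two-sample test. The sketch below applies a Mann-Whitney U test to invented Likert-style satisfaction scores; it illustrates the shape of the analysis rather than the study's actual procedure.

```python
# Hypothetical two-group comparison of ordinal satisfaction scores.
from scipy.stats import mannwhitneyu

intervention = [4, 5, 3, 4, 5, 4, 2, 5, 4, 3]  # invented Likert scores
control      = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]

stat, p_value = mannwhitneyu(intervention, control, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
```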
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
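The simulation argument can be conveyed in a few lines: independent Bernoulli trials with small, unequal probabilities (Poisson trials) observed over limited exposure yield a large share of zero counts without any dual-state mechanism. A minimal sketch with invented parameters:

```python
# Poisson trials with low exposure produce many zeros, with no 'safe' state.
import numpy as np

rng = np.random.default_rng(42)
n_sites, n_trials = 1000, 500          # hypothetical entities and exposure
p = rng.uniform(1e-5, 5e-4, n_sites)   # unequal, low event probabilities

counts = rng.binomial(n_trials, p)     # crash count per site over the period
print("share of zero counts:", np.mean(counts == 0))
# Low exposure alone yields a preponderance of zeros, as the paper argues.
```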
Abstract:
This paper presents the results of a structural equation model (SEM) that describes and quantifies the relationships between corporate culture and safety performance. The SEM is estimated using 196 individual questionnaire responses from three companies with better than average safety records. A multiattribute analysis of corporate safety culture characteristics resulted in a hierarchical description of corporate safety culture comprised of three major categories — people, process, and value. These three major categories were decomposed into 54 measurable questions and used to develop a questionnaire to quantify corporate safety culture. The SEM identified five latent variables that describe corporate safety culture: (1) a company’s safety commitment; (2) the safety incentives that are offered to field personnel for safe performance; (3) the subcontractor involvement in the company culture; (4) the field safety accountability and dedication; and (5) the disincentives for unsafe behaviors. These characteristics of company safety culture serve as indicators for a company’s safety performance. Based on the findings from this limited sample of three companies, this paper proposes a list of practices that companies may consider to improve corporate safety culture and safety performance. A more comprehensive study based on a larger sample is recommended to corroborate the findings of this study.
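A hedged sketch of how a comparable measurement-and-structural model could be specified, using the lavaan-style syntax of the semopy package; the factor names, indicator items (q1..q6) and data file are placeholders, not the paper's 54-question instrument.

```python
import pandas as pd
from semopy import Model

# Placeholder specification: two latent factors measured by survey items,
# regressed onto an observed safety-performance score.
spec = """
commitment =~ q1 + q2 + q3
incentives =~ q4 + q5 + q6
performance ~ commitment + incentives
"""

data = pd.read_csv("safety_survey.csv")  # hypothetical file of item responses
model = Model(spec)
model.fit(data)
print(model.inspect())  # parameter estimates and standard errors
```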
Abstract:
Entrepreneurship, creativity, and design are all ingredients of the innovation process and are sometimes confused, misapplied, and used interchangeably. This conceptual paper responds to recent calls for further investigation of the links between entrepreneurship and related disciplines, and explores a solution-focused approach most strongly developed and applied in new product and enterprise development — that of design and design thinking. The paper extends prior research on entrepreneurship, creativity, and design, and argues for tighter links between these notions in the establishment and ongoing evolution of enterprises.
Abstract:
The structures of the anhydrous 1:1 proton-transfer compounds of isonipecotamide (piperidine-4-carboxamide) with the three isomeric mononitro-substituted benzoic acids and 3,5-dinitrobenzoic acid, namely 4-carbamoylpiperidinium 2-nitrobenzoate (I), 4-carbamoylpiperidinium 3-nitrobenzoate (II), 4-carbamoylpiperidinium 4-nitrobenzoate (III) (C6H13N2O+ C7H4NO4-) and 4-carbamoylpiperidinium 3,5-dinitrobenzoate (IV) (C6H13N2O+ C7H5N2O6-) respectively, have been determined at 200 K. All salts form hydrogen-bonded structures: three-dimensional in (I), two-dimensional in (II) and (III) and one-dimensional in (IV). Featured in the hydrogen bonding of three of these [(I), (II) and (IV)] is the cyclic head-to-head amide-amide homodimer motif [graph set R2/2(8)] formed through a duplex N-H...O association, the dimer then giving structure extension via either piperidinium or amide H-donors and carboxylate-O and, in some examples [(II) and (IV)], nitro-O atom acceptors. In (I), the centrosymmetric amide-amide homodimers are expanded laterally through N-H...O hydrogen bonds via cyclic R2/4(8) interactions, forming ribbons which extend along the c cell direction. These ribbons incorporate the 2-nitrobenzoate anions through centrosymmetric cyclic piperidinium N-H...O(carboxyl) associations [graph set R4/4(12)], giving inter-connected sheets in the three-dimensional structure. In (II), in which no amide-amide homodimer is present, duplex piperidinium N-H...O(amide) homomolecular hydrogen-bonding associations [graph set R2/2(14)] give centrosymmetric head-to-tail dimers. Structure extension occurs through hydrogen-bonding associations between both the amide H-donors and carboxyl and nitro O-acceptors, as well as a three-centre piperidinium N-H...O,O'(carboxyl) cyclic R2/1(4) association, giving the two-dimensional network structure. In (III), the centrosymmetric amide-amide dimers are linked through the two carboxyl O-atom acceptors of the anions via bridging piperidinium and amide N-H...O,O'...H-N(amide) hydrogen bonds, giving the two-dimensional sheet structure which features centrosymmetric cyclic R4/4(12) associations. In (IV), the amide-amide dimer is also centrosymmetric, with the dimers linked to the anions through amide N-H...O(nitro) interactions. The piperidinium groups extend the structure into one-dimensional ribbons via N-H...O(carboxyl) hydrogen bonds. The structures reported here further demonstrate the utility of the isonipecotamide cation in molecular assembly, highlight the efficacy of the cyclic R2/2(8) amide-amide hydrogen-bonding homodimer motif in this process, and provide an additional homodimer motif type in the head-to-tail R2/2(14) association.
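For readers unfamiliar with graph-set notation, the plain-text descriptors Ra/d(n) above appear to correspond to the standard Etter form, in which the superscript counts acceptors, the subscript counts donors, and n is the ring size; typeset, the motifs mentioned read:

```latex
% Etter graph-set descriptors: R^{a}_{d}(n), a = acceptors, d = donors,
% n = number of atoms in the hydrogen-bonded ring.
\[
  R^{2}_{2}(8)\ \text{(amide--amide homodimer)},\quad
  R^{2}_{4}(8),\quad R^{4}_{4}(12),\quad R^{2}_{2}(14),\quad R^{2}_{1}(4)
\]
```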
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
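The oscillation-counting thermometry rests on a simple relation: one full intensity oscillation corresponds to a change of one wavelength in the optical path difference L·Δn, so the temperature change per fringe is λ/(L·|d(Δn)/dT|). A minimal sketch with illustrative parameter values (not the thesis's):

```python
# One intensity fringe <=> optical path difference L*dn changes by one
# wavelength. All values below are illustrative placeholders.
wavelength = 633e-9   # probe wavelength (m), hypothetical HeNe line
L = 5e-3              # crystal thickness along the beam (m)
ddn_dT = 4e-5         # |d(birefringence)/dT| (1/K), illustrative

dT_per_fringe = wavelength / (L * ddn_dT)   # temperature change per fringe
n_fringes = 12                              # counted from the CCD trace
print(f"inferred temperature change: {n_fringes * dT_per_fringe:.1f} K")
```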
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible since the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with the stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm, for accurate and reliable pattern storage.
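As a rough illustration of beam-propagation modelling of this kind, the sketch below implements a split-step Fourier BPM (the thesis uses a finite-difference BPM; the Fourier variant is shown for brevity) with invented parameters for a stripe-like induced index change:

```python
# Paraxial split-step BPM: diffraction in Fourier space, index phase in
# real space. Parameters are illustrative, not the thesis's.
import numpy as np

wavelength = 633e-9
k0 = 2 * np.pi / wavelength
n0 = 2.2                       # background index (LiNbO3-like)
N, width = 1024, 400e-6        # transverse grid
x = np.linspace(-width / 2, width / 2, N)
kx = 2 * np.pi * np.fft.fftfreq(N, x[1] - x[0])

dz, steps = 1e-6, 2000                       # 2 mm of propagation
field = np.exp(-(x / 40e-6) ** 2)            # Gaussian input beam
dn = -1e-4 * (np.abs(x) < 15e-6)             # stripe-like index change

half_diff = np.exp(-1j * kx**2 / (2 * k0 * n0) * dz / 2)  # diffraction half-step
for _ in range(steps):
    field = np.fft.ifft(half_diff * np.fft.fft(field))
    field *= np.exp(1j * k0 * dn * dz)                    # phase from dn
    field = np.fft.ifft(half_diff * np.fft.fft(field))

print("output peak intensity:", np.max(np.abs(field) ** 2))
```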
Abstract:
This thesis employs the theoretical fusion of disciplinary knowledge, interlacing an analysis from both functional and interpretive frameworks and applies these paradigms to three concepts—organisational identity, the balanced scorecard performance measurement system, and control. As an applied thesis, this study highlights how particular public sector organisations are using a range of multi-disciplinary forms of knowledge constructed for their needs to achieve practical outcomes. Practical evidence of this study is not bound by a single disciplinary field or the concerns raised by academics about the rigorous application of academic knowledge. The study’s value lies in its ability to explore how current communication and accounting knowledge is being used for practical purposes in organisational life. The main focus of this thesis is on identities in an organisational communication context. In exploring the theoretical and practical challenges, the research questions for this thesis were formulated as: 1. Is it possible to effectively control identities in organisations by the use of an integrated performance measurement system—the balanced scorecard—and if so, how? 2. What is the relationship between identities and an integrated performance measurement system—the balanced scorecard—in the identity construction process? Identities in the organisational context have been extensively discussed in graphic design, corporate communication and marketing, strategic management, organisational behaviour, and social psychology literatures. Corporate identity is the self-presentation of the personality of an organisation (Van Riel, 1995; Van Riel & Balmer, 1997), and organisational identity is the statement of central characteristics described by members (Albert & Whetten, 2003). In this study, identity management is positioned as a strategically complex task, embracing not only logo and name, but also multiple dimensions, levels and facets of organisational life. Responding to the collaborative efforts of researchers and practitioners in identity conceptualisation and methodological approaches, this dissertation argues that analysis can be achieved through the use of an integrated framework of identity products, patternings and processes (Cornelissen, Haslam, & Balmer, 2007), transforming conceptualisations of corporate identity, organisational identity and identification studies. Likewise, the performance measurement literature from the accounting field now emphasises the importance of ‘soft’ non-financial measures in gauging performance—potentially allowing the monitoring and regulation of ‘collective’ identities (Cornelissen et al., 2007). The balanced scorecard (BSC) (Kaplan & Norton, 1996a), as the selected integrated performance measurement system, quantifies organisational performance under the four perspectives of finance, customer, internal process, and learning and growth. Broadening the traditional performance measurement boundary, the BSC transforms how organisations perceive themselves (Vaivio, 2007). The rhetorical and communicative value of the BSC has also been emphasised in organisational self-understanding (Malina, Nørreklit, & Selto, 2007; Malmi, 2001; Nørreklit, 2000, 2003). Thus, this study establishes a theoretical connection between the controlling effects of the BSC and organisational identity construction. Common to both literatures, the aspects of control became the focus of this dissertation, as ‘the exercise or act of achieving a goal’ (Tompkins & Cheney, 1985, p. 180).
This study explores not only traditional technical and bureaucratic control (Edwards, 1981), but also concertive control (Tompkins & Cheney, 1985), shifting the locus of control to employees who make their own decisions towards desired organisational premises (Simon, 1976). The controlling effects on collective identities are explored through the lens of the rhetorical frames mobilised through the power of organisational enthymemes (Tompkins & Cheney, 1985) and identification processes (Ashforth, Harrison, & Corley, 2008). In operationalising the concept of control, two guiding questions were developed to support the research questions: 1.1 How does the use of the balanced scorecard monitor identities in public sector organisations? 1.2 How does the use of the balanced scorecard regulate identities in public sector organisations? This study adopts qualitative multiple case studies using ethnographic techniques. Data were gathered from interviews of 41 managers, organisational documents, and participant observation from 2003 to 2008, to inform an understanding of organisational practices and members’ perceptions in the five cases of two public sector organisations in Australia. Drawing on the functional and interpretive paradigms, the effective design and use of the systems, as well as the understanding of shared meanings of identities and identifications, are simultaneously recognised. The analytical structure guided by the ‘bracketing’ (Lewis & Grimes, 1999) and ‘interplay’ strategies (Schultz & Hatch, 1996) preserved, connected and contrasted the unique findings from the multi-paradigms. The ‘temporal bracketing’ strategy (Langley, 1999) from the process view supports the comparative exploration of the analysis over the periods under study. The findings suggest that the effective use of the BSC can monitor and regulate identity products, patternings and processes. In monitoring identities, the flexible BSC framework allowed the case study organisations to monitor various aspects of finance, customer, improvement and organisational capability that included identity dimensions. Such inclusion legitimises identity management as organisational performance. In regulating identities, the use of the BSC created a mechanism to form collective identities by articulating various perspectives and causal linkages, and through the cascading and alignment of multiple scorecards. The BSC—directly reflecting organisationally valued premises and legitimised symbols—acted as an identity product of communication, visual symbols and behavioural guidance. The selective promotion of the BSC measures filtered organisational focus to shape unique identity multiplicity and characteristics within the cases. Further, the use of the BSC facilitated the assimilation of multiple identities by controlling the direction and strength of identifications, engaging different groups of members. More specifically, the tight authority of the BSC framework and systems is explained both by technical and bureaucratic controls, while subtle communication of organisational premises and information filtering is achieved through concertive control. This study confirms that these macro top-down controls mediated the sensebreaking and sensegiving process of organisational identification, supporting research by Ashforth, Harrison and Corley (2008).
This study pays attention to members’ power of self-regulation, filling minor premises of the derived logic of their organisation through the playing out of organisational enthymemes (Tompkins & Cheney, 1985). Members are then encouraged to make their own decisions towards the organisational premises embedded in the BSC, through the micro bottom-up identification processes including: enacting organisationally valued identities; sensemaking; and the construction of identity narratives aligned with those organisationally valued premises. Within the process, the self-referential effect of communication encouraged members to believe the organisational messages embedded in the BSC in transforming collective and individual identities. Therefore, communication through the use of the BSC continued the self-producing of normative performance mechanisms, established meanings of identities, and enabled members’ self-regulation in identity construction. Further, this research establishes the relationship between identity and the use of the BSC in terms of identity multiplicity and attributes. The BSC framework constrained and enabled case study organisations and members to monitor and regulate identity multiplicity across a number of dimensions, levels and facets. The use of the BSC constantly heightened the identity attributes of distinctiveness, relativity, visibility, fluidity and manageability in identity construction over time. Overall, this research explains the reciprocal controlling relationships of multiple structures in organisations to achieve a goal. It bridges the gap among corporate and organisational identity theories by adopting Cornelissen, Haslam and Balmer’s (2007) integrated identity framework, and reduces the gap in understanding between identity and performance measurement studies. Parallel review of the process of monitoring and regulating identities from both literatures synthesised the theoretical strengths of both to conceptualise and operationalise identities. This study extends the discussion on positioning identity, culture, commitment, and image and reputation measures in integrated performance measurement systems as organisational capital. Further, this study applies understanding of the multiple forms of control (Edwards, 1979; Tompkins & Cheney, 1985), emphasising the power of organisational members in identification processes, using the notion of rhetorical organisational enthymemes. This highlights the value of the collaborative theoretical power of identity, communication and performance measurement frameworks. These case studies provide practical insights about the public sector where existing bureaucracy and desired organisational identity directions are competing within a large organisational setting. Further research on personal identity and simple control in organisations that fully cascade the BSC down to individual members would provide enriched data. The extended application of the conceptual framework to other public and private sector organisations with a longitudinal view will also contribute to further theory building.
Abstract:
Objective: Empowerment is a complex process of psychological, social, organizational and structural change. It allows individuals and groups to achieve positive growth and effectively address the social and psychological impacts of historical oppression, marginalization and disadvantage. The Growth and Empowerment Measure (GEM) was developed to measure change in dimensions of empowerment as defined and described by Aboriginal Australians who participated in the Family Well Being programme.---------- Method: The GEM has two components: a 14-item Emotional Empowerment Scale (EES14) and 12 Scenarios (12S). It is accompanied by the Kessler 6 Psychological Distress Scale (K6), supplemented by two questions assessing frequency of happy and angry feelings. For validation, the measure was applied with 184 Indigenous Australian participants involved in personal and/or organizational social health activities.---------- Results: Psychometric analyses of the new instruments support their validity and reliability and indicate two-component structures for both the EES (Self-capacity; Inner peace) and the 12S (Healing and enabling growth; Connection and purpose). Strong correlations were observed across the scales and subscales. Participants who scored higher on the newly developed scales showed lower distress on the K6, particularly when the two additional questions were included. However, exploratory factor analyses demonstrated that the GEM subscales are separable from the Kessler distress measure.---------- Conclusion: The GEM shows promise in enabling measurement and enhancing understanding of both the process and outcome of psychological and social empowerment within an Australian Indigenous context.
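An exploratory factor-analysis check of the kind reported above might, in outline, look like the following; the item responses are random placeholders (so no real structure will emerge), and scikit-learn's FactorAnalysis stands in for whatever software the authors used:

```python
# Placeholder exploratory factor analysis on synthetic Likert-type items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 184, 14      # mirrors the sample and EES14 item count
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=2)   # two-component structure, as reported
fa.fit(responses)
print(np.round(fa.components_, 2))    # loadings of each item on the two factors
```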
Abstract:
At QUT research data refers to information that is generated or collected to be used as primary sources in the production of original research results, and which would be required to validate or replicate research findings (Callan, De Vine, & Baker, 2010). Making publicly funded research data discoverable by the broader research community and the public is a key aim of the Australian National Data Service (ANDS). Queensland University of Technology (QUT) has been innovating in this space by undertaking mutually dependent technical and content (metadata) focused projects funded by ANDS. Research Data Librarians identified and described datasets generated from Category 1 funded research at QUT by interviewing researchers, collecting metadata and fashioning metadata records for upload to the Australian Research Data Commons (ARDC) and exposure through the Research Data Australia interface. In parallel to this project, a Research Data Management Service and Metadata Hub project were being undertaken by QUT High Performance Computing & Research Support specialists. These projects will collectively store and aggregate QUT’s metadata and research data from multiple repositories and administration systems and contribute metadata directly by OAI-PMH compliant feed to RDA. The pioneering nature of the work has resulted in a collaborative project dynamic where good data management practices and the discoverability and sharing of research data were the shared drivers for all activity. Each project’s development and progress was dependent on feedback from the other. The metadata structure evolved in tandem with the development of the repository, and the development of the repository interface responded to meet the needs of the data interview process. The project environment was one of bottom-up collaborative approaches to process and system development, matched by top-down strategic alliances crossing organisational boundaries in order to provide the deliverables required by ANDS. This paper showcases the work undertaken at QUT, focusing on the Seeding the Commons project as a case study, and illustrates how the data management projects are interconnected. It describes the processes and systems being established to make QUT research data more visible and the nature of the collaborations between organisational areas required to achieve this. The paper concludes with the Seeding the Commons project outcomes and the contribution this project made to getting more research data ‘out there’.
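An OAI-PMH-compliant feed of the kind described is harvested with standard protocol verbs such as ListRecords. A minimal sketch against a placeholder endpoint (not QUT's actual service):

```python
# Harvest Dublin Core records from an OAI-PMH endpoint (placeholder URL).
import requests
import xml.etree.ElementTree as ET

BASE = "https://repository.example.edu/oai"   # hypothetical endpoint
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

resp = requests.get(BASE, params=params, timeout=30)
root = ET.fromstring(resp.content)

# Print the Dublin Core title of each harvested record.
for title in root.iter("{http://purl.org/dc/elements/1.1/}title"):
    print(title.text)
```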