Abstract:
Adolescent drinking is a significant issue, yet valid psychometric tools designed for this group are scarce. The Drinking Refusal Self-Efficacy Questionnaire—Revised Adolescent Version (DRSEQ-RA) is designed to assess an individual's belief in their ability to resist drinking alcohol. The original DRSEQ-R consists of three factors reflecting social pressure refusal self-efficacy, opportunistic refusal self-efficacy and emotional relief refusal self-efficacy. A large sample of 2020 adolescents aged between 12 and 19 years completed the DRSEQ and measures of alcohol consumption in small groups. Using confirmatory factor analysis, the three-factor structure was confirmed. All three factors were negatively correlated with both frequency and volume of alcohol consumption. Drinkers reported lower drinking refusal self-efficacy than non-drinkers. Taken together, these results suggest that the adolescent version of the Drinking Refusal Self-Efficacy Questionnaire (DRSEQ-RA) is a reliable and valid measure of drinking refusal self-efficacy.
Abstract:
Today’s evolving networks are experiencing a large number of different attacks, ranging from system break-ins and infection by automated attack tools such as worms, viruses and trojan horses, to denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate and target Internet addresses without regard to whether they are bona fide allocated or not. Due to the absence of any advertised host services, the traffic observed on unused IP addresses is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and unearth unusual attack behaviors. However, such an analysis is difficult due to the size and nature of the collected traffic on unused address spaces. In this dissertation, we present a network traffic analysis technique which uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic in order to accurately and quickly detect new and ongoing network anomalies. Detection of network anomalies is based on the concept that an anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. In this dissertation, we use sequential analysis techniques to identify changes in the behavior of network traffic targeting unused address spaces to unveil both ongoing and new attack patterns. Specifically, we have developed a dynamic sliding-window-based non-parametric cumulative sum (CUSUM) change detection technique for the identification of changes in network traffic. Furthermore, we have introduced dynamic thresholds to detect changes in network traffic behavior and also to detect when a particular change has ended. Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
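As an illustration of the kind of detector described above, the sketch below implements a non-parametric CUSUM statistic computed over a sliding window of recent traffic counts, with a data-driven threshold. The window length, slack term and threshold factor are illustrative assumptions, not the dissertation's tuned values.

```python
import numpy as np

def cusum_detect(counts, window=50, k=0.5, h_factor=5.0):
    """Non-parametric CUSUM change detector over a sliding window.

    counts   : per-bin traffic statistic (e.g. packets hitting unused addresses)
    window   : number of recent bins used to re-estimate the 'normal' level
    k        : slack term, in units of the window's standard deviation
    h_factor : alarm threshold, in units of the window's standard deviation
    Returns the indices of bins where a change is flagged.
    """
    s = 0.0                                  # cumulative sum statistic
    alarms = []
    for t in range(window, len(counts)):
        ref = counts[t - window:t]           # recent history
        mu, sigma = ref.mean(), ref.std() + 1e-9
        # only positive deviations beyond the slack accumulate
        s = max(0.0, s + (counts[t] - mu - k * sigma))
        if s > h_factor * sigma:             # dynamic, data-driven threshold
            alarms.append(t)
            s = 0.0                          # restart accumulation after an alarm
    return alarms

# Example: flat background traffic with a burst injected around bin 300
rng = np.random.default_rng(0)
traffic = rng.poisson(20, 500).astype(float)
traffic[300:340] += 60
print(cusum_detect(traffic))
```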
Abstract:
BACKGROUND: Support and education for parents faced with managing a child with atopic dermatitis is crucial to the success of current treatments. Interventions aiming to improve parent management of this condition are promising. Unfortunately, evaluation is hampered by a lack of precise research tools to measure change. OBJECTIVES: To develop a suite of valid and reliable research instruments to appraise parents' self-efficacy for performing atopic dermatitis management tasks; outcome expectations of performing management tasks; and self-reported task performance in a community sample of parents of children with atopic dermatitis. METHODS: The Parents' Eczema Management Scale (PEMS) and the Parents' Outcome Expectations of Eczema Management Scale (POEEMS) were developed from an existing self-efficacy scale, the Parental Self-Efficacy with Eczema Care Index (PASECI). Each scale was presented in a single self-administered questionnaire to measure self-efficacy, outcome expectations, and self-reported task performance related to managing child atopic dermatitis. Each was tested with a community sample of parents of children with atopic dermatitis, and psychometric evaluation of the scales' reliability and validity was conducted. SETTING AND PARTICIPANTS: A community-based convenience sample of 120 parents of children with atopic dermatitis completed the self-administered questionnaire. Participants were recruited through schools across Australia. RESULTS: Satisfactory internal consistency and test-retest reliability were demonstrated for all three scales. Construct validity was satisfactory, with positive relationships between self-efficacy for managing atopic dermatitis and general perceived self-efficacy; self-efficacy for managing atopic dermatitis and self-reported task performance; and self-efficacy for managing atopic dermatitis and outcome expectations. Factor analyses revealed two-factor structures for PEMS and PASECI alike, with both scales containing factors related to performing routine management tasks, and managing the child's symptoms and behaviour. Factor analysis was also applied to POEEMS, resulting in a three-factor structure. Factors relating to independent management of atopic dermatitis by the parent, involving healthcare professionals in management, and involving the child in the management of atopic dermatitis were found. Parents' self-efficacy and outcome expectations had a significant influence on self-reported task performance. CONCLUSIONS: Findings suggest that PEMS and POEEMS are valid and reliable instruments worthy of further psychometric evaluation. Likewise, the validity and reliability of PASECI were confirmed.
Abstract:
Scientific discoveries, developments in medicine and health issues are the constant focus of media attention, and the principles surrounding the creation of so-called ‘saviour siblings’ are no exception. The development in the field of reproductive techniques has provided the ability to genetically analyse embryos created in the laboratory to enable parents to implant selected embryos to create a tissue-matched child who may be able to cure an existing sick child. The research undertaken in this thesis examines the regulatory frameworks overseeing the delivery of assisted reproductive technologies (ART) in Australia and the United Kingdom and considers how those frameworks impact on the accessibility of in vitro fertilisation (IVF) procedures for the creation of ‘saviour siblings’. In some jurisdictions, the accessibility of such techniques is limited by statutory requirements. The limitations and restrictions imposed by the state in relation to the technology are analysed in order to establish whether such restrictions are justified. The analysis is conducted on the basis of a harm framework. The framework seeks to establish whether those affected by the use of the technology (including the child who will be created) are harmed. In order to undertake such an evaluation, the concept of harm is considered within the scope of John Stuart Mill’s liberal theory, and the Harm Principle is used as a normative tool to judge whether the level of harm that may result justifies state intervention in, or restriction of, the reproductive decision-making of parents in this context. The harm analysis conducted in this thesis seeks to determine an appropriate regulatory response in relation to the use of pre-implantation tissue-typing for the creation of ‘saviour siblings’. The proposals outlined in the last part of this thesis seek to address the concern that harm may result from the practice of pre-implantation tissue-typing. The current regulatory frameworks in place are also analysed on the basis of the harm framework established in this thesis. The material referred to in this thesis reflects the law and policy in place in Australia and the UK at the time the thesis was submitted for examination (December 2009).
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. Some of these advantages are a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. Then the optimal lattice filter is derived for frequency modulated signals. This is performed by computing the optimal values of residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals. This is carried out by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite variance input signals (like frequency modulated signals in noise), does not achieve fast convergence for infinite variance stable processes (due to the use of the minimum mean-square error criterion). To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower order moments, and recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on the fractional lower order moments. Simulation results show that, using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness in comparison to many other algorithms. Also, we discuss the effect of the impulsiveness of stable processes on generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only using extensive computer simulations.
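The core idea behind the proposed algorithms is replacing the mean-square error criterion with a fractional lower-order (minimum dispersion) criterion. The sketch below shows a least-mean p-norm update in its simplest transversal form; the thesis embeds this criterion in a lattice structure, which is not reproduced here, and the signal model, heavy-tailed noise stand-in and step size are illustrative assumptions.

```python
import numpy as np

def lmp_filter(x, d, order=2, mu=1e-3, p=1.2):
    """Least-mean p-norm (LMP) adaptive FIR predictor, with p below alpha.

    Minimises the dispersion E|e|^p rather than the mean-square error, so the
    update remains well behaved when the input is impulsive (heavy tailed).
    x : input signal, d : desired signal. Returns (weights, error history).
    """
    w = np.zeros(order)
    e_hist = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                 # regressor [x[n-1], ..., x[n-order]]
        e = d[n] - w @ u                         # a priori prediction error
        # gradient of |e|^p is proportional to sign(e)*|e|^(p-1)
        w += mu * np.sign(e) * np.abs(e) ** (p - 1) * u
        e_hist[n] = e
    return w, e_hist

# Example: one-step prediction of an AR(2) process driven by heavy-tailed noise
rng = np.random.default_rng(1)
a_true = np.array([1.2, -0.5])                   # true AR parameters
noise = rng.standard_t(df=2.5, size=5000)        # stand-in for alpha-stable noise
sig = np.zeros(5000)
for n in range(2, 5000):
    sig[n] = a_true @ sig[n - 2:n][::-1] + noise[n]
w, _ = lmp_filter(sig, sig)
print("estimated AR parameters:", w)             # should approach [1.2, -0.5]
```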
Abstract:
This work is a digital version of a dissertation that was first submitted in partial fulfillment of the Degree of Doctor of Philosophy at the Queensland University of Technology (QUT) in March 1994. The work was concerned with problems of self-organisation and organisation ranging from local to global levels of hierarchy. It considers organisations as living entities and examines, from local to global levels, what a living entity – more particularly, an individual, a body corporate or a body politic – must know and do to maintain an existence – that is, to remain viable – or to be sustainable. The term ‘land management’ as used in 1994 was later subsumed into a more general concept of ‘natural resource management’ and then merged with ideas about sustainable socioeconomic and sustainable ecological development. The cybernetic approach contains many cognitive elements of human observation, language and learning that combine into production processes. The approach tends to highlight instances where systems (or organisations) can fail because they have very little chance of succeeding. Thus there are logical necessities as well as technical possibilities in designing, constructing, operating and maintaining production systems that function reliably over extended periods. Chapter numbers and titles of the original thesis are as follows:
1. Land management as a problem of coping with complexity
2. Background theory in systems theory and cybernetic principles
3. Operationalisation of cybernetic principles in Beer’s Viable System Model
4. Issues in the design of viable cadastral surveying and mapping organisation
5. An analysis of the tendency for fragmentation in surveying and mapping organisation
6. Perambulating the boundaries of Sydney – a problem of social control under poor standards of literacy
7. Cybernetic principles in the process of legislation
8. Closer settlement policy and viability in agricultural production
9. Rate of return in leasing Crown lands
Study of industrially relevant boundary layer and axisymmetric flows, including swirl and turbulence
Abstract:
Micropolar and RNG-based modelling of industrially relevant boundary layer and recirculating swirling flows is described. Both models contain a number of adjustable parameters and auxiliary conditions that must be either modelled or experimentally determined, and the effects of varying these on the resulting flow solutions are quantified. To these ends, the behaviour of the micropolar model for self-similar flow over a surface that is both stretching and transpiring is explored in depth. The simplified governing equations permit both analytic and numerical approaches to be adopted, and a number of closed form solutions (both exact and approximate) are obtained using perturbation and order of magnitude analyses. Results are compared with the corresponding Newtonian flow solution in order to highlight the differences between the micropolar and classical models, and significant new insights into the behaviour of the micropolar model are revealed for this flow. The behaviour of the RNG-based models for swirling flow with vortex breakdown zones is explored in depth via computational modelling of two experimental data sets and an idealised breakdown flow configuration. Meticulous modelling of upstream auxiliary conditions is required to correctly assess the behaviour of the models studied in this work. The novel concept of using the results to infer the role of turbulence in the onset and topology of the breakdown zone is employed.
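For context, the sketch below solves the classical Newtonian similarity equation for flow over a stretching, transpiring sheet, the baseline against which micropolar solutions of this kind are compared, and checks it against the known closed-form solution. The micropolar equations add a coupled microrotation field that is not shown, and the suction parameter value here is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import solve_bvp

def newtonian_stretching_sheet(fw=0.5, eta_max=15.0):
    """Newtonian self-similar flow over a stretching, transpiring sheet:
        f''' + f f'' - (f')^2 = 0,   f(0) = fw,  f'(0) = 1,  f'(inf) = 0,
    where fw > 0 represents suction through the surface."""
    def odes(eta, y):                      # y = [f, f', f'']
        return np.vstack((y[1], y[2], y[1] ** 2 - y[0] * y[2]))

    def bc(ya, yb):
        return np.array([ya[0] - fw, ya[1] - 1.0, yb[1]])

    eta = np.linspace(0.0, eta_max, 200)
    y_guess = np.zeros((3, eta.size))
    y_guess[1] = np.exp(-eta)              # guess close to the known solution
    return solve_bvp(odes, bc, eta, y_guess)

sol = newtonian_stretching_sheet(fw=0.5)
beta = 0.5 * (0.5 + np.sqrt(0.5 ** 2 + 4.0))    # exact solution: f'(eta) = exp(-beta*eta)
print("numerical wall shear f''(0) =", sol.y[2, 0])
print("exact     wall shear f''(0) =", -beta)
```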
Abstract:
Bioelectrical impedance analysis (BIA) is a method of body composition analysis first investigated in 1962 which has recently received much attention from a number of research groups. The reasons for this recent interest are its advantages (viz: inexpensive, non-invasive and portable) and also the increasing interest in the diagnostic value of body composition analysis. The concept utilised by BIA to predict body water volumes is the proportional relationship for a simple cylindrical conductor (volume ∝ length²/resistance), which allows the volume to be predicted from the measured resistance and length. Most of the research to date has measured the body's resistance to the passage of a 50 kHz AC current to predict total body water (TBW). Several research groups have investigated the application of AC currents at lower frequencies (e.g. 5 kHz) to predict extracellular water (ECW). However, all research to date using BIA to predict body water volumes has used the impedance measured at a discrete frequency or frequencies. This thesis investigates the variation of impedance and phase of biological systems over a range of frequencies and describes the development of a swept frequency bioimpedance meter which measures impedance and phase at 496 frequencies ranging from 4 kHz to 1 MHz. The impedance of any biological system varies with the frequency of the applied current. The graph of reactance vs resistance yields a circular arc, with the resistance decreasing with increasing frequency and the reactance increasing from zero to a maximum and then decreasing to zero. Computer programs were written to analyse the measured impedance spectrum and determine the impedance, Zc, at the characteristic frequency (the frequency at which the reactance is a maximum). The fitted locus of the measured data was extrapolated to determine the resistance, Ro, at zero frequency; a value that cannot be measured directly using surface electrodes. The explanation of the theoretical basis for selecting these impedance values (Zc and Ro) to predict TBW and ECW is presented. Studies were conducted on a group of normal healthy animals (n=42) in which TBW and ECW were determined by the gold standard of isotope dilution. The prediction quotients L²/Zc and L²/Ro (L=length) yielded standard errors of 4.2% and 3.2% respectively, and were found to be significantly better than previously reported, empirically determined prediction quotients derived from measurements at a single frequency. The prediction equations established in this group of normal healthy animals were applied to a group of animals with abnormally low fluid levels (n=20), and also to a group with an abnormal balance of extracellular to intracellular fluids (n=20). In both cases the equations using L²/Zc and L²/Ro accurately and precisely predicted TBW and ECW. This demonstrated that the technique developed using multiple frequency bioelectrical impedance analysis (MFBIA) can accurately predict both TBW and ECW in both normal and abnormal animals (with standard errors of the estimate of 6% and 3% for TBW and ECW respectively). Isotope dilution techniques were used to determine TBW and ECW in a group of 60 healthy human subjects (male and female, aged between 18 and 45). Whole body impedance measurements were recorded on each subject using the MFBIA technique and the correlations between body water volumes (TBW and ECW) and height²/impedance (for all measured frequencies) were compared.
The prediction quotients H²/Zc and H²/Ro (H=height) again yielded the highest correlation with TBW and ECW respectively with corresponding standard errors of 5.2% and 10%. The values of the correlation coefficients obtained in this study were very similar to those recently reported by others. It was also observed that in healthy human subjects the impedance measured at virtually any frequency yielded correlations not significantly different from those obtained from the MFBIA quotients. This phenomenon has been reported by other research groups and emphasises the need to validate the technique by investigating its application in one or more groups with abnormalities in fluid levels. The clinical application of MFBIA was trialled and its capability of detecting lymphoedema (an excess of extracellular fluid) was investigated. The MFBIA technique was demonstrated to be significantly more sensitive (P<.05) in detecting lymphoedema than the current technique of circumferential measurements. MFBIA was also shown to provide valuable information describing the changes in the quantity of muscle mass of the patient during the course of the treatment. The determination of body composition (viz TBW and ECW) by MFBIA has been shown to be a significant improvement on previous bioelectrical impedance techniques. The merit of the MFBIA technique is evidenced in its accurate, precise and valid application in animal groups with a wide variation in body fluid volumes and balances. The multiple frequency bioelectrical impedance analysis technique developed in this study provides accurate and precise estimates of body composition (viz TBW and ECW) regardless of the individual's state of health.
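A minimal sketch of the arc-fitting step described above is given below: a simple least-squares circle fit (one possible implementation) is applied to the measured impedance locus and extrapolated to obtain Ro and Zc, from which the prediction quotients are formed. The synthetic Cole-type spectrum and the height value are made-up illustrations, not data from the thesis.

```python
import numpy as np

def fit_cole_arc(resistance, reactance):
    """Least-squares (Kasa) circle fit to the measured impedance locus.

    Returns R0 (extrapolated zero-frequency resistance, not directly measurable
    with surface electrodes), Rinf (infinite-frequency intercept) and Zc (the
    impedance magnitude at the top of the arc, i.e. at the characteristic
    frequency where the reactance is maximal)."""
    x = np.asarray(resistance, dtype=float)
    y = np.abs(np.asarray(reactance, dtype=float))
    # solve x^2 + y^2 = A*x + B*y + C in the least-squares sense
    M = np.column_stack([x, y, np.ones_like(x)])
    A, B, C = np.linalg.lstsq(M, x ** 2 + y ** 2, rcond=None)[0]
    cx, cy = A / 2.0, B / 2.0
    r = np.sqrt(C + cx ** 2 + cy ** 2)
    half_chord = np.sqrt(r ** 2 - cy ** 2)
    return cx + half_chord, cx - half_chord, np.hypot(cx, cy + r)

# Illustration with a synthetic single-dispersion (Cole-type) spectrum
freqs = np.logspace(np.log10(4e3), 6, 496)          # 4 kHz to 1 MHz, 496 points
R0_true, Rinf_true, f_char = 600.0, 400.0, 50e3     # made-up tissue parameters
Z = Rinf_true + (R0_true - Rinf_true) / (1 + 1j * freqs / f_char)
R0, Rinf, Zc = fit_cole_arc(Z.real, Z.imag)
height = 1.75                                        # metres, illustrative
print("R0 =", R0, "Zc =", Zc, "H^2/R0 =", height ** 2 / R0)
```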
Abstract:
The Mount Isa Basin is a new concept used to describe the area of Palaeo- to Mesoproterozoic rocks south of the Murphy Inlier and inappropriately described presently as the Mount Isa Inlier. The new basin concept presented in this thesis allows for the characterisation of basin-wide structural deformation, correlation of mineralisation with particular lithostratigraphic and seismic stratigraphic packages, and the recognition of areas with petroleum exploration potential. The northern depositional margin of the Mount Isa Basin is the metamorphic, intrusive and volcanic complex here referred to as the Murphy Inlier (not the "Murphy Tectonic Ridge"). The eastern, southern and western boundaries of the basin are obscured by younger basins (Carpentaria, Eromanga and Georgina Basins). The Murphy Inlier rocks comprise the seismic basement to the Mount Isa Basin sequence. Evidence for the continuity of the Mount Isa Basin with the McArthur Basin to the northwest and the Willyama Block (Basin) at Broken Hill to the south is presented. These areas, combined with several other areas of similar age, are believed to have comprised the Carpentarian Superbasin (new term). The application of seismic exploration within Authority to Prospect (ATP) 423P at the northern margin of the basin was critical to the recognition and definition of the Mount Isa Basin. The Mount Isa Basin is structurally analogous to the Palaeozoic Arkoma Basin of Illinois and Arkansas in southern USA but, as with all basins, it contains unique characteristics, a function of its individual development history. The Mount Isa Basin evolved in a manner similar to many well-described, Phanerozoic plate tectonic driven basins. A full Wilson Cycle is recognised and a plate tectonic model proposed. The northern Mount Isa Basin is defined as the Proterozoic basin area northwest of the Mount Gordon Fault. Deposition in the northern Mount Isa Basin began with a rift sequence of volcaniclastic sediments followed by a passive margin drift phase comprising mostly carbonate rocks. Following the rift and drift phases, major north-south compression produced east-west thrusting in the south of the basin, inverting the older sequences. This compression produced an asymmetric epi- or intra-cratonic clastic dominated peripheral foreland basin provenanced in the south and thinning markedly to a stable platform area (the Murphy Inlier) in the north. The final major deformation comprised east-west compression producing north-south aligned faults that are particularly prominent at Mount Isa. Potential field studies of the northern Mount Isa Basin, principally using magnetic data (and to a lesser extent gravity data, satellite images and aerial photographs), exhibit remarkable correlation with the reflection seismic data. The potential field data contributed significantly to the unravelling of the northern Mount Isa Basin architecture and deformation. Structurally, the Mount Isa Basin consists of three distinct regions. From north to south they are the Bowthorn Block, the Riversleigh Fold Zone and the Cloncurry Orogen (new names). The Bowthorn Block, which is located between the Elizabeth Creek Thrust Zone and the Murphy Inlier, consists of an asymmetric wedge of volcanic, carbonate and clastic rocks. It ranges from over 10 000 m of stratigraphic thickness in the south to less than 2000 m in the north.
The Bowthorn Block is relatively undeformed; however, it contains a series of reverse faults trending east-west that are interpreted from seismic data to be down-to-the-north normal faults that have been reactivated as thrusts. The Riversleigh Fold Zone is a folded and faulted region south of the Bowthorn Block, comprising much of the area formerly referred to as the Lawn Hill Platform. The Cloncurry Orogen consists of the area and sequences equivalent to the former Mount Isa Orogen. The name Cloncurry Orogen clearly distinguishes this area from the wider concept of the Mount Isa Basin. The South Nicholson Group and its probable correlatives, the Pilpah Sandstone and Quamby Conglomerate, comprise a later phase of now largely eroded deposits within the Mount Isa Basin. The name South Nicholson Basin is now outmoded, as this terminology applied only to the South Nicholson Group, unlike the original broader definition in Brown et al. (1968). Cored slimhole stratigraphic and mineral wells drilled by Amoco, Esso, Elf Aquitaine and Carpentaria Exploration prior to 1986 penetrated much of the stratigraphy and intersected both minor oil and gas shows plus excellent potential source rocks. The raw data were reinterpreted and augmented with seismic stratigraphy and source rock data from resampled mineral and petroleum stratigraphic exploration wells for this study. Since 1986, Comalco Aluminium Limited, as operator of a joint venture with Monument Resources Australia Limited and Bridge Oil Limited, recorded approximately 1000 km of reflection seismic data within the basin and drilled one conventional stratigraphic petroleum well, Beamesbrook-1. This work was the first reflection seismic and first conventional petroleum test of the northern Mount Isa Basin. When incorporated into the newly developed foreland basin and maturity models, a grass roots petroleum exploration play was recognised and this led to the present thesis. The Mount Isa Basin was seen to contain excellent source rocks coupled with potential reservoirs and all of the other essential aspects of a conventional petroleum exploration play. This play, although high risk, was commensurate with the enormous and totally untested petroleum potential of the basin. The basin was assessed for hydrocarbons in 1992 with three conventional exploration wells, Desert Creek-1, Argyle Creek-1 and Egilabria-1. These wells also tested and confirmed the proposed basin model. No commercially viable oil or gas was encountered, although evidence of its former existence was found. In addition to the petroleum exploration, indeed as a consequence of it, the association of the extensive base metal and other mineralisation in the Mount Isa Basin with hydrocarbons could not be overlooked. A comprehensive analysis of the available data suggests a link between the migration and possible generation or destruction of hydrocarbons and metal-bearing fluids. Consequently, base metal exploration based on hydrocarbon exploration concepts is probably the most effective technique in such basins. The metal-hydrocarbon-sedimentary basin-plate tectonic association (analogous to Phanerozoic models) is a compelling outcome of this work on the Palaeo- to Mesoproterozoic Mount Isa Basin. Petroleum within the Bowthorn Block was apparently destroyed by hot brines that produced many ore deposits elsewhere in the basin.
Abstract:
A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.
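The building block of the generic one-round construction is a KEM, i.e. a pair of encapsulation and decapsulation algorithms producing a shared symmetric key. The sketch below shows only that interface, with deliberately toy and insecure parameters; it does not reproduce the thesis's actual protocol, its security model, or the EP-AB-KEM construction.

```python
import hashlib
import secrets

# Toy Diffie-Hellman-style KEM: keygen / encapsulate / decapsulate.
# The modulus is a small Mersenne prime chosen only so the example runs;
# these parameters are NOT secure, and the thesis's constructions are not
# reproduced here.
P = 2 ** 127 - 1
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)                         # (private key, public key)

def encapsulate(pk):
    """Return (ciphertext, key): a fresh symmetric key encapsulated under pk."""
    r = secrets.randbelow(P - 2) + 1
    shared = pow(pk, r, P)
    return pow(G, r, P), hashlib.sha256(shared.to_bytes(16, "big")).digest()

def decapsulate(sk, ciphertext):
    """Recover the encapsulated symmetric key with the private key."""
    return hashlib.sha256(pow(ciphertext, sk, P).to_bytes(16, "big")).digest()

sk, pk = keygen()
ct, k_sender = encapsulate(pk)
assert k_sender == decapsulate(sk, ct)               # both sides derive the same key
```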
Abstract:
This paper demonstrates a model of self-regulation based on a qualitative research project with adult learners undertaking an undergraduate degree. The narratives about the participants’ life transitions, co-constructed with the researcher, yielded data about their generalised self-efficacy and resulted in a unique self-efficacy narrative for each participant. A model of self-regulation is proposed with potential applications for coaching, counselling and psychotherapy. A narrative method was employed to construct narratives about an individual’s self-efficacy in relation to their experience of learning and life transitions. The method involved a cyclical and iterative process using qualitative interviews to collect life history data from participants. In addition, research participants completed reflective homework tasks, and these data were included in the participants’ narratives. A highly collaborative method entailed narratives being co-constructed by researcher and research participants as the participants were guided in reflecting on their experience in relation to learning and life transitions; the reflection focused on the behaviour, cognitions and emotions that constitute a sense of self-efficacy. The analytic process used was narrative analysis, in which life is viewed as constructed and experienced through the telling and retelling of stories, and hence the analysis is the creation of a coherent and resonant story. The method of constructing self-efficacy narratives was applied to a sample of mature aged students starting an undergraduate degree. The research outcomes confirmed a three-factor model of self-efficacy, comprising three interrelated stages: initiating action, applying effort, and persisting in overcoming difficulties. Evaluation of the research process by participants suggested that they had gained an enhanced understanding of self-efficacy from their participation in the research process, and would be able to apply this understanding to their studies and other endeavours in the future. A model of self-regulation is proposed as a means for coaches, counsellors and psychotherapists working from a narrative constructivist perspective to assist clients facing life transitions by helping them generate self-efficacious cognitions, emotions and behaviour.
Abstract:
This paper outlines a method of constructing narratives about an individual’s self-efficacy. Self-efficacy is defined as “people’s judgments of their capabilities to organise and execute courses of action required to attain designated types of performances” (Bandura, 1986, p. 391), and as such represents a useful construct for thinking about personal agency. Social cognitive theory provides the theoretical framework for understanding the sources of self-efficacy, that is, the elements that contribute to a sense of self-efficacy. The narrative approach adopted offers an alternative to traditional, positivist psychology, characterised by a preoccupation with measuring psychological constructs (like self-efficacy) by means of questionnaires and scales. It is argued that these instruments yield scores which are somewhat removed from the lived experience of the person—respondent or subject—associated with the score. The method involves a cyclical and iterative process using qualitative interviews to collect data from participants – four mature aged university students. The method builds on a three-interview procedure designed for life history research (Dolbeare & Schuman, cited in Seidman, 1998). This is achieved by introducing reflective homework tasks, as well as written data generated by research participants, as they are guided in reflecting on those experiences (including behaviours, cognitions and emotions) that constitute a sense of self-efficacy, in narrative and by narrative. The method illustrates how narrative analysis is used “to produce stories as the outcome of the research” (Polkinghorne, 1995, p.15), with detail and depth contributing to an appreciation of the ‘lived experience’ of the participants. The method is highly collaborative, with narratives co-constructed by researcher and research participants. The research outcomes suggest an enhanced understanding of self-efficacy contributes to motivation, application of effort and persistence in overcoming difficulties. The paper concludes with an evaluation of the research process by the students who participated in the author’s doctoral study.
Abstract:
This thesis employs the theoretical fusion of disciplinary knowledge, interlacing an analysis from both functional and interpretive frameworks and applies these paradigms to three concepts—organisational identity, the balanced scorecard performance measurement system, and control. As an applied thesis, this study highlights how particular public sector organisations are using a range of multi-disciplinary forms of knowledge constructed for their needs to achieve practical outcomes. Practical evidence of this study is not bound by a single disciplinary field or the concerns raised by academics about the rigorous application of academic knowledge. The study’s value lies in its ability to explore how current communication and accounting knowledge is being used for practical purposes in organisational life. The main focus of this thesis is on identities in an organisational communication context. In exploring the theoretical and practical challenges, the research questions for this thesis were formulated as: 1. Is it possible to effectively control identities in organisations by the use of an integrated performance measurement system—the balanced scorecard—and if so, how? 2. What is the relationship between identities and an integrated performance measurement system—the balanced scorecard—in the identity construction process? Identities in the organisational context have been extensively discussed in graphic design, corporate communication and marketing, strategic management, organisational behaviour, and social psychology literatures. Corporate identity is the self-presentation of the personality of an organisation (Van Riel, 1995; Van Riel & Balmer, 1997), and organisational identity is the statement of central characteristics described by members (Albert & Whetten, 2003). In this study, identity management is positioned as a strategically complex task, embracing not only logo and name, but also multiple dimensions, levels and facets of organisational life. Responding to the collaborative efforts of researchers and practitioners in identity conceptualisation and methodological approaches, this dissertation argues that analysis can be achieved through the use of an integrated framework of identity products, patternings and processes (Cornelissen, Haslam, & Balmer, 2007), transforming conceptualisations of corporate identity, organisational identity and identification studies. Likewise, the performance measurement literature from the accounting field now emphasises the importance of ‘soft’ non-financial measures in gauging performance—potentially allowing the monitoring and regulation of ‘collective’ identities (Cornelissen et al., 2007). The balanced scorecard (BSC) (Kaplan & Norton, 1996a), as the selected integrated performance measurement system, quantifies organisational performance under the four perspectives of finance, customer, internal process, and learning and growth. Broadening the traditional performance measurement boundary, the BSC transforms how organisations perceive themselves (Vaivio, 2007). The rhetorical and communicative value of the BSC has also been emphasised in organisational self-understanding (Malina, Nørreklit, & Selto, 2007; Malmi, 2001; Norreklit, 2000, 2003). Thus, this study establishes a theoretical connection between the controlling effects of the BSC and organisational identity construction. Common to both literatures, the aspects of control became the focus of this dissertation, as ‘the exercise or act of achieving a goal’ (Tompkins & Cheney, 1985, p. 180). This study explores not only traditional technical and bureaucratic control (Edwards, 1981), but also concertive control (Tompkins & Cheney, 1985), shifting the locus of control to employees who make their own decisions towards desired organisational premises (Simon, 1976). The controlling effects on collective identities are explored through the lens of the rhetorical frames mobilised through the power of organisational enthymemes (Tompkins & Cheney, 1985) and identification processes (Ashforth, Harrison, & Corley, 2008). In operationalising the concept of control, two guiding questions were developed to support the research questions: 1.1 How does the use of the balanced scorecard monitor identities in public sector organisations? 1.2 How does the use of the balanced scorecard regulate identities in public sector organisations? This study adopts qualitative multiple case studies using ethnographic techniques. Data were gathered from interviews of 41 managers, organisational documents, and participant observation from 2003 to 2008, to inform an understanding of organisational practices and members’ perceptions in the five cases of two public sector organisations in Australia. Drawing on the functional and interpretive paradigms, the effective design and use of the systems, as well as the understanding of shared meanings of identities and identifications, are simultaneously recognised. The analytical structure guided by the ‘bracketing’ (Lewis & Grimes, 1999) and ‘interplay’ strategies (Schultz & Hatch, 1996) preserved, connected and contrasted the unique findings from the multi-paradigms. The ‘temporal bracketing’ strategy (Langley, 1999) from the process view supports the comparative exploration of the analysis over the periods under study. The findings suggest that the effective use of the BSC can monitor and regulate identity products, patternings and processes. In monitoring identities, the flexible BSC framework allowed the case study organisations to monitor various aspects of finance, customer, improvement and organisational capability that included identity dimensions. Such inclusion legitimises identity management as organisational performance. In regulating identities, the use of the BSC created a mechanism to form collective identities by articulating various perspectives and causal linkages, and through the cascading and alignment of multiple scorecards. The BSC—directly reflecting organisationally valued premises and legitimised symbols—acted as an identity product of communication, visual symbols and behavioural guidance. The selective promotion of the BSC measures filtered organisational focus to shape unique identity multiplicity and characteristics within the cases. Further, the use of the BSC facilitated the assimilation of multiple identities by controlling the direction and strength of identifications, engaging different groups of members. More specifically, the tight authority of the BSC framework and systems is explained by both technical and bureaucratic controls, while subtle communication of organisational premises and information filtering is achieved through concertive control. This study confirms that these macro top-down controls mediated the sensebreaking and sensegiving process of organisational identification, supporting research by Ashforth, Harrison and Corley (2008).
This study pays attention to members’ power of self-regulation, filling minor premises of the derived logic of their organisation through the playing out of organisational enthymemes (Tompkins & Cheney, 1985). Members are then encouraged to make their own decisions towards the organisational premises embedded in the BSC, through the micro bottom-up identification processes including: enacting organisationally valued identities; sensemaking; and the construction of identity narratives aligned with those organisationally valued premises. Within the process, the self-referential effect of communication encouraged members to believe the organisational messages embedded in the BSC in transforming collective and individual identities. Therefore, communication through the use of the BSC continued the self-producing of normative performance mechanisms, established meanings of identities, and enabled members’ self-regulation in identity construction. Further, this research establishes the relationship between identity and the use of the BSC in terms of identity multiplicity and attributes. The BSC framework constrained and enabled case study organisations and members to monitor and regulate identity multiplicity across a number of dimensions, levels and facets. The use of the BSC constantly heightened the identity attributes of distinctiveness, relativity, visibility, fluidity and manageability in identity construction over time. Overall, this research explains the reciprocal controlling relationships of multiple structures in organisations to achieve a goal. It bridges the gap among corporate and organisational identity theories by adopting Cornelissen, Haslam and Balmer’s (2007) integrated identity framework, and reduces the gap in understanding between identity and performance measurement studies. Parallel review of the process of monitoring and regulating identities from both literatures synthesised the theoretical strengths of both to conceptualise and operationalise identities. This study extends the discussion on positioning identity, culture, commitment, and image and reputation measures in integrated performance measurement systems as organisational capital. Further, this study applies understanding of the multiple forms of control (Edwards, 1979; Tompkins & Cheney, 1985), emphasising the power of organisational members in identification processes, using the notion of rhetorical organisational enthymemes. This highlights the value of the collaborative theoretical power of identity, communication and performance measurement frameworks. These case studies provide practical insights about the public sector where existing bureaucracy and desired organisational identity directions are competing within a large organisational setting. Further research on personal identity and simple control in organisations that fully cascade the BSC down to individual members would provide enriched data. The extended application of the conceptual framework to other public and private sector organisations with a longitudinal view will also contribute to further theory building.