958 results for Applied current


Relevance:

20.00%

Publisher:

Abstract:

Estimates of potential and actual C sequestration require areal information about various types of management activities. Forest surveys, land use data, and agricultural statistics contribute information enabling calculation of the impacts of current and historical land management on C sequestration in biomass (in forests) or in soil (in agricultural systems). Unfortunately, little information exists on the distribution of various management activities that can impact soil C content in grassland systems. Limited information of this type restricts our ability to carry out bottom-up estimates of the current C balance of grasslands or to assess the potential for grasslands to act as C sinks with changes in management. Here we review currently available information about grassland management, how that information could be related to information about the impacts of management on soil C stocks, information that may be available in the future, and needs that remain to be filled before in-depth assessments may be carried out. We also evaluate constraints induced by variability in information sources within and between countries. It is readily apparent that activity data for grassland management are collected less frequently and on a coarser scale than data for forest or agricultural inventories and that grassland activity data cannot be directly translated into IPCC-type factors as is done for IPCC inventories of agricultural soils. However, those management data that are available can serve to delineate broad-scale differences in management activities within regions in which soil C is likely to change in response to changes in management. This, coupled with the distinct possibility of more intensive surveys planned in the future, may enable more accurate assessments of grassland C dynamics with higher resolution both spatially and in the number of management activities.

Relevance:

20.00%

Publisher:

Abstract:

At least two important transportation planning activities rely on planning-level crash prediction models. One is motivated by the Transportation Equity Act for the 21st Century, which requires departments of transportation and metropolitan planning organizations to consider safety explicitly in the transportation planning process. The second could arise from a need for state agencies to establish incentive programs to reduce injuries and save lives. Both applications require a forecast of safety for a future period. Planning-level crash prediction models for the Tucson, Arizona, metropolitan region are presented to demonstrate the feasibility of such models. Data were separated into fatal, injury, and property-damage crashes. To accommodate overdispersion in the data, negative binomial regression models were applied. To accommodate the simultaneity of fatality and injury crash outcomes, simultaneous estimation of the models was conducted. All models produce crash forecasts at the traffic analysis zone level. Statistically significant (p-values < 0.05) and theoretically meaningful variables for the fatal crash model included population density, persons 17 years old or younger as a percentage of the total population, and intersection density. Significant variables for the injury and property-damage crash models were population density, number of employees, intersection density, percentage of miles of principal arterials, percentage of miles of minor arterials, and percentage of miles of urban collectors. Among several conclusions, it is suggested that planning-level safety models are feasible and may play a role in future planning activities. However, caution must be exercised with such models.
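
A minimal sketch of this kind of model is shown below: a negative binomial regression of zone-level crash counts on planning variables, fitted with the Python statsmodels package. The variable names and data are synthetic and purely illustrative, and the simultaneous estimation of fatality and injury models used in the paper is not reproduced.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical zone-level planning data (synthetic, for illustration only).
rng = np.random.default_rng(0)
n_zones = 200
df = pd.DataFrame({
    "pop_density": rng.gamma(2.0, 1500.0, n_zones),        # persons per sq. mile
    "intersection_density": rng.gamma(2.0, 10.0, n_zones),  # intersections per sq. mile
    "pct_minor_arterial": rng.uniform(0, 40, n_zones),      # % of zone road miles
})

# Synthetic overdispersed crash counts consistent with a negative binomial process.
mu = np.exp(-1.0 + 0.0001 * df["pop_density"] + 0.02 * df["intersection_density"])
df["injury_crashes"] = rng.negative_binomial(2, 2.0 / (2.0 + mu))

# Fit a negative binomial GLM of crash counts on the planning variables.
X = sm.add_constant(df[["pop_density", "intersection_density", "pct_minor_arterial"]])
fit = sm.GLM(df["injury_crashes"], X,
             family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(fit.summary())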

Relevance:

20.00%

Publisher:

Abstract:

This paper describes the formalization and application of a methodology to evaluate the safety benefit of countermeasures in the face of uncertainty. To illustrate the methodology, 18 countermeasures for improving safety of at-grade railroad crossings (AGRXs) in the Republic of Korea are considered. Akin to “stated preference” methods in travel survey research, the methodology applies random selection and laws of large numbers to derive accident modification factor (AMF) densities from expert opinions. In a full Bayesian analysis framework, the collective opinions in the form of AMF densities (data likelihood) are combined with prior knowledge (AMF density priors) for the 18 countermeasures to obtain ‘best’ estimates of AMFs (AMF posterior credible intervals). The countermeasures are then compared and recommended based on the largest safety returns with minimum risk (uncertainty). To the author's knowledge, the complete methodology is new and has not previously been applied or reported in the literature. The results demonstrate that the methodology is able to discern anticipated safety benefit differences across candidate countermeasures. For the 18 countermeasures considered in this analysis, it was found that the top three performing countermeasures for reducing crashes are in-vehicle warning systems, obstacle detection systems, and constant warning time systems.
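
The sketch below illustrates the basic mechanics of such an update for a single countermeasure: expert-elicited AMF values are treated as noisy observations of the log-AMF and combined with a prior via a conjugate normal update, yielding a posterior credible interval. The expert values, prior parameters, and elicitation noise are hypothetical, and this is a simplification of the paper's full Bayesian treatment.

import numpy as np

# Hypothetical AMFs elicited from five experts for one countermeasure.
expert_amfs = np.array([0.70, 0.85, 0.60, 0.75, 0.80])
y = np.log(expert_amfs)                 # work on the log scale

# Normal prior on the log-AMF, weakly informative around "no effect" (AMF = 1).
mu0, tau0 = 0.0, 1.0                    # prior mean and standard deviation
sigma = 0.3                             # assumed elicitation noise (std dev)

# Conjugate normal-normal update for the mean log-AMF.
n = len(y)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + y.sum() / sigma**2)

# 95% credible interval, transformed back to the AMF scale.
lo = post_mean - 1.96 * np.sqrt(post_var)
hi = post_mean + 1.96 * np.sqrt(post_var)
print(f"posterior mean AMF = {np.exp(post_mean):.2f}, "
      f"95% CrI = ({np.exp(lo):.2f}, {np.exp(hi):.2f})")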

Relevance:

20.00%

Publisher:

Abstract:

Fuzzy logic has been applied to control traffic at road junctions. A simple controller with one fixed rule-set is inadequate to minimise delays when traffic flow rate is time-varying and likely to span a wide range. To achieve better control, fuzzy rules adapted to the current traffic conditions are used.
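
To make the mechanics concrete, the following is a minimal, self-contained sketch of a fuzzy decision on how long to extend a green phase, with hypothetical membership functions and a four-rule base; it is illustrative only and is not the adaptive, multi-rule-set controller described above.

def ramp_up(x, a, b):
    # Membership rising linearly from 0 at a to 1 at b.
    return min(1.0, max(0.0, (x - a) / (b - a)))

def green_extension(queue_veh, arrivals_per_s):
    # Fuzzify the two inputs into "low"/"high" memberships.
    q_high = ramp_up(queue_veh, 5.0, 15.0)
    q_low = 1.0 - q_high
    a_high = ramp_up(arrivals_per_s, 0.3, 0.8)
    a_low = 1.0 - a_high

    # Rule base: (firing strength via min-AND, crisp extension in seconds).
    rules = [
        (min(q_low, a_low), 0.0),     # light traffic  -> terminate green
        (min(q_low, a_high), 5.0),
        (min(q_high, a_low), 8.0),
        (min(q_high, a_high), 15.0),  # heavy traffic  -> long extension
    ]
    # Weighted-average defuzzification.
    total = sum(w for w, _ in rules)
    return sum(w * ext for w, ext in rules) / total

print(green_extension(queue_veh=12, arrivals_per_s=0.7))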

Relevance:

20.00%

Publisher:

Abstract:

We present a novel modified theory based upon Rayleigh scattering of ultrasound from composite nanoparticles with a liquid core and solid shell. We derive closed-form solutions to the scattering cross-section and have applied this model to an ultrasound contrast agent consisting of a liquid-filled core (perfluorooctyl bromide, PFOB) encapsulated by a polymer shell (poly-caprolactone, PCL). Sensitivity analysis was performed to predict the dependence of the scattering cross-section upon material and dimensional parameters. A rapid increase in the scattering cross-section was achieved by increasing the compressibility of the core, validating the incorporation of high compressibility PFOB; the compressibility of the shell had little impact on the overall scattering cross-section although a more compressible shell is desirable. Changes in the density of the shell and the core result in predicted local minima in the scattering cross-section, approximately corresponding to the PFOB-PCL contrast agent considered; hence, incorporation of a lower shell density could potentially significantly improve the scattering cross-section. A 50% reduction in shell thickness relative to external radius increased the predicted scattering cross-section by 50%. Although it has often been considered that the shell has a negative effect on the echogenicity due to its low compressibility, we have shown that it can potentially play an important role in the echogenicity of the contrast agent. The challenge for the future is to identify suitable shell and core materials that meet the predicted characteristics in order to achieve optimal echogenicity.
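
As a point of reference for the kind of sensitivity sweep described above, the sketch below evaluates the classical Rayleigh (long-wavelength) scattering cross-section of a small homogeneous fluid sphere in water and sweeps the core compressibility. This textbook single-sphere formula is not the paper's core-shell model, and all parameter values are hypothetical.

import numpy as np

def rayleigh_cross_section(k, a, kappa_p, rho_p, kappa_0, rho_0):
    # Total scattering cross-section (m^2) of a small sphere of radius a (m),
    # compressibility kappa_p and density rho_p, in a host medium (kappa_0, rho_0),
    # for acoustic wavenumber k (rad/m); valid when k*a << 1.
    monopole = (kappa_p / kappa_0 - 1.0) ** 2
    dipole = 3.0 * ((rho_p - rho_0) / (2.0 * rho_p + rho_0)) ** 2
    return (4.0 * np.pi / 9.0) * k**4 * a**6 * (monopole + dipole)

# Host: water at a hypothetical 10 MHz operating point.
c0, rho_0 = 1480.0, 1000.0              # sound speed (m/s), density (kg/m^3)
kappa_0 = 1.0 / (rho_0 * c0**2)         # adiabatic compressibility (1/Pa)
k = 2.0 * np.pi * 10e6 / c0             # wavenumber at 10 MHz
a = 100e-9                              # 100 nm particle radius (hypothetical)

# Sweep the core compressibility relative to water to see the monopole term grow.
for ratio in (1.0, 2.0, 5.0, 10.0):
    sigma = rayleigh_cross_section(k, a, ratio * kappa_0, 1900.0, kappa_0, rho_0)
    print(f"kappa_core/kappa_water = {ratio:4.1f}  sigma = {sigma:.3e} m^2")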

Relevance:

20.00%

Publisher:

Abstract:

Cutaneous cholecalciferol synthesis has not been considered in making recommendations for vitamin D intake. Our objective was to model the effects of sun exposure, vitamin D intake, and skin reflectance (pigmentation) on serum 25-hydroxyvitamin D (25[OH]D) in young adults with a wide range of skin reflectance and sun exposure. Four cohorts of participants (n = 72 total) were studied for 7-8 wk in the fall, winter, spring, and summer in Davis, CA [38.5° N, 121.7° W, Elev. 49 ft (15 m)]. Skin reflectance was measured using a spectrophotometer, vitamin D intake using food records, and sun exposure using polysulfone dosimeter badges. A multiple regression model (R² = 0.55; P < 0.0001) was developed and used to predict the serum 25(OH)D concentration for participants with low [median for African ancestry (AA)] and high [median for European ancestry (EA)] skin reflectance and with low [20th percentile, ~20 min/d, ~18% body surface area (BSA) exposed] and high (80th percentile, ~90 min/d, ~35% BSA exposed) sun exposure, assuming an intake of 200 IU/d (5 µg/d). Predicted serum 25(OH)D concentrations for AA individuals with low and high sun exposure in the winter were 24 and 42 nmol/L and in the summer were 40 and 60 nmol/L. Corresponding values for EA individuals were 35 and 60 nmol/L in the winter and in the summer were 58 and 85 nmol/L. To achieve 25(OH)D ≥75 nmol/L, we estimate that EA individuals with high sun exposure need 1300 IU/d vitamin D intake in the winter and AA individuals with low sun exposure need 2100-3100 IU/d year-round.
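
A minimal sketch of the type of multiple regression used here is given below: serum 25(OH)D is regressed on vitamin D intake, sun exposure, and skin reflectance with statsmodels, and the fitted model is then used to predict a low-exposure scenario. The data, coefficients, and scenario values are synthetic and hypothetical; this is not the study's fitted model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 72
df = pd.DataFrame({
    "intake_iu": rng.uniform(100, 600, n),    # vitamin D intake, IU/d
    "sun_index": rng.uniform(2, 40, n),       # min/d weighted by fraction of BSA exposed
    "reflectance": rng.uniform(20, 70, n),    # % skin reflectance
})
# Synthetic response, for illustration only.
df["serum_25ohd"] = (10 + 0.03 * df["intake_iu"] + 0.8 * df["sun_index"]
                     + 0.3 * df["reflectance"] + rng.normal(0, 8, n))

fit = smf.ols("serum_25ohd ~ intake_iu + sun_index + reflectance", data=df).fit()
print(f"R-squared = {fit.rsquared:.2f}")

# Predict serum 25(OH)D (nmol/L) for a hypothetical low-reflectance, low-sun scenario.
scenario = pd.DataFrame({"intake_iu": [200], "sun_index": [3.6], "reflectance": [25]})
print(fit.predict(scenario))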

Relevance:

20.00%

Publisher:

Abstract:

A novel model for the potentiostatic discharge of primary alkaline battery cathodes is presented. The model is used to simulate discharges resulting from the stepped potential electrochemical spectroscopy (SPECS) of primary alkaline battery cathodes, and the results are validated with experimental data. We show that a model based on a single (or mean) reaction framework can be used to simulate multi-reaction discharge behaviour and we develop a consistent functional modification to the kinetic equation of the model that allows for this to occur. The model is used to investigate the effects that the initial exchange current density, i00, and the diffusion coefficient for protons in electrolytic manganese dioxide (EMD), DH+, have on SPECS discharge. The behaviour observed is consistent with the idea that individual reduction reactions, within the multi-reaction reduction behaviour of EMD, have distinct i00 and DH+ values.
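
For orientation, the sketch below evaluates two textbook relations that show qualitatively how an exchange current density and a solid-state proton diffusion coefficient shape the current response to a potential step: the Butler-Volmer kinetic equation and the Cottrell diffusion-limited decay. The parameter values are hypothetical and this is not the paper's single-reaction SPECS model.

import numpy as np

F, R, T = 96485.0, 8.314, 298.15   # Faraday constant, gas constant, temperature (K)
f = F / (R * T)
alpha = 0.5                        # symmetry factor (assumed)

def butler_volmer(i0, eta):
    # Kinetic current density for overpotential eta (V) and exchange current density i0.
    return i0 * (np.exp(alpha * f * eta) - np.exp(-(1.0 - alpha) * f * eta))

def cottrell(n_e, c, D, t):
    # Diffusion-limited current density after a potential step (semi-infinite diffusion).
    return n_e * F * c * np.sqrt(D / (np.pi * t))

eta = -0.05                          # 50 mV cathodic step (hypothetical)
t = np.array([1.0, 10.0, 100.0])     # time after the step, s
for i0 in (1e-4, 1e-3):              # exchange current density, A/m^2 (hypothetical)
    print(f"i0 = {i0:.0e} A/m^2 -> kinetic current = {butler_volmer(i0, eta):.3e} A/m^2")
for D in (1e-13, 1e-12):             # proton diffusivity, m^2/s (hypothetical)
    print(f"D = {D:.0e} m^2/s -> Cottrell current at t = 1, 10, 100 s:",
          cottrell(1, 3.0e4, D, t))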

Relevance:

20.00%

Publisher:

Abstract:

Visualisation provides a method to efficiently convey and understand the complex nature and processes of groundwater systems. This technique has been applied to the Lockyer Valley to aid in comprehending the current condition of the system. The Lockyer Valley in southeast Queensland hosts intensive irrigated agriculture sourcing groundwater from alluvial aquifers. The valley is around 3000 km² in area and the alluvial deposits are typically 1-3 km wide and up to 20-35 m deep in the main channels, reducing in size in subcatchments. The configuration of the alluvium is a series of elongate “fingers”. In this roughly circular valley, recharge to the alluvial aquifers is largely from seasonal storm events on the surrounding ranges. The ranges are overlain by basaltic aquifers of Tertiary age, which overall are quite transmissive. Both runoff from these ranges and infiltration into the basalts provide ephemeral flow to the streams of the valley. Throughout the valley there are over 5,000 bores extracting alluvial groundwater, plus lesser numbers extracting from underlying sandstone bedrock. Although there are approximately 2500 monitoring bores, the only regularly monitored area is the formally declared management zone in the lower one-third. This zone has a calibrated Modflow model (Durick and Bleakly, 2000); a broader valley Modflow model was developed in 2002 (KBR), but did not have extensive extraction data for detailed calibration. Another Modflow model focused on a central area river confluence (Wilson, 2005) with some local production data and pumping test results. A recent subcatchment simulation model incorporates a network of bores with short-period automated hydrographic measurements (Dvoracek and Cox, 2008). The above simulation models were all based on conceptual hydrogeological models of differing scale and detail.

Relevance:

20.00%

Publisher:

Abstract:

Background/objectives The provision of the patient bed-bath is a fundamental nursing care activity yet few quantitative data and no qualitative data are available on registered nurses’ (RNs) clinical practice in this domain in the intensive care unit (ICU). The aim of this study was to describe ICU RNs’ current practice with respect to the timing, frequency and duration of the patient bed-bath and the cleansing and emollient agents used.

Methods The study utilised a two-phase sequential explanatory mixed method design. Phase one used a questionnaire to survey RNs and phase two employed semi-structured focus group (FG) interviews with RNs. Data were collected over 28 days across four Australian metropolitan ICUs. Ethical approval was granted by the relevant hospital and university human research ethics committees. RNs were asked to complete a questionnaire following each episode of care (i.e. bed-bath) and then to attend one of three FG interviews: RNs with less than 2 years ICU experience; RNs with 2–5 years ICU experience; and RNs with greater than 5 years ICU experience.

Results During the 28-day study period the four ICUs had 77.25 beds open. In phase one a total of 539 questionnaires were returned, representing 30.5% of episodes of patient bed-baths (based on 1767 bed occupancy and one bed-bath per patient per day). In 349 bed-bath episodes 54.7% of patients were mechanically ventilated. The bed-bath was given between 02.00 and 06.00 h in 161 episodes (30%), took 15–30 min to complete (n = 195, 36.2%) and was completed within the last 8 h in 304 episodes (56.8%). Cleansing agents used were predominantly pH balanced soap or liquid soap and water (n = 379, 71%) in comparison to chlorhexidine impregnated sponges/cloths (n = 86, 16.1%) or other agents such as pre-packaged washcloths (n = 65, 12.2%). In 347 episodes (64.4%) emollients were not applied after the bed-bath. In phase two 12 FGs were conducted (three FGs at each ICU) with a total of 42 RN participants. Thematic analysis of FG transcripts across the three levels of RN ICU experience highlighted a transition in patient hygiene practice philosophy, from ‘shades of grey – falling in line’ for inexperienced clinicians to the concrete beliefs of experienced clinicians about patient bed-bath needs.

Conclusions This study identified variation in the processes and products used in patient hygiene practices in four ICUs. Further study to improve patient outcomes is required to determine the appropriate timing of patient hygiene activities and cleansing agents used to improve skin integrity.

Relevance:

20.00%

Publisher:

Abstract:

Purpose: The purpose of this paper is to expose the impact of the shortage of senior academics, particularly professors, in Australian accounting schools, to relate the way one school addressed this shortage through a mentoring scheme, and to challenge existing institutional arrangements.

Design/methodology/approach: This is a contextualised qualitative case study of a mentoring scheme conducted in an Australian accounting school. Data collected from semi-structured interviews, personal reflections and from Australian university web sites are interpreted theoretically using the metaphor of a “green drought”.

Findings: The mentoring scheme achieved some notable successes, but raised many issues and challenges. Mentoring is a multifaceted investment in vocational endeavour and intellectual infrastructure, which will not occur unless creative means are developed over the long term to overcome current and future shortages of academic mentors.

Research limitations/implications: This is a qualitative case study, which, therefore, limits its generalisability. However, its contextualisation enables insights to be applied to the wider academic environment.

Practical implications: In the Australian and global academic environment, as accounting professors retire in greater numbers, new and creative ways of mentoring will need to be devised. The challenge will be to address longer term issues of academic sustainability, and not just to focus on short-term academic outcomes.

Originality/value: A mentoring scheme based on a collegial networking model of mentoring is presented as a means of enhancing academic endeavour through a creative short-term solution to a shortage of accounting professors. The paper exemplifies the theorising power of metaphor in a qualitative study.

Relevance:

20.00%

Publisher:

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium; either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal, as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes and a CCD camera can be used to monitor the readout beam, and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal, lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is by using thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could find application in image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible since the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with the stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm, for accurate and reliable pattern storage.
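
As a loose illustration of the numerical approach mentioned above, the sketch below propagates a 2-D scalar paraxial beam through a medium with a fixed, weak refractive-index perturbation using a Crank-Nicolson finite-difference beam propagation step. The wavelength, grid, and index profile are hypothetical, and the sketch does not include the photorefractive response or any other part of the thesis model.

import numpy as np
from scipy.linalg import solve_banded

# Hypothetical parameters (illustrative only).
wavelength = 633e-9                 # readout wavelength, m
n0 = 2.2                            # background refractive index (roughly LiNbO3)
k = 2 * np.pi * n0 / wavelength     # propagation constant in the medium
k0 = 2 * np.pi / wavelength         # free-space wavenumber

nx, dx = 512, 0.5e-6                # transverse grid points and spacing (m)
nz, dz = 400, 2e-6                  # propagation steps and step size (m)
x = (np.arange(nx) - nx / 2) * dx

dn = 1e-4 * np.exp(-(x / 20e-6) ** 2)            # weak induced index change (hypothetical)
u = np.exp(-(x / 30e-6) ** 2).astype(complex)    # input Gaussian field envelope

# Paraxial equation:  du/dz = (i/2k) d2u/dx2 + i k0 dn(x) u.
diag = (1j / (2 * k)) * (-2.0 / dx**2) + 1j * k0 * dn
off = (1j / (2 * k)) * (1.0 / dx**2)

# Crank-Nicolson step: (I - dz/2 L) u_new = (I + dz/2 L) u_old, with L tridiagonal.
ab = np.zeros((3, nx), dtype=complex)            # banded form of (I - dz/2 L)
ab[0, 1:] = -0.5 * dz * off                      # superdiagonal
ab[1, :] = 1.0 - 0.5 * dz * diag                 # main diagonal
ab[2, :-1] = -0.5 * dz * off                     # subdiagonal

for _ in range(nz):
    rhs = (1.0 + 0.5 * dz * diag) * u
    rhs[1:] += 0.5 * dz * off * u[:-1]
    rhs[:-1] += 0.5 * dz * off * u[1:]
    u = solve_banded((1, 1), ab, rhs)

width = dx * np.count_nonzero(np.abs(u) > np.abs(u).max() / np.e)
print(f"1/e beam width after {nz * dz * 1e3:.1f} mm: {width * 1e6:.1f} um")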

Relevance:

20.00%

Publisher:

Abstract:

A range of interventions are being implemented in Australia to apprehend and deter drug driving behaviour, in particular the recent implementation of random roadside drug testing procedures in Queensland. Given this countermeasure has a strong deterrence foundation, it is of interest to determine whether deterrence-based perceptual factors are influencing this offending behaviour or whether self-reported drug driving is heavily dependent upon illicit substance consumption levels and past offending behaviour. This study involves a sample of Queensland motorists (N = 898) who completed a self-report questionnaire that collected a range of information, including drug driving and drug consumption practices, conviction history, and perceptual deterrence factors. The aim was to examine what factors influence current drug driving behaviours. Analysis of the collected data revealed that approximately 20% of participants reported drug driving at least once in the last six months. Overall, there was considerable variability in the respondents' perceptions regarding the certainty, severity and swiftness of legal sanctions, although the largest proportion of the sample did not consider such sanctions to be certain, severe or swift. In regard to predicting those who intended to drug drive again in the future, a combination of perceptual and behavioural factors was associated with such intentions. However, a closer examination revealed that behaviours, rather than perceptions, proved to have a greater level of influence on the current sample's future intentions to offend. This paper further outlines the major findings of the study and highlights that multi-modal interventions are most likely required to reduce the prevalence of drug driving on public roads.

Relevance:

20.00%

Publisher:

Abstract:

Becoming a Teacher is structured in five very readable sections. The introductory section addresses the nature of teaching and the importance of developing a sense of purpose for teaching in a 21st century classroom. It also introduces some key concepts that are explored throughout the volume according to the particular chapter focus of each part. For example, the chapters in Part 2 explore aspects of student learning and the learning environment and focus on how students develop and learn, learner motivation, developing self-esteem and learning environments. The concepts developed in this section, such as human development, stages of learning, motivation, and self-concept, are contextualised in terms of theories of cognitive development and theories of social, emotional and moral development. The author, Colin Marsh, draws on his extensive experience as an educator to structure the narrative of chapters in this part via checklists for observation, summary tables, sample strategies for teaching at specific stages of student development, and questions under the heading ‘your turn’. Case studies such as ‘How I use Piaget in my teaching’ make that essential link between theory and practice, something which pre-service teachers struggle with in the early phases of their university course. I was pleased to see that Marsh also explores the contentious and debated aspects of these theoretical frameworks to demonstrate that pre-service teachers must engage with and critique the ways in which theories about teaching and learning are applied. Marsh weaves key quotations and important references into each chapter’s narrative and concludes every chapter with summary comments, reflection activities, lists of important references and useful web sources. As one would expect of a book published in 2008, Becoming a Teacher is informed by the most recent reports of classroom practice, current policy initiatives and research.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, two different high bandwidth converter control strategies are discussed. One of the strategies is for voltage control and the other is for current control. The converter, in each of the cases, is equipped with an output passive filter. For the voltage controller the converter is equipped with an output LC filter, while for the current controller it has an output LCL filter. An important aspect discussed in the paper is how to avoid the computation of unnecessary references by using high-pass filters in the feedback loop. The stability of the overall system, including the high-pass filters, has been analyzed. The choice of filter parameters is crucial for achieving desirable system performance. In this paper, the bandwidth of achievable performance is presented through frequency-response (Bode) plots of the system gains. It has been illustrated that the proposed controllers are capable of tracking fundamental frequency components along with low-order harmonic components. Extensive simulation results are presented to validate the control concepts presented in the paper.
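
As a small companion to the frequency-response discussion, the sketch below computes the Bode response of an undamped LCL output filter from the inverter voltage to the grid-side current, assuming an ideal (zero-impedance) grid; the component values are hypothetical and not taken from the paper.

import numpy as np
from scipy import signal

L1, L2, C = 2e-3, 1e-3, 10e-6        # inverter-side L, grid-side L, filter C (hypothetical)

# i2(s)/vi(s) = 1 / (L1*L2*C*s^3 + (L1 + L2)*s) for an ideal grid.
sys = signal.TransferFunction([1.0], [L1 * L2 * C, 0.0, L1 + L2, 0.0])

f = np.logspace(1, 4, 500)           # 10 Hz to 10 kHz
w, mag_db, phase_deg = signal.bode(sys, w=2 * np.pi * f)

f_res = np.sqrt((L1 + L2) / (L1 * L2 * C)) / (2 * np.pi)
print(f"LCL resonance frequency ~ {f_res:.0f} Hz")
print(f"gain at 50 Hz ~ {mag_db[np.argmin(np.abs(f - 50))]:.1f} dB")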

Relevance:

20.00%

Publisher:

Abstract:

This thesis employs the theoretical fusion of disciplinary knowledge, interlacing an analysis from both functional and interpretive frameworks and applies these paradigms to three concepts—organisational identity, the balanced scorecard performance measurement system, and control. As an applied thesis, this study highlights how particular public sector organisations are using a range of multi-disciplinary forms of knowledge constructed for their needs to achieve practical outcomes. Practical evidence of this study is not bound by a single disciplinary field or the concerns raised by academics about the rigorous application of academic knowledge. The study’s value lies in its ability to explore how current communication and accounting knowledge is being used for practical purposes in organisational life. The main focus of this thesis is on identities in an organisational communication context. In exploring the theoretical and practical challenges, the research questions for this thesis were formulated as: 1. Is it possible to effectively control identities in organisations by the use of an integrated performance measurement system—the balanced scorecard—and if so, how? 2. What is the relationship between identities and an integrated performance measurement system—the balanced scorecard—in the identity construction process? Identities in the organisational context have been extensively discussed in graphic design, corporate communication and marketing, strategic management, organisational behaviour, and social psychology literatures. Corporate identity is the self-presentation of the personality of an organisation (Van Riel, 1995; Van Riel & Balmer, 1997), and organisational identity is the statement of central characteristics described by members (Albert & Whetten, 2003). In this study, identity management is positioned as a strategically complex task, embracing not only logo and name, but also multiple dimensions, levels and facets of organisational life. Responding to the collaborative efforts of researchers and practitioners in identity conceptualisation and methodological approaches, this dissertation argues that analysis can be achieved through the use of an integrated framework of identity products, patternings and processes (Cornelissen, Haslam, & Balmer, 2007), transforming conceptualisations of corporate identity, organisational identity and identification studies. Likewise, the performance measurement literature from the accounting field now emphasises the importance of ‘soft’ non-financial measures in gauging performance—potentially allowing the monitoring and regulation of ‘collective’ identities (Cornelissen et al., 2007). The balanced scorecard (BSC) (Kaplan & Norton, 1996a), as the selected integrated performance measurement system, quantifies organisational performance under the four perspectives of finance, customer, internal process, and learning and growth. Broadening the traditional performance measurement boundary, the BSC transforms how organisations perceived themselves (Vaivio, 2007). The rhetorical and communicative value of the BSC has also been emphasised in organisational self-understanding (Malina, Nørreklit, & Selto, 2007; Malmi, 2001; Norreklit, 2000, 2003). Thus, this study establishes a theoretical connection between the controlling effects of the BSC and organisational identity construction. Common to both literatures, the aspects of control became the focus of this dissertation, as ‘the exercise or act of achieving a goal’ (Tompkins & Cheney, 1985, p. 
180). This study explores not only traditional technical and bureaucratic control (Edwards, 1981), but also concertive control (Tompkins & Cheney, 1985), shifting the locus of control to employees who make their own decisions towards desired organisational premises (Simon, 1976). The controlling effects on collective identities are explored through the lens of the rhetorical frames mobilised through the power of organisational enthymemes (Tompkins & Cheney, 1985) and identification processes (Ashforth, Harrison, & Corley, 2008). In operationalising the concept of control, two guiding questions were developed to support the research questions: 1.1 How does the use of the balanced scorecard monitor identities in public sector organisations? 1.2 How does the use of the balanced scorecard regulate identities in public sector organisations? This study adopts qualitative multiple case studies using ethnographic techniques. Data were gathered from interviews of 41 managers, organisational documents, and participant observation from 2003 to 2008, to inform an understanding of organisational practices and members’ perceptions in the five cases of two public sector organisations in Australia. Drawing on the functional and interpretive paradigms, the effective design and use of the systems, as well as the understanding of shared meanings of identities and identifications are simultaneously recognised. The analytical structure guided by the ‘bracketing’ (Lewis & Grimes, 1999) and ‘interplay’ strategies (Schultz & Hatch, 1996) preserved, connected and contrasted the unique findings from the multi-paradigms. The ‘temporal bracketing’ strategy (Langley, 1999) from the process view supports the comparative exploration of the analysis over the periods under study. The findings suggest that the effective use of the BSC can monitor and regulate identity products, patternings and processes. In monitoring identities, the flexible BSC framework allowed the case study organisations to monitor various aspects of finance, customer, improvement and organisational capability that included identity dimensions. Such inclusion legitimises identity management as organisational performance. In regulating identities, the use of the BSC created a mechanism to form collective identities by articulating various perspectives and causal linkages, and through the cascading and alignment of multiple scorecards. The BSC—directly reflecting organisationally valued premises and legitimised symbols—acted as an identity product of communication, visual symbols and behavioural guidance. The selective promotion of the BSC measures filtered organisational focus to shape unique identity multiplicity and characteristics within the cases. Further, the use of the BSC facilitated the assimilation of multiple identities by controlling the direction and strength of identifications, engaging different groups of members. More specifically, the tight authority of the BSC framework and systems are explained both by technical and bureaucratic controls, while subtle communication of organisational premises and information filtering is achieved through concertive control. This study confirms that these macro top-down controls mediated the sensebreaking and sensegiving process of organisational identification, supporting research by Ashforth, Harrison and Corley (2008). 
This study pays attention to members’ power of self-regulation, filling minor premises of the derived logic of their organisation through the playing out of organisational enthymemes (Tompkins & Cheney, 1985). Members are then encouraged to make their own decisions towards the organisational premises embedded in the BSC, through the micro bottom-up identification processes including: enacting organisationally valued identities; sensemaking; and the construction of identity narratives aligned with those organisationally valued premises. Within the process, the self-referential effect of communication encouraged members to believe the organisational messages embedded in the BSC in transforming collective and individual identities. Therefore, communication through the use of the BSC continued the self-producing of normative performance mechanisms, established meanings of identities, and enabled members’ self-regulation in identity construction. Further, this research establishes the relationship between identity and the use of the BSC in terms of identity multiplicity and attributes. The BSC framework constrained and enabled case study organisations and members to monitor and regulate identity multiplicity across a number of dimensions, levels and facets. The use of the BSC constantly heightened the identity attributes of distinctiveness, relativity, visibility, fluidity and manageability in identity construction over time. Overall, this research explains the reciprocal controlling relationships of multiple structures in organisations to achieve a goal. It bridges the gap among corporate and organisational identity theories by adopting Cornelissen, Haslam and Balmer’s (2007) integrated identity framework, and reduces the gap in understanding between identity and performance measurement studies. Parallel review of the process of monitoring and regulating identities from both literatures synthesised the theoretical strengths of both to conceptualise and operationalise identities. This study extends the discussion on positioning identity, culture, commitment, and image and reputation measures in integrated performance measurement systems as organisational capital. Further, this study applies understanding of the multiple forms of control (Edwards, 1979; Tompkins & Cheney, 1985), emphasising the power of organisational members in identification processes, using the notion of rhetorical organisational enthymemes. This highlights the value of the collaborative theoretical power of identity, communication and performance measurement frameworks. These case studies provide practical insights about the public sector where existing bureaucracy and desired organisational identity directions are competing within a large organisational setting. Further research on personal identity and simple control in organisations that fully cascade the BSC down to individual members would provide enriched data. The extended application of the conceptual framework to other public and private sector organisations with a longitudinal view will also contribute to further theory building.