Abstract:
Assurance of learning is a predominant feature of both quality enhancement and quality assurance in higher education. It is a process that articulates explicit program outcomes and standards, and systematically gathers evidence to determine the extent to which performance matches expectations. Benefits accrue to the institution through the systematic assessment of whole-of-program goals: data may be used for continuous improvement, for program development, and to inform external accreditation and evaluation bodies. Recent developments, including the introduction of the Tertiary Education and Quality Standards Agency (TEQSA), will require universities to review the methods they use to assure learning outcomes. This project investigates two critical elements of assurance of learning: 1. the mapping of graduate attributes throughout a program; and 2. the collection of assurance of learning data. An audit was conducted with 25 of the 39 Business Schools in Australian universities to identify current methods of mapping graduate attributes and of collecting assurance of learning data across degree programs, and to review the key challenges faced in these areas. Our findings indicate that external drivers such as professional body accreditation (for example, by the Association to Advance Collegiate Schools of Business (AACSB)) and TEQSA are important motivators for assuring learning, and that schools undertaking AACSB accreditation had more robust assurance of learning systems in place. It was reassuring to see that the majority of institutions (96%) had adopted an embedding approach to assuring learning rather than opting for independent standardised testing. The main challenges evident were developing sustainable processes that are not seen as a burden by academic staff, and obtaining academic buy-in to the benefits of assuring learning per se, rather than assurance of learning being treated as a tick-box exercise. This cultural change is the real challenge in assurance of learning practice.
Abstract:
Boundaries are an important field of study because they mediate almost every aspect of organizational life. They are becoming increasingly important as organizations change more frequently, and yet, despite the endemic use of the boundary metaphor in common organizational parlance, they are poorly understood. Organizational boundaries are under-theorized, and researchers in related fields often simply assume their existence without defining them. The literature on organizational boundaries is fragmented, with no unifying theoretical basis. As a result, when an organizational boundary is recognized as "dysfunctional", there is little recourse to models on which to base remediating action. This research sets out to develop just such a theoretical model and is guided by the general question: "What is the nature of organizational boundaries?" It is argued that organizational boundaries can be conceptualised through elements of both social structure and social process. Elements of structure include objects, coupling, properties and identity. Social processes include objectification, identification, interaction and emergence. All of these elements are integrated by a core category, or basic social process, called boundary weaving. An organizational boundary is a complex system of objects and emergent properties that are woven together by people as they interact, objectifying the world around them, identifying with these objects and creating couplings of varying strength and polarity, as well as their own fragmented identity. Organizational boundaries are characterised by a multiplicity of interconnections, a particular domain of objects, varying levels of embodiment and patterns of interaction. The theory developed in this research emerged from an exploratory, qualitative research design employing grounded theory methodology. The field data were collected from the training headquarters of the New Zealand Army using semi-structured interviews and follow-up observations. The unit of analysis is an organizational boundary. Only one research context was used because of the richness and multiplicity of organizational boundaries present within it. The model arose, grounded in the data collected, through a process of theoretical memoing and constant comparative analysis. Academic literature was used as a source of data to aid theory development and the saturation of some central categories. The final theory is classified as middle range, being substantive rather than formal, and is generalizable across medium to large organizations in low-context societies. The main limitation of the research arose from its breadth, with multiple lines of inquiry spanning several academic disciplines, and with some relevant areas, such as the role of identity and complexity, addressed at a necessarily high level. The organizational boundary theory developed by this research replaces the typology approaches typical of previous theory on organizational boundaries and reconceptualises the nature of groups in organizations as well as the role of "boundary spanners". It also has implications for any theory that relies on the concept of boundaries, such as general systems theory. The main contribution of this research is the development of a holistic model of organizational boundaries, including an explanation of the multiplicity of boundaries: no organization has a single definable boundary.
A significant aspect of this contribution is the integration of aspects of complexity theory and identity theory to explain the emergence of higher-order properties of organizational boundaries and of organizational identity. The core category of "boundary weaving" is a powerful new metaphor that significantly reconceptualises the way organizational boundaries may be understood in organizations. It invokes secondary metaphors, such as the weaving of an organization's "boundary fabric", and provides managers with other metaphorical perspectives, such as the management of boundary friction, boundary tension, boundary permeability and boundary stability. Opportunities for future research reside in formalising and testing the theory, as well as in developing analytical tools that would enable managers in organizations to apply the theory in practice.
Abstract:
Influenza is a widespread disease occurring in seasonal epidemics, and each year is responsible for up to 500,000 deaths worldwide. Influenza can develop into strains which cause severe symptoms and high mortality rates, and could potentially reach pandemic status if the virus’s properties allow easy transmission. Influenza is transmissible via contact with the virus, either directly (infected people) or indirectly (contaminated objects); via reception of large droplets over short distances (one metre or less); or through inhalation of aerosols containing the virus expelled by infected individuals during respiratory activities, which can remain suspended in the air and travel distances of more than one metre (the aerosol route). Aerosol transmission of viruses involves three stages: production of the droplets containing viruses; transport of the droplets, during which the virus must remain intact and infectious; and reception of the droplets (via inhalation). Our understanding of the transmission of influenza viruses via the aerosol route is poor, and thus our ability to prevent a widespread outbreak is limited. This study explored the fate of viruses in droplets by investigating the effects of some physical factors on the recovery of both a bacteriophage model and influenza virus. Experiments simulating respiratory droplets were carried out using different types of droplets, generated from a commonly used water-like matrix and also from an ‘artificial mucous’ matrix chosen to more closely resemble respiratory fluids. To detect viruses in droplets, we used traditional plaque assay techniques, as well as a sensitive, quantitative PCR assay specifically developed for this study. Our results showed that the artificial mucous suspension enhanced the recovery of infectious bacteriophage. We were able to establish detection limits for infectious bacteriophage (no bacteriophage was detected by the plaque assay when aerosolised from a suspension of 10³ PFU/mL, for three of the four droplet types tested), and to show that bacteriophage could remain infectious in suspended droplets for up to 20 minutes. We also showed that the nested real-time PCR assay was able to detect the presence of bacteriophage RNA where the plaque assay could not detect any intact particles. Finally, applying knowledge from the bacteriophage experiments, we reported the quantitative recoveries of influenza viruses in droplets, which were more consistent and stable than we had anticipated. Influenza viruses could be detected in suspended aerosols for up to 20 minutes after aerosolisation, and possibly beyond, and were detectable even when nebulised from suspensions with relatively low virus concentrations.
Abstract:
In the global knowledge economy, cities produce various development strategies to attract and retain knowledge-intensive industries and workers. Such strategising is an important development mechanism for cities seeking to complete their transformation into knowledge cities. This paper discusses the critical connections between knowledge city foundations and integrated knowledge-based urban development strategies, and scrutinises Brisbane’s strategies for attracting and retaining investment and talent. The paper introduces a knowledge-based urban development assessment framework and uses it to provide a clearer understanding of Brisbane’s knowledge-based development processes and its knowledge city transformation experience. The assessment framework focuses in particular on examining Brisbane’s four development processes (institutional, economic, socio-cultural and urban) in detail. The findings reveal that, although Brisbane is still in the early stages of its transformation into a fully-fledged knowledge city, its global orientation and its achievements in strategising knowledge-based urban development are noteworthy.
Abstract:
Six Sigma has proven itself as a major quality initiative over the last two decades. It is a philosophy that provides a systematic approach to applying numerous tools within the framework of several quality improvement methodologies. The most widely used Six Sigma methodology is DMAIC, which is best suited to improving existing processes. In order to build quality into the product or service, a proactive approach such as Design for Six Sigma (DFSS) is required. This paper provides an overview of DFSS, product innovation, and service innovation. The emphasis is on comparing how DFSS is applied differently in product and service innovation. This paper contributes by analysing the existing literature on DFSS in product and service innovation. The major finding is that the DFSS approach in services and products can be differentiated along three dimensions: methodology, characteristics, and technology.
Abstract:
Modelling an environmental process involves creating a model structure and parameterising the model with appropriate values to accurately represent the process. Determining accurate parameter values for environmental systems can be challenging. Existing methods for parameter estimation typically make assumptions regarding the form of the likelihood, and often ignore any uncertainty around estimated values. This can be problematic, particularly in complex problems where likelihoods may be intractable. In this paper we demonstrate an Approximate Bayesian Computation (ABC) method for estimating the parameters of a stochastic cellular automaton (CA). As an example we use a CA constructed to simulate a range expansion such as might occur after a biological invasion, making parameter estimates using only count data such as could be gathered from field observations. We demonstrate that ABC is a highly useful method for parameter estimation, giving accurate estimates of parameters that are important for the management of invasive species, such as the intrinsic rate of increase and the point in a landscape where a species has invaded. We also show that the method is capable of estimating the probability of long-distance dispersal, a characteristic of biological invasions that is very influential in determining spread rates but that has until now proved difficult to estimate accurately.
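The abstract gives no implementation detail; as a rough, hypothetical sketch of the rejection-sampling form of ABC it describes, the following assumes a toy simulate() stand-in for the stochastic CA, uniform priors on the two named parameters, and a Euclidean distance on the count data. All of these are illustrative choices, not the paper's.

```python
import numpy as np

def simulate(r, p_ldd, n_steps=50, rng=None):
    """Toy stand-in for the stochastic CA: returns occupied-cell counts.

    A real implementation would simulate spread on a lattice; here,
    noisy logistic growth is used purely so the sketch runs end to end.
    """
    rng = rng if rng is not None else np.random.default_rng()
    counts, n = [], 10.0
    for _ in range(n_steps):
        n += r * n * (1 - n / 1000.0)                    # growth
        n = max(n + rng.normal(0, 5 + 50 * p_ldd), 0.0)  # dispersal noise
        counts.append(n)
    return np.array(counts)

def abc_rejection(observed, n_draws=5000, tol=200.0, seed=0):
    """Rejection ABC: keep parameter draws whose simulated counts are
    close to the observed counts; accepted draws approximate the posterior."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        r = rng.uniform(0.0, 1.0)      # intrinsic rate of increase (uniform prior)
        p_ldd = rng.uniform(0.0, 0.1)  # long-distance dispersal probability
        sim = simulate(r, p_ldd, n_steps=len(observed), rng=rng)
        if np.linalg.norm(sim - observed) < tol:   # distance on count data
            accepted.append((r, p_ldd))
    return np.array(accepted)

# Pretend the field counts came from r = 0.4, p_ldd = 0.02.
obs = simulate(0.4, 0.02, rng=np.random.default_rng(42))
posterior = abc_rejection(obs)
print(len(posterior), "accepted draws")
if len(posterior):
    print("posterior means (r, p_ldd):", posterior.mean(axis=0))
```

Tightening the tolerance trades fewer accepted draws for a closer approximation to the true posterior, which is the basic tuning decision in any rejection-ABC scheme.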
Abstract:
Muscle physiologists often describe fatigue simply as a decline of muscle force and infer that this causes an athlete to slow down. In contrast, exercise scientists describe fatigue during sport competition more holistically, as an exercise-induced impairment of performance. The aim of this review is to reconcile the different views by evaluating the many performance symptoms/measures and mechanisms of fatigue. We describe how fatigue is assessed with muscle, exercise or competition performance measures. Muscle performance (single muscle test measures) declines due to peripheral fatigue (reduced muscle cell force) and/or central fatigue (reduced motor drive from the CNS). Peak muscle force seldom falls by >30% during sport, but the decline is often greater during electrical stimulation and laboratory exercise tasks. Exercise performance (whole-body exercise test measures) reveals impaired physical/technical abilities and subjective fatigue sensations. Exercise intensity is initially sustained by recruitment of new motor units and help from synergistic muscles before it declines. Technique/motor skill execution deviates as exercise proceeds, maintaining outcomes before they deteriorate, e.g. reduced accuracy or velocity. The sensation of fatigue incorporates an elevated rating of perceived exertion (RPE) during submaximal tasks, due to a combination of peripheral and higher CNS inputs. Competition performance (sport symptoms) is affected more by decision-making and psychological aspects, since opponents are present and the result carries greater importance. Laboratory-based decision making is generally faster or unimpaired. Motivation, self-efficacy and anxiety can change during exercise to modify RPE and, hence, alter physical performance. Symptoms of fatigue during racing, team-game or racquet sports are largely anecdotal, but sometimes assessed with time-motion analysis. Fatigue during brief all-out racing is described biomechanically as a decline of peak velocity, along with altered kinematic components. Longer sport events involve pacing strategies, central and peripheral fatigue contributions and elevated RPE. During match play, the work rate can decline late in a match (or tournament) and/or transiently after intense exercise bursts. Repeated sprint ability, agility and leg strength become slightly impaired. Technique outcomes, such as velocity and accuracy for throwing, passing, hitting and kicking, can deteriorate. Physical and subjective changes are both less severe in real rather than simulated sport activities. Little objective evidence exists to support exercise-induced mental lapses during sport. A model depicting mind-body interactions during sport competition shows that the RPE centre-motor cortex-working muscle sequence drives overall performance levels and, hence, fatigue symptoms. The sporting outputs from this sequence can be modulated by interactions with muscle afferent and circulatory feedback, and with psychological and decision-making inputs. Importantly, compensatory processes exist at many levels to protect against performance decrements. Small changes of putative fatigue factors can also be protective. We show that individual fatigue factors, including diminished carbohydrate availability, elevated serotonin, hypoxia, acidosis, hyperkalaemia, hyperthermia, dehydration and reactive oxygen species, each contribute to several fatigue symptoms. Thus, multiple symptoms of fatigue can occur simultaneously, and the underlying mechanisms overlap and interact. Based on this understanding, we reinforce the proposal that fatigue is best described globally as an exercise-induced decline of performance, as this is inclusive of all viewpoints.
Abstract:
Statement: Jams, Jelly Beans and the Fruits of Passion. Let us search, instead, for an epistemology of practice implicit in the artistic, intuitive processes which some practitioners do bring to situations of uncertainty, instability, uniqueness, and value conflict. (Schön 1983, p. 40) Game On was born out of the idea of creative community; finding, networking, supporting and inspiring the people behind the face of an industry, those in the midst of the machine and those intending to join. We understood this moment to be a pivotal opportunity to nurture a new emerging form of game making, in an era of change, where the old industry models were proving to be unsustainable. As soon as we started putting people into a room under pressure, to make something in 48hrs, a whole pile of evolutionary creative responses emerged. People refashioned their craft in a moment of intense creativity that demanded different ways of working, an adaptive approach to the craft of making games – small – fast – indie. An event like the 48hrs forces participants’ attention onto the process as much as the outcome. As one game industry professional taking part in a challenge for the first time observed: there are three paths in the genesis from idea to finished work: the path that focuses on mechanics; the path that focuses on team structure and roles; and the path that focuses on the idea, the spirit – and the more successful teams put the spirit of the work first and foremost. The spirit drives the adaptation; it becomes improvisation. As Schön says: “Improvisation consists in varying, combining and recombining a set of figures within the schema which bounds and gives coherence to the performance.” (1983, p. 55). This improvisational approach is all about those making the games: the people and the principles of their creative process. This documentation evidences the intensity of their passion, determination and the shit that they are prepared to put themselves through to achieve their goal – to win a cup full of jellybeans and make a working game in 48hrs. 48hr is a project where, on all levels, analogue meets digital. This concept was further explored through the documentation process. All of these pictures were taken with a 1945 Leica III camera. The use of this classic, film-based camera gives the images a granularity and depth; this older, slower technology exposes the very human moments of digital creativity. ____________________________ Schön, D. A. 1983, The Reflective Practitioner: How Professionals Think in Action, Basic Books, New York
Abstract:
This study examined whether organizational identification can account for the mechanisms by which two change management practices (communication and participation) influence employees’ intentions to support change. The context was a sample of 82 hotel employees in the early stages of a re-brand. Identification with the new hotel fully mediated the relationship between communication and both adaptive and proactive intentions to support change, as well as the relationship between participation and proactive intentions.
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation yields a stable, high-performing multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even in systems with plentiful computing resources. This thesis proposes a cache aware adaptive closed loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the resource allocation problem are well considered. These approaches employ either static or dynamic optimization methods, or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of these approaches, which do consider the dynamic nature of multiprocessor systems, apply only a basic closed loop system; hence, they fail to take the time-varying and uncertain character of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed loop cache aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time series statistics. For the identified cache resource dynamics, our closed loop cache aware adaptive scheduling framework enforces instruction fairness for the threads. Fairness, in the context of our research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed loop cache aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which reduces the co-runner cache impact on thread performance. The second is the development of relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed loop aspect to the cache aware scheduling system. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied within our closed loop cache aware adaptive scheduling framework to estimate the highly uncertain and time-varying cache resource patterns of threads.
The second minor contribution is the design of the controller design module; the algebraic controller design algorithm, pole placement, is utilized to design the relevant controller, which is able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache aware scheduling system together constitute our final framework, the closed loop cache aware adaptive scheduling framework. The third minor contribution is the validation of this framework's efficiency in overcoming co-runner cache dependency. Time-series statistical counters were developed for the M-Sim Multi-Core Simulator, and the theoretical findings and mathematical formulations were implemented as MATLAB m-file software code. In this way, the overall framework was tested and the experimental outcomes analyzed. According to our experimental outcomes, we conclude that our closed loop cache aware adaptive scheduling framework successfully drives the co-runner cache dependent thread instruction count to the co-runner independent instruction count, with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
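The thesis names a QR factorised RLS estimator; as a rough illustration of the underlying idea only, the sketch below uses the plain RLS update with a forgetting factor (numerically less robust than the QR form) and fits an assumed second-order autoregressive model of cache miss counts. The model order, forgetting factor and simulated data are illustrative assumptions, not the thesis's specification.

```python
import numpy as np

class RLSEstimator:
    """Plain recursive least squares with exponential forgetting.

    Tracks time-varying parameters theta in y[k] = phi[k] . theta + noise;
    a stand-in for the QR RLS used to identify cache miss dynamics.
    """
    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)        # current parameter estimates
        self.P = np.eye(n_params) * 1000.0     # large P: initially uninformed
        self.lam = forgetting                  # < 1 discounts old samples

    def update(self, phi, y):
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.theta += gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Example: recover a made-up miss-count model y[k] = 1.2*y[k-1] - 0.5*y[k-2].
rng = np.random.default_rng(1)
true_a = np.array([1.2, -0.5])
y = [0.0, 1.0]
est = RLSEstimator(n_params=2)
for _ in range(300):
    y.append(true_a @ [y[-1], y[-2]] + rng.normal(0, 0.1))
    est.update(np.array([y[-2], y[-3]]), y[-1])
print("estimated coefficients:", est.theta)   # approaches [1.2, -0.5]
```

The forgetting factor is what lets the estimator track time-varying patterns: older samples are exponentially down-weighted, so the estimates adapt when a thread's cache behaviour shifts.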
Abstract:
The majority of Australians will work, sleep and die in the garments of the mass market. Yet, as Ian Griffiths has termed it, the designers of these garments are ‘invisible’ (2000). To the general public, the values, opinions and individual design processes of these designers are as unknown as their names. However, the designer’s role is crucial in making decisions that will have impacts throughout the life of the garment. The high product volumes of the mass market ensure that even a small decision in the design process, to source a particular fabric, or to use a certain trim or textile finish, can have a profound environmental or social effect. While big companies in Australia have implemented some visible strategies for sustainability, it is uncertain how these may have flowed through to design practices. To explore this question, this presentation will discuss preliminary findings from in-depth semi-structured interviews with Australian mass market fashion designers and product developers. The aim of the interviews was to hear the voice of the insider: to listen to mass market designers describe their design process, discuss the Australian fashion industry and its future challenges and opportunities, and comment on what ‘sustainability’ for their industry could look like. These interviews will be discussed within the framework of design philosopher Tony Fry’s writing on design redirection for sustainability.
Abstract:
Technologies and languages for integrated processes are a relatively recent innovation, yet over that short history many divergent waves of innovation have transformed process integration. Like sockets and distributed objects, early workflow systems offered programming interfaces that connected the process modelling layer to any middleware. BPM systems emerged later, connecting the modelling world to middleware through components. While BPM systems increased ease of use (modelling convenience), long-standing and complex interactions involving many process instances remained difficult to model. Enterprise Service Buses (ESBs) followed, connecting process models to heterogeneous forms of middleware. ESBs, however, generally forced modellers to choose a particular underlying middleware and to stick to it, despite their ability to connect with many forms of middleware. Furthermore, ESBs encourage process integrations to be modelled on their own, logically separate from the process model. This can lead to an inability to reason about long-standing conversations at the process layer. Technologies and languages for process integration generally lack formality. This has led to arbitrariness in the underlying language building blocks. Conceptual holes exist in a range of technologies and languages for process integration, and this can lead to customer dissatisfaction and the failure of integration projects to reach their potential. Standards for process integration share fundamental flaws similar to those of the languages and technologies. Standards are also in direct competition with other standards, causing a lack of clarity. Thus the area of greatest risk in a BPM project remains process integration, despite major advancements in the technology base. This research examines some fundamental aspects of communication middleware and how these fundamental building blocks of integration can be brought to the process modelling layer in a technology agnostic manner. In this way process modelling can be conceptually complete without becoming stuck in a particular middleware technology. Coloured Petri nets are used to define a formal semantics for the fundamental aspects of communication middleware. They provide the means to define and model the dynamic aspects of various integration middleware. Process integration patterns are used as a tool to codify common problems to be solved. Object Role Modelling, a formal modelling technique, was used to define the syntax of a proposed process integration language. This thesis provides several contributions to the field of process integration. It proposes a framework defining the key notions of integration middleware. This framework provides a conceptual foundation upon which a process integration language could be built. The thesis defines an architecture that allows various forms of middleware to be aggregated and reasoned about at the process layer. This thesis provides a comprehensive set of process integration patterns. These constitute a benchmark for the kinds of problems a process integration language must support. The thesis proposes a process integration modelling language and a partial implementation that is able to enact the language. A process integration pilot project in a German hospital is briefly described at the end of the thesis. The pilot is based on ideas in this thesis.
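As a toy sketch only (not the thesis's Coloured Petri net models), the following shows the basic CPN vocabulary the abstract relies on: places holding multisets of coloured tokens, and transitions that consume and produce tokens. Here it models a one-way message channel between a sender and a receiver; the place names and payload are invented for illustration.

```python
from collections import Counter

# Places hold multisets of coloured tokens; the token "colour" here is the
# message payload. A transition is enabled when its input place holds a token.
places = {
    "sender_out": Counter({("order", 42): 1}),   # one message waiting to send
    "channel": Counter(),
    "receiver_in": Counter(),
}

def fire(src, dst):
    """Fire a transition: consume one token from src, produce it in dst."""
    for token, count in places[src].items():
        if count > 0:
            places[src][token] -= 1
            places[dst][token] += 1
            return token
    return None   # transition not enabled: src holds no tokens

print(fire("sender_out", "channel"))    # send    -> ('order', 42)
print(fire("channel", "receiver_in"))   # receive -> ('order', 42)
print(fire("sender_out", "channel"))    # not enabled -> None
print(dict(places["receiver_in"]))      # {('order', 42): 1}
```

The appeal of this style of semantics is that queues, topics and other middleware channels can all be described with the same place/transition machinery, independent of any particular middleware product.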
Abstract:
Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. 
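To make this averaging argument concrete, here is a toy numerical sketch; the residues and rates are invented for illustration and are not data from the study. When the preferred residue at one subsite depends on what occupies a neighbouring subsite, positional averages flatten out and the cooperative substrates become invisible.

```python
from statistics import mean

# Two subsites (P1, P2), two residues (A, B); this toy protease strongly
# prefers matched pairs, so the best residue at P1 depends on P2.
rates = {
    ("A", "A"): 10.0,
    ("A", "B"): 1.0,
    ("B", "A"): 1.0,
    ("B", "B"): 10.0,
}

# PS-SCL-style readout: fix one subsite, average over the other.
for pos in (0, 1):
    for res in ("A", "B"):
        pooled = mean(r for seq, r in rates.items() if seq[pos] == res)
        print(f"P{pos + 1} = {res}: mean rate {pooled}")   # 5.5 everywhere

# The positional profile is flat, yet individual (SML-style) measurements
# reveal two highly preferred substrates; averaging has negated the
# cooperativity entirely.
best = max(rates, key=rates.get)
print("best individual substrate:", best, "rate", rates[best])
```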
Before this study, no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and the cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison of data from PS-SCLs with libraries containing individually synthesised peptides (sparse matrix library; SML) was carried out. These SML libraries were designed to include all possible sequence combinations of the residues suggested to be preferred by a protease using the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening, as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14-amino-acid circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease activated receptor signalling by KLK4 in vitro. Moreover, SFTI-FCQR and paclitaxel synergistically reduced the growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise the structural determinants that support the rigidity of the binding loop, and thereby prevent the engineered inhibitor from reaching its full potential.
An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher frequency of formation and number of internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed for the production of second generation inhibitors with enhanced binding affinity toward both targets, and highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases. The findings from this study show that although PS-SCLs are a useful tool for high throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal the optimal subsite occupancy that arises from cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining the structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.
Abstract:
Is it possible to control identities using performance management systems (PMSs)? This paper explores the theoretical fusion of management accounting and identity studies, providing a synthesised view of control, PMSs and identification processes. It argues that the effective use of PMSs generates a range of obtrusive mechanistic and unobtrusive organic controls that mediate identification processes to achieve a high level of identity congruency between individuals and collectives (groups and organisations). The paper contends that the mechanistic control of PMSs provides sensebreaking effects and also creates structural conditions for sensegiving in top-down identification processes. These processes encourage individuals to continue the bottom-up processes of sensemaking, enacting identity and constructing identity narratives. Over time, PMS activities and conversations periodically mediate several episodes of identification to connect past, current and future identities. To explore this relationship, the dual locus of control (collectives and individuals) is emphasised to explicate their interplay. This multidisciplinary approach contributes to explaining the multidirectional effects of PMSs in obtrusive as well as unobtrusive ways, in order to control the nature of collectives and individuals in organisations.