963 results for BrdU incorporation
Abstract:
Background: Both sorghum (Sorghum bicolor) and sugarcane (Saccharum officinarum) are members of the Andropogoneae tribe in the Poaceae and are each other's closest relatives amongst cultivated plants. Both are relatively recent domesticates and comparatively little of the genetic potential of these taxa and their wild relatives has been captured by breeding programmes to date. This review assesses the genetic gains made by plant breeders since domestication and the progress in the characterization of genetic resources and their utilization in crop improvement for these two related species. Genetic Resources: The genome of sorghum has recently been sequenced, providing a great boost to our knowledge of the evolution of grass genomes and the wealth of diversity within S. bicolor taxa. Molecular analysis of the Sorghum genus has identified close relatives of S. bicolor with novel traits, endosperm structure and composition that may be used to expand the cultivated gene pool. Mutant populations (including TILLING populations) provide a useful addition to genetic resources for this species. Sugarcane is a complex polyploid with a large and variable number of copies of each gene. The wild relatives of sugarcane represent a reservoir of genetic diversity for use in sugarcane improvement. Techniques for quantitative molecular analysis of gene or allele copy number in this genetically complex crop have been developed. SNP discovery and mapping in sugarcane have been advanced by the development of high-throughput techniques for ecoTILLING in sugarcane. Genetic linkage maps of the sugarcane genome are being improved for use in breeding selection. The improvement of both sorghum and sugarcane will be accelerated by the incorporation of more diverse germplasm into the domesticated gene pools using molecular tools and the improved knowledge of these genomes.
Abstract:
This research investigates the symbiotic relationship between composition and improvisation and the notion of improvisation itself. With a specific interest in developing, extending and experimenting with the relationship of improvisation within predetermined structures, the creative work component of this research involved composing six new works with varying approaches for The Andrea Keller Quartet and guest improvisers, for performance on a National Australian tour. This is documented in the CD recording Galumphing Round the Nation - Collaborations Tour 2009. The exegesis component is intended to run alongside the creative work and discusses the central issues surrounding improvisation in an ensemble context and the subject of composing for improvisers. Specifically, it questions the notion that when music emphasises a higher ratio of spontaneous to pre-determined elements, and is exposed to the many variables of a performance context, particularly through its incorporation of visitant improvisers, the resultant music should potentially be measurably altered with each performance. This practice-led research demonstrates the effect of concepts such as individuality, variability within context, and the interactive qualities of contemporary jazz ensemble music. Through the analysis and comparison of the treatment of the six pieces over thirteen performances with varying personnel, this exegesis proposes that, despite the expected potential for spontaneity in contemporary jazz music, the presence of established patterns, the desire for familiarity and the intuitive tendency towards accepted protocols ensure that the music which emerges is not as mutable as initially anticipated.
Abstract:
The concept of system use has suffered from a "too simplistic definition" (DeLone and McLean [9], p. 16). This paper reviews various attempts at conceptualization and measurement of system use and then proposes a re-conceptualization of it as "the level of incorporation of an information system within a user's processes." We then go on to develop the concept of a Functional Interface Point (FIP) and four dimensions of system usage: automation level, the proportion of the business process encoded by the information system; extent, the proportion of the FIPs used by the business process; frequency, the rate at which FIPs are used by the participants in the process; and thoroughness, the level of use of information/functionality provided by the system at an FIP. The article concludes with a discussion of some implications of this re-conceptualization and areas for follow-on research.
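As a rough illustration only, the sketch below shows one way the four dimensions just defined could be computed from logged data about Functional Interface Points; the data structure, field names and aggregation choices (for example, averaging frequency across used FIPs) are assumptions made here, not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class FIP:
    """One Functional Interface Point: a point at which the business
    process touches the information system (fields are illustrative)."""
    name: str
    functions_available: int   # information/functionality offered at this FIP
    functions_used: int        # how much of that offering is actually used
    uses_per_week: float       # observed usage rate at this FIP
    used: bool                 # whether the business process uses this FIP

def usage_dimensions(fips, process_steps, encoded_steps):
    """Compute the four dimensions of system usage sketched in the abstract."""
    used = [f for f in fips if f.used]
    return {
        # automation level: proportion of the business process encoded by the IS
        "automation_level": encoded_steps / process_steps,
        # extent: proportion of the FIPs used by the business process
        "extent": len(used) / len(fips),
        # frequency: rate at which FIPs are used (mean across used FIPs,
        # an aggregation choice assumed for this sketch)
        "frequency": sum(f.uses_per_week for f in used) / len(used),
        # thoroughness: level of use of the information/functionality
        # provided by the system at the used FIPs
        "thoroughness": sum(f.functions_used for f in used)
                        / sum(f.functions_available for f in used),
    }

# Hypothetical example: a 20-step business process with 12 steps encoded
# in the information system and three FIPs, one of which is never used.
fips = [
    FIP("order entry",  functions_available=10, functions_used=7, uses_per_week=40.0, used=True),
    FIP("credit check", functions_available=4,  functions_used=1, uses_per_week=5.0,  used=True),
    FIP("reporting",    functions_available=12, functions_used=0, uses_per_week=0.0,  used=False),
]
print(usage_dimensions(fips, process_steps=20, encoded_steps=12))
```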
Abstract:
In response to concerns about the quality of English Language Learning (ELL) education at tertiary level, the Chinese Ministry of Education (CMoE) launched the College English Reform Program (CERP) in 2004. By means of a press release (CMoE, 2005) and a guideline document titled College English Curriculum Requirements (CECR) (CMoE, 2007), the CERP proposed two major changes to the College English assessment policy, which were: (1) the shift to optional status for the compulsory external test, the College English Test Band 4 (CET4); and (2) the incorporation of formative assessment into the existing summative assessment framework. This study investigated the interactions between the College English assessment policy change, its theoretical underpinnings, and the assessment practices within two Chinese universities (one Key University and one Non-Key University). It adopted a sociocultural theoretical perspective to examine the implementation process as experienced by local actors at the institutional and classroom levels. Systematic data analysis using a constant comparative method (Merriam, 1998) revealed that contextual factors and implementation issues did not lead to significant differences between the two cases. A lack of training in assessment, together with sociocultural factors such as the traditional emphasis on the product of learning and the hierarchical teacher-student relationship, was decisive and responsible for the limited effect of the reform.
Abstract:
We report the photoinduced conjugation of polymers synthesized via reversible addition-fragmentation chain transfer (RAFT) polymerization with a number of low molecular weight (functional) olefins. Upon irradiation of a solution of an aliphatic alkene and the benzyl dithioacetic acid ester (CPDA) or dodecyl trithiocarbonate (DoPAT) functional poly(alkyl acrylate) at the absorption wavelength of the thiocarbonyl group (315 nm), incorporation of the alkene at the polymer chain-end occurred. The most efficient systems identified with regard to the rate of reaction and yield were poly(butyl acrylate)/CPDA/ethyl vinyl ether (78% monoinsertion product after 1 h) and poly(butyl acrylate)/CPDA/1-pentene (73% insertion product after 7 h) at ambient temperature. An in-depth analysis of the reaction mechanism by ¹H NMR and online size-exclusion chromatography-electrospray ionization tandem mass spectrometry (SEC/ESI-MSn) revealed that a possible [2 + 2] photoaddition mechanism of conjugation does not take place. Instead, fast β-cleavage of the photoexcited RAFT end group with subsequent radical addition of an alkene was observed for all employed systems. The presented reaction thus provides a means of spatial and temporal control for the conjugation of alkenes to thiocarbonylthio-capped macromolecules via the use of UV radiation.
Abstract:
A series of polymers with a comb architecture were prepared where the poly(olefin sulfone) backbone was designed to be highly sensitive to extreme ultraviolet (EUV) radiation, while the well-defined poly(methyl methacrylate) (PMMA) arms were incorporated with the aim of increasing structural stability. It is hypothesized that upon exposure to EUV radiation rapid degradation of the polysulfone backbone will occur, leaving behind the well-defined PMMA arms. The synthesized polymers were characterised and their performance as chain-scission EUV photoresists was evaluated. It was found that all materials possess high sensitivity towards degradation by EUV radiation (E0 in the range 4–6 mJ cm⁻²). Selective degradation of the poly(1-pentene sulfone) backbone relative to the PMMA arms was demonstrated by mass spectrometry headspace analysis during EUV irradiation and by grazing-angle ATR-FTIR. EUV interference patterning has shown that the materials are capable of resolving 30 nm 1:1 line:space features. The incorporation of PMMA was found to increase the structural integrity of the patterned features. Thus, it has been shown that terpolymer materials possessing a highly sensitive poly(olefin sulfone) backbone and PMMA arms are able to provide a tuneable materials platform for chain-scission EUV resists. These materials have the potential to benefit applications that require nanopatterning, such as computer chip manufacture and nano-MEMS.
Abstract:
Concerns raised in educational reports about school science in terms of students' outcomes and attitudes, as well as science teaching practices, prompted investigation into science learning and teaching practices at the foundational level of school science. Without science content and process knowledge, understanding issues of modern society and active participation in decision-making is difficult. This study contended that a focus on the development of the language of science could enable learners to engage more effectively in learning science and enhance their interest and attitudes towards science. Furthermore, it argued that explicit teaching practices where science language is modelled and scaffolded would facilitate the learning of science by young children at the beginning of their formal schooling. This study aimed to investigate science language development at the foundational level of school science learning in the preparatory-school with students aged five and six years. It focussed on the language of science and science teaching practices in early childhood. In particular, the study focussed on the capacity for young students to engage with and understand science language. Previous research suggests that students have difficulty with the language of science most likely because of the complexities and ambiguities of science language. Furthermore, literature indicates that tensions transpire between traditional science teaching practices and accepted early childhood teaching practices. This contention prompted investigation into means and models of pedagogy for learning foundational science language, knowledge and processes in early childhood. This study was positioned within qualitative assumptions of research and reported via descriptive case study. It was located in a preparatory-school classroom with the class teacher, teacher-aide, and nineteen students aged four and five years who participated with the researcher in the study. Basil Bernstein's pedagogical theory coupled with Halliday's Systemic Functional Linguistics (SFL) framed an examination of science pedagogical practices for early childhood science learning. Students' science learning outcomes were gauged by focussing a Hallidayan lens on their oral and reflective language during 12 science-focussed episodes of teaching. Data were collected throughout the 12 episodes. Data included video and audio-taped science activities, student artefacts, journal and anecdotal records, semi-structured interviews and photographs. Data were analysed according to Bernstein's visible and invisible pedagogies and performance and competence models. Additionally, Halliday's SFL provided the resource to examine teacher and student language to determine teacher/student interpersonal relationships as well as specialised science and everyday language used in teacher and student science talk. This analysis established the socio-linguistic characteristics that promoted science competencies in young children. An analysis of the data identified those teaching practices that facilitate young children's acquisition of science meanings. Positive indications for modelling science language and science text types to young children have emerged. Teaching within the studied setting diverged from perceived notions of common early childhood practices and the benefits of dynamic shifting pedagogies were validated.
Significantly, young students demonstrated use of particular specialised components of school-science language in terms of science language features and vocabulary. As well, their use of language demonstrated the students' knowledge of science concepts, processes and text types. The young students made sense of science phenomena through their incorporation of a variety of science language and text-types in explanations during both teacher-directed and independent situations. The study informs early childhood science practices as well as practices for foundational school science teaching and learning. It has exposed implications for science education policy, curriculum and practices. It supports other findings in relation to the capabilities of young students. The study contributes to Systemic Functional Linguistic theory through the development of a specific resource to determine the technicality of teacher language used in teaching young students. Furthermore, the study contributes to methodology practices relating to Bernsteinian theoretical perspectives and has demonstrated new ways of depicting and reporting teaching practices. It provides an analytical tool which couples Bernsteinian and Hallidayan theoretical perspectives. Ultimately, it defines directions for further research in terms of foundation science language learning, ongoing learning of the language of science and learning science, science teaching and learning practices, specifically in foundational school science, and relationships between home and school science language experiences.
Abstract:
Social outcomes, in particular intangible social outcomes, are generally difficult to achieve in the construction industry due to the predominantly episodic, fragmented and heavily regulated nature of construction, which predisposes the industry towards mainstream construction processes and design. The Western Australian ‘Percent for Art’ policy is recognized for stimulating social outcomes by creating richer and more aesthetically pleasing social environments through the incorporation of artwork into public buildings. A case study of four Percent for Art projects highlights the role of the Artwork Selection Committee in incorporating artwork into construction. A total of 20 semi-structured interviews were conducted with committee members and policy officers. Data analysis involved a combination of pattern coding and matrix categorization, and resulted in the identification of the committee’s three key elements of collaborative communication, democratic decision-making and project champions. The findings suggest these key elements foster the interaction, communication and relationships needed to facilitate feedback, enhance relationships, create cross-functional teams and lower project resistance, which are all necessary to overcome constraints to social outcomes in construction. The findings provide greater insight into the mechanisms for achieving social outcomes and a basis for future discussion about the processes for achieving social outcomes in the construction industry.
Abstract:
This study uses and extends the theory of planned behavior to develop and empirically test a model of the social condition of riparian behavior. The theory of planned behavior is applicable to understanding the complexity of social conditions underlying waterway health. Structural equation modelling (SEM) identified complex interrelationships between variables. Aspects of respondents' beliefs impacted on their stated intentions and behavior and were partially mediated by perceived behavioral control. The way in which people used waterways also influenced their actions. This study adds to theoretical knowledge through the development of scales that measure aspects of the social condition of waterways and examines their interrelationships for the first time. It extends the theory of planned behavior through the incorporation of an objective measure of participants' knowledge of waterway health. It also has practical implications for managers involved in sustaining and improving the social condition of river ecosystems.
Abstract:
This article presents a two-stage analytical framework that integrates ecological crop (animal) growth and economic frontier production models to analyse the productive efficiency of crop (animal) production systems. The ecological crop (animal) growth model estimates "potential" output levels given the genetic characteristics of crops (animals) and the physical conditions of locations where the crops (animals) are grown (reared). The economic frontier production model estimates "best practice" production levels, taking into account economic, institutional and social factors that cause farm and spatial heterogeneity. In the first stage, both ecological crop growth and economic frontier production models are estimated to calculate three measures of productive efficiency: (1) technical efficiency, as the ratio of actual to "best practice" output levels; (2) agronomic efficiency, as the ratio of actual to "potential" output levels; and (3) agro-economic efficiency, as the ratio of "best practice" to "potential" output levels. Also in the first stage, the economic frontier production model identifies factors that determine technical efficiency. In the second stage, agro-economic efficiency is analysed econometrically in relation to economic, institutional and social factors that cause farm and spatial heterogeneity. The proposed framework has several important advantages in comparison with existing proposals. Firstly, it allows the systematic incorporation of all physical, economic, institutional and social factors that cause farm and spatial heterogeneity into the analysis of the productive performance of crop and animal production systems. Secondly, location-specific physical factors are not modelled symmetrically with other economic inputs of production. Thirdly, climate change and technological advancements in crop and animal sciences can be modelled in a "forward-looking" manner. Fourthly, knowledge in agronomy and data from experimental studies can be utilised for socio-economic policy analysis. The proposed framework can be easily applied in empirical studies due to the current availability of ecological crop (animal) growth models, farm or secondary data, and econometric software packages. The article highlights several directions for empirical studies that researchers may pursue in the future.
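Stated compactly, the three ratios defined above can be written as follows; the symbols y_act, y_best and y_pot (actual, "best practice" and "potential" output) are shorthand introduced here for illustration, not notation from the article.

```latex
\[
\mathrm{TE}=\frac{y_{\mathrm{act}}}{y_{\mathrm{best}}},\qquad
\mathrm{AE}=\frac{y_{\mathrm{act}}}{y_{\mathrm{pot}}},\qquad
\mathrm{AEE}=\frac{y_{\mathrm{best}}}{y_{\mathrm{pot}}},
\qquad\text{so that}\qquad
\mathrm{AE}=\mathrm{TE}\times\mathrm{AEE}.
\]
```

The identity on the right follows directly from the definitions: agronomic efficiency factors into the technical efficiency estimated against the economic frontier in the first stage and the agro-economic efficiency that the second stage relates to farm and spatial heterogeneity.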
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology by contributing to risk assessment statistical methodology, and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems, namely risk assessment analyses for wastewater, and secondly, in a four dimensional dataset, assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset which satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, with two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with increasing variances with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, this approach does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as being a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
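To make the "CAR layered" idea concrete, here is a minimal sketch (plain NumPy/SciPy, not the WinBUGS or pyMCMC code used in the thesis) that builds an intrinsic CAR precision matrix separately for each depth layer, so spatial smoothing only links sites within the same layer and each layer carries its own spatial precision; the grid size, neighbourhood rule and precision values are assumed purely for illustration.

```python
import numpy as np
from scipy.linalg import block_diag

def car_precision(n_rows, n_cols, tau):
    """Intrinsic CAR precision Q = tau * (D - W) for a regular
    n_rows x n_cols grid with rook (4-nearest) neighbours.
    (Intrinsic CAR, so Q is singular, as usual for this prior.)"""
    n = n_rows * n_cols
    W = np.zeros((n, n))
    for r in range(n_rows):
        for c in range(n_cols):
            i = r * n_cols + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n_rows and 0 <= cc < n_cols:
                    W[i, rr * n_cols + cc] = 1.0
    D = np.diag(W.sum(axis=1))        # number of neighbours at each site
    return tau * (D - W)

# One precision matrix per depth layer: neighbours are defined only within a
# layer, and each layer has its own spatial precision, so the spatially
# structured variance is free to change with depth (assumed values below).
layer_taus = [2.0, 1.5, 0.8, 0.5]
Q_layers = [car_precision(n_rows=6, n_cols=8, tau=t) for t in layer_taus]

# The joint precision for the stacked random field is block-diagonal across
# layers, i.e. there is no spatial smoothing between depths.
Q_full = block_diag(*Q_layers)
print(Q_full.shape)   # (192, 192) for 4 layers of a 6 x 8 grid
```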
Abstract:
Low oxygen pressure (hypoxia) plays an important role in stimulating angiogenesis; there are, however, few studies on preparing hypoxia-mimicking tissue engineering scaffolds. Mesoporous bioactive glass (MBG) has been developed as a scaffold material with excellent osteogenic properties for bone regeneration. Ionic cobalt (Co) is established as a chemical inducer of hypoxia-inducible factor (HIF)-1α, which induces a hypoxia-like response. The aim of this study was to develop hypoxia-mimicking MBG scaffolds by incorporating ionic Co²⁺ into MBG scaffolds and to investigate whether the addition of Co²⁺ ions would induce a cellular hypoxic response in such a tissue engineering scaffold system. The composition, microstructure and mesopore properties (specific surface area, nano-pore volume and nano-pore distribution) of Co-containing MBG (Co-MBG) scaffolds were characterized, and the cellular effects of Co on the proliferation, differentiation, vascular endothelial growth factor (VEGF) secretion, HIF-1α expression and bone-related gene expression of human bone marrow stromal cells (BMSCs) in MBG scaffolds were systematically investigated. The results showed that low amounts of Co (< 5%) incorporated into MBG scaffolds had no significant cytotoxicity, that their incorporation significantly enhanced VEGF protein secretion, HIF-1α expression and bone-related gene expression in BMSCs, and that the Co-MBG scaffolds support BMSC attachment and proliferation. The scaffolds maintain a well-ordered mesopore channel structure and high specific surface area and have the capacity to efficiently deliver antibiotic drugs; in fact, the sustained release of ampicillin by Co-MBG scaffolds gives them excellent anti-bacterial properties. Our results indicate that incorporating cobalt ions into MBG scaffolds is a viable option for preparing hypoxia-mimicking tissue engineering scaffolds and significantly enhances their hypoxia-mimicking function. The hypoxia-mimicking MBG scaffolds have great potential for bone tissue engineering applications by combining enhanced angiogenesis with already existing osteogenic properties.
Supply chain sustainability: a relationship management approach moderated by culture and commitment
Abstract:
This paper explores the nature of relationship management on construction projects in Australia and examines the effects of culture, by means of Schwartz’s value survey, on relationships under different contract strategies. The research was based on the view that the development of a sustainable supply chain depends on the transfer of knowledge and capabilities from the larger players in the supply chain through collaboration brought about by relationship management. The research adopted a triangulated approach in which quantitative data were collected by questionnaire, interviews were conducted to explore and enrich the quantitative data, and case studies were undertaken in order to illustrate and validate the findings. The aim was to investigate how values and attitudes enhance or reduce the incorporation of the supply chain into the project. From the research it was found that the degree of match and mismatch between values and contract strategy impacts commitment and the engagement and empowerment of the supply chain.
Abstract:
DeLone and McLean (1992, p. 16) argue that the concept of “system use” has suffered from a “too simplistic definition.” Despite decades of substantial research on system use, the concept is yet to receive strong theoretical scrutiny. Many measures of system use, and the development of those measures, have often been idiosyncratic and lack credibility or comparability. This paper reviews various attempts at conceptualization and measurement of system use and then proposes a re-conceptualization of it as “the level of incorporation of an information system within a user’s processes.” The definition is supported with the theory of work systems, system, and Key-User-Group considerations. We then go on to develop the concept of a Functional-Interface-Point (FIP) and four dimensions of system usage: extent, the proportion of the FIPs used by the business process; frequency, the rate at which FIPs are used by the participants in the process; thoroughness, the level of use of information/functionality provided by the system at an FIP; and attitude towards use, a set of measures that assess the level of comfort, degree of respect and the challenges set forth by the system. The paper argues that the automation level, the proportion of the business process encoded by the information system, has a mediating impact on system use. The article concludes with a discussion of some implications of this re-conceptualization and areas for follow-on research.
Abstract:
Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. 
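The point about averaging can be seen with a deliberately artificial example (hypothetical dipeptide rates invented here, not data from the thesis): when two residues are only favourable in combination, positional-scanning averages make them indistinguishable from the unfavourable combinations, whereas screening each individually synthesised substrate, as in a sparse matrix library, recovers the cooperative pairs.

```python
import itertools

# Hypothetical cleavage rates for every P2-P1 dipeptide substrate: the
# "cooperative" pairs AA and GG are cleaved fast, the mixed pairs are not.
rates = {("A", "A"): 100.0, ("G", "G"): 100.0,
         ("A", "G"): 1.0,   ("G", "A"): 1.0}
residues = ["A", "G"]

# PS-SCL-style readout: fix one position, average over the other position.
for pos in (0, 1):
    for res in residues:
        pool = [rates[s] for s in itertools.product(residues, repeat=2)
                if s[pos] == res]
        print(f"P{2 - pos} = {res}: mean rate {sum(pool) / len(pool):.1f}")
# Every residue scores the same mean (50.5) at every position, so the
# averaged profile cannot separate the fast substrates AA/GG from the slow
# ones AG/GA; the cooperativity is invisible to the positional scan.

# SML-style readout: each defined substrate is measured on its own.
best = max(rates, key=rates.get)
print("best individual substrate:", "".join(best), rates[best])
```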
Before this study no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison of data from PS-SCLs with libraries containing individually synthesised peptides (sparse matrix library; SML) was carried out. These SML libraries were designed to include all possible sequence combinations of the residues that were suggested to be preferred by a protease using the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high-affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14 amino acid, circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease-activated receptor signalling by KLK4 in vitro. Moreover, SFTI-FCQR and paclitaxel synergistically reduced growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high-affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise structural determinants that support the rigidity of the binding loop and thereby prevent the engineered inhibitor reaching its full potential.
An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher frequency and number of internal hydrogen bonds formed in SFTI variants and lower inhibition constants. These predictions allowed for the production of second-generation inhibitors with enhanced binding affinity toward both targets and highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases. The findings from this study show that although PS-SCLs are a useful tool for high-throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal optimal subsite occupancy due to cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.