975 results for Second Nature
Abstract:
This paper presents a modification of a class of stochastic Runge–Kutta methods proposed by Komori (2007). The slight modification significantly reduces the computational cost of the methods.
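As background for readers unfamiliar with this class of schemes, the sketch below shows a generic two-stage (Heun-type) stochastic Runge–Kutta step for a scalar SDE in Python. It is purely illustrative: the drift and diffusion functions are placeholders, and the scheme shown is not Komori's method; it only indicates the kind of per-step stage evaluations whose cost such modifications aim to reduce.

```python
import numpy as np

def srk_heun_path(a, b, x0, T, n_steps, rng=None):
    """One sample path of dX = a(X) dt + b(X) dW using a two-stage Heun-type step."""
    rng = rng or np.random.default_rng()
    h = T / n_steps
    x = x0
    path = [x0]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        # predictor stage (Euler-Maruyama)
        x_pred = x + a(x) * h + b(x) * dW
        # corrector stage: average drift and diffusion over both stages
        x = x + 0.5 * (a(x) + a(x_pred)) * h + 0.5 * (b(x) + b(x_pred)) * dW
        path.append(x)
    return np.array(path)

# Example: geometric Brownian motion, dX = 0.05 X dt + 0.2 X dW
path = srk_heun_path(lambda x: 0.05 * x, lambda x: 0.2 * x, x0=1.0, T=1.0, n_steps=1000)
```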
Abstract:
The ability to recognise and resolve ethical dilemmas was identified by the Australian Law Reform Commission as one of the ten fundamental lawyering skills. While the ‘Priestley 11’ list of areas of law required to qualify for legal practice includes ethics and professional responsibility, the commitment to ethics learning in Australian law schools has been far from uniform. The obligation imposed by the Priestley 11 is frequently discharged by a traditional teaching and learning approach involving lectures and/or tutorials and focusing on the content of the formal rules of professional responsibility. However, the effectiveness of such an approach is open to question. Instead, a practical rather than a theoretical approach to the teaching of legal ethics is required. Effective final-year student learning of ethics may be achieved by an approach which engages students, enabling them to appreciate the relevance of what they are learning to the real world and facilitating their transition from study to their working lives. Entry into Valhalla, developed with this aim, comprises a suite of modules featuring ‘machinima’ (computer-generated imagery) created using the Second Life virtual environment to contextualise otherwise abstract concepts. It provides an engaging learning environment which enables students to obtain an appreciation of ethical responsibility in a real-world context and facilitates understanding and problem-solving ability.
Abstract:
Objectives. To profile Australian nurse practitioners and their practice in 2009 and compare results with a similar 2007 census. Methods. Self-administered questionnaire. Results. A total of 293 nurse practitioners responded (response rate 76.3%). The majority were female (n = 229, 81.2%); mean age was 47.3 years (s.d. = 8.1). As in 2007, emergency nurse practitioners represented the largest clinical specialty (n = 63, 30.3%). A majority practiced in a metropolitan area (n = 133, 64.3%), a decrease from 2007. Consistent with 2007, only 71.5% (n = 208) were employed as a nurse practitioner and 22.8% (n = 46) were awaiting approval for some or all of their clinical protocols. Demographic data, allocations of tasks, and patterns of practice remained consistent with 2007 results. ‘No Medicare provider number’ (n = 182, 91.0%), ‘no authority to prescribe using the Pharmaceutical Benefits Scheme’ (n = 182, 89.6%) and ‘lack of organisational support’ (n = 105, 52.2%) were reported as ‘limiting’ or ‘extremely limiting’ to practice. Conclusions. Our results demonstrate less than satisfactory uptake of the nurse practitioner role despite authorisation. Barriers constraining nurse practitioner practice had reduced but remained unacceptably high. Adequate professional and political support is necessary to ensure the efficacy and sustainability of this clinical role.
Abstract:
Research on expertise, talent identification and development has tended to be mono-disciplinary, typically adopting geno-centric or environmentalist positions, with an overriding focus on operational issues. In this thesis, the validity of dualist positions on sport expertise is evaluated. It is argued that, to advance understanding of expertise and talent development, a shift towards a multidisciplinary and integrative science focus is necessary, along with the development of a comprehensive multidisciplinary theoretical rationale. Dynamical systems theory is utilised as a multidisciplinary theoretical rationale for the succession of studies, capturing how multiple interacting constraints can shape the development of expert performers. Phase I of the research examines experiential knowledge of coaches and players on the development of fast bowling talent utilising qualitative research methodology. It provides insights into the developmental histories of expert fast bowlers, as well as coaching philosophies on the constraints of fast bowling expertise. Results suggest talent development programmes should eschew the notion of common optimal performance models and emphasize the individual nature of pathways to expertise. Coaching and talent development programmes should identify the range of interacting constraints that impinge on the performance potential of individual athletes, rather than evaluating current performance on physical tests referenced to group norms. Phase II of this research comprises three further studies that investigate several of the key components identified as important for fast bowling expertise, talent identification and development extrapolated from Phase I of this research. This multidisciplinary programme of work involves a comprehensive analysis of fast bowling performance in a cross-section of the Cricket Australia high performance pathways, from the junior, emerging and national elite fast bowling squads. Briefly, differences were found in trunk kinematics associated with the generation of ball speed across the three groups. These differences in release mechanics indicated the functional adaptations in movement patterns as bowlers’ physical and anatomical characteristics changed during maturation. Second to the generation of ball speed, the ability to produce a range of delivery types was highlighted as a key component of expertise in the qualitative phase. The ability of athletes to produce consistent results on different surfaces and in different environments has drawn attention to the challenge of measuring consistency and flexibility in skill assessments. Examination of fast bowlers in Phase II demonstrated that national bowlers can make adjustments to the accuracy of subsequent deliveries during performance of a cricket bowling skills test, and perform a range of delivery types with increased accuracy and consistency. Finally, variability in selected delivery stride ground reaction force components in fast bowling revealed the degenerate nature of this complex multi-articular skill where the same performance outcome can be achieved with unique movement strategies. Utilising qualitative and quantitative methodologies to examine fast bowling expertise, the importance of degeneracy and adaptability in fast bowling has been highlighted alongside learning design that promotes dynamic learning environments.
Abstract:
This thesis explores the business environment for self-publishing musicians at the end of the 20th century and the start of the 21st century from theoretical and empirical standpoints. The exploration begins by asking three research questions: what are the factors affecting the sustainability of an Independent music business; how many of those factors can be directly influenced by an Independent musician in the day-to-day operations of their musical enterprise; and how can those factors be best manipulated to maximise the benefit generated from digital music assets? It answers these questions by considering the nature of value in the music business in light of theories of political economy, then quantitative and qualitative examinations of the nature of participation in the music business, and then auto-ethnographic approaches to the application of two technologically enabled tools available to Independent musicians. By analyzing the results of five different examinations of the topic it answers each research question with reference to four sets of recurring issues that affect the operations of a 21st century music business: the musicians’ personal characteristics, their ability to address their business’s informational needs; their ability to manage the relationships upon which their business depends; and their ability to resolve the remaining technological problems that confront them. It discusses ways in which Independent self-publishing musicians can and cannot deal with these four issues on a day-to-day basis and highlights aspects for which technological solutions do not exist as well as ways in which technology is not as effective as has been claimed. It then presents a self-critique and proposes some directions for further study before concluding by suggesting some common features of 21st century Independent music businesses. This thesis makes three contributions to knowledge. First, it provides a new understanding of the sources of musical value, shows how this explains changes in the music industries over the past 30 years, and provides a framework for predicting future developments in those industries. Second, it shows how the technological discontinuity that has occurred around the start of the 21st century has and has not affected the production and distribution of digital cultural artefacts and thus the attitudes, approaches, and business prospects of Independent musicians. Third, it argues for new understandings of two methods by which self-publishing musicians can grow a business using production methods that are only beginning to be more broadly understood: home studio recording and fan-sourced production. Developed from the perspective of working musicians themselves, this thesis identifies four sets of issues that determine the probable success of musicians’ efforts to adopt new technologies to capture the value of the musicians’ creativity and thereby foster growth that will sustain an Independent music business in the 21st century.
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology, specifically to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems, namely risk assessment analyses for wastewater, and secondly, in a four-dimensional dataset, assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset which satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, with two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model, and the applied contribution the analysis of moisture over depth and estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with increasing variances with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, this approach does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as being a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
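A minimal sketch of the layered-CAR idea, assuming Gaussian observations and a hypothetical within-layer neighbour structure: because neighbours are restricted to the same depth layer, the joint precision matrix is block-diagonal by layer, so each layer's spatial effects can be drawn jointly in a single block Gibbs update. This is an illustration only, not the WinBUGS or pyMCMC implementation used in the thesis.

```python
import numpy as np
from scipy.linalg import cholesky, cho_solve, solve_triangular

def icar_structure(n_sites, neighbours):
    """Intrinsic CAR structure matrix (D - W) for one depth layer.

    neighbours: dict mapping site index -> list of neighbouring site indices,
    restricted to the same layer, as in the CAR layered model described above."""
    Q = np.zeros((n_sites, n_sites))
    for i, nbrs in neighbours.items():
        Q[i, i] = len(nbrs)
        for j in nbrs:
            Q[i, j] = -1.0
    return Q

def block_update_layer(y_l, Q_l, tau_l, sigma2_l, rng):
    """Draw one layer's spatial effects from their Gaussian full conditional in a single block."""
    n = len(y_l)
    prec = tau_l * Q_l + np.eye(n) / sigma2_l      # full-conditional precision
    L = cholesky(prec, lower=True)                 # prec = L L^T
    mean = cho_solve((L, True), y_l / sigma2_l)    # full-conditional mean
    z = rng.standard_normal(n)
    return mean + solve_triangular(L.T, z, lower=False)  # draw from N(mean, prec^{-1})

# Toy example: 4 sites in a line within one layer (invented data and hyperparameters).
rng = np.random.default_rng(0)
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
Q = icar_structure(4, nbrs)
phi = block_update_layer(y_l=np.array([0.3, 0.1, -0.2, 0.4]), Q_l=Q,
                         tau_l=2.0, sigma2_l=0.5, rng=rng)
print(phi)
```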
Abstract:
Resilient organised crime groups survive and prosper despite law enforcement activity, criminal competition and market forces. Corrupt police networks, like any other crime network, must possess resilience characteristics if they are to continue operating and avoid being closed down through detection and arrest of their members. This paper examines the resilience of a large corrupt police network, namely The Joke, which operated in the Australian state of Queensland for a number of decades. The paper uses social network analysis tools to determine the resilience characteristics of the network. This paper also assumes that these characteristics will be different to those of mainstream organised crime groups because the police network operates within an established policing agency rather than as an independent entity hiding within the broader community.
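For illustration, the snippet below shows the kind of social network analysis measures typically used to characterise resilience (density, centrality, and the effect of removing a highly central actor). The edge list is an invented toy network, not data on The Joke.

```python
import networkx as nx

# A made-up toy network (letters stand for actors); not data from the study.
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
              ("D", "E"), ("E", "F"), ("D", "F"), ("F", "G")])

print("density:", nx.density(G))
print("degree centrality:", nx.degree_centrality(G))

# Resilience probe: remove the most central actor (a proxy for detection and arrest)
# and check whether the remaining network stays connected.
bc = nx.betweenness_centrality(G)
key_actor = max(bc, key=bc.get)
H = G.copy()
H.remove_node(key_actor)
print("removed:", key_actor, "| still connected:", nx.is_connected(H))
```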
Abstract:
This chapter focuses on two challenges to science teachers’ knowledge that Fensham identifies as having recently emerged—one a challenge from beyond Science and the other a challenge from within Science. Both challenges stem from common features of contemporary society, namely, its complexity and uncertainty. Both also confront science teachers with teaching situations that contrast markedly with the simplicity and certainty that have been characteristic of most school science education, and hence both present new demands for science teachers’ knowledge and skill. The first, the challenge from without Science, comes from the new world of work and the “knowledge society”. Regardless of their success in traditional school learning, many young persons in many modern economies are now seen as lacking other knowledge and skills that are essential for their personal, social and economic life. The second, the challenge from within Science, derives from changing notions of the nature of science itself. If the complexity and uncertainty of the knowledge society demand new understandings and contributions from science teachers, these are certainly matched by the demands that are posed by the role of complexity and uncertainty in science itself.
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, in which shared computing resources must be allocated among co-runner processes and threads. While efficient resource allocation yields a highly efficient and stable multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even on systems with abundant computing resources. This thesis proposes a cache aware adaptive closed loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the resource allocation problem are well considered. These approaches employ either static or dynamic optimisation methods, or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of these approaches, which consider the dynamic nature of multiprocessor systems, apply only a basic closed loop system; hence, they fail to take the time-varying and uncertain nature of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed loop cache aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time series statistics. For the identified cache resource dynamics, the framework enforces instruction fairness for the threads. Fairness, in the context of our research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed loop cache aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which mitigates the co-runners' cache impact on thread performance. The second is the development of relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed loop aspect to the cache aware scheduling system. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied within our closed loop cache aware adaptive scheduling framework to estimate the highly uncertain and time-varying cache resource patterns of threads.
The second minor contribution is the design of the controller design module; the algebraic controller design algorithm, pole placement, is utilised to design the relevant controller, which provides the desired time-varying control action. The adaptive self-tuning control framework and the cache aware scheduling system together constitute our final framework, the closed loop cache aware adaptive scheduling framework. The third minor contribution is the validation of the efficiency of this cache aware adaptive closed loop scheduling framework in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim Multi-Core Simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-file code. In this way, the overall framework is tested and the experimental outcomes are analysed. According to these outcomes, it is concluded that our closed loop cache aware adaptive scheduling framework successfully drives the co-runner cache dependent thread instruction count to the co-runner independent instruction count with an error margin of up to 25% when the cache is highly utilised. In addition, the thread cache access pattern is estimated with 75% accuracy.
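The following is a minimal sketch of recursive least squares (RLS) parameter estimation, the general technique named above for tracking time-varying cache resource patterns. The model structure (a second-order autoregressive model of per-interval cache-miss counts) and the synthetic data are assumptions for illustration; the thesis uses a QR-based RLS variant integrated with the M-Sim simulator and MATLAB, which is not reproduced here.

```python
import numpy as np

class RLSEstimator:
    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)          # parameter estimates
        self.P = np.eye(n_params) * 1000.0       # inverse-correlation matrix
        self.lam = forgetting                    # forgetting factor for time-varying dynamics

    def update(self, phi, y):
        """One RLS step: phi = regressor vector, y = observed output."""
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        err = y - phi @ self.theta
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Example: track an AR(2) model of per-interval cache-miss counts,
# y[k] = 0.6*y[k-1] - 0.2*y[k-2] + noise (synthetic data).
rng = np.random.default_rng(0)
y = np.zeros(200)
for k in range(2, 200):
    y[k] = 0.6 * y[k - 1] - 0.2 * y[k - 2] + rng.normal(0, 1)

est = RLSEstimator(n_params=2)
for k in range(2, 200):
    est.update(np.array([y[k - 1], y[k - 2]]), y[k])
print("estimated AR coefficients:", est.theta)   # should approach [0.6, -0.2]
```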
Abstract:
Purpose – This study examines the nature of consumers’ perceptions of the value they derive from the everyday experiential consumption of mobile phones and how mobile marketing (m-marketing) can potentially enhance these value perceptions. Methodology – Q methodology is used to examine how consumers’ subjective perceptions and opinions are shared at a collective level. Forty participants undertook two Q sorts and the data was analysed using PQ-method. Findings – The first Q sort identified three profiles of perceived value: the Mobile Pragmatists, the Mobile Connectors and the Mobile Revellers. The second Q sort identified two profiles of perceived value of m-marketing: one emerging from the shared opinions of the Mobile Pragmatists and the Mobile Connectors, and the second from the Mobile Revellers. Implications/limitations – The findings show how consumers can be segmented based on their contextualised perceived value of consuming mobile phones and how the potential for m-marketing is perceived in ways that can enhance these value perceptions. Limitations relate to deriving statements for the Q sorts and the generalisability of the results. Practical implications – The findings highlight ways to tailor m-marketing strategies to complement consumers’ perceptions of the value offered through their mobile phones. Originality/value of paper – The study contributes to the literature through using Q methodology to examine two subjective areas of consumer behaviour, experiential consumption and consumer perceived value. Keywords: mobile phones, mobile phone marketing, consumer perceived value, Q methodology, experiential consumption. Classification: Research paper
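As a rough sketch of the by-person factoring that underlies Q methodology, the snippet below correlates participants' Q sorts with one another and extracts factors from the person-by-person correlation matrix (here via a simple principal-components extraction). The data are random placeholders; the study itself used PQ-method, and the three-factor choice merely echoes the three profiles reported above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_participants, n_statements = 40, 30
sorts = rng.normal(size=(n_participants, n_statements))   # each row is one participant's Q sort

R = np.corrcoef(sorts)                       # 40 x 40 correlations between participants
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = 3                                # illustrative, echoing the three value profiles above
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
print("participant loadings on the first factor:", np.round(loadings[:5, 0], 2))
```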
Abstract:
Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. 
Before this study no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison of data from PS-SCLs with libraries containing individually synthesised peptides (sparse matrix library; SML) was carried out. These SML libraries were designed to include all possible sequence combinations of the residues that were suggested to be preferred by a protease using the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14-amino-acid circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease-activated receptor signalling by KLK4 in vitro. Moreover, SFTI-FCQR and paclitaxel synergistically reduced growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise structural determinants that support the rigidity of the binding loop and thereby prevent the engineered inhibitor reaching its full potential.
An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher frequency of formation and number of internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed for the production of second-generation inhibitors with enhanced binding affinity toward both targets and highlighted the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases. The findings from this study show that although PS-SCLs are a useful tool for high throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal optimal subsite occupancy due to cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.
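The core argument, that per-position averaging can mask intermolecular cooperativity, can be illustrated numerically. The toy example below uses invented hydrolysis rates for a two-position substrate: the per-position averages (what a PS-SCL screen effectively reports) point to one sequence, while screening every defined combination individually (the SML approach) reveals a different, far better substrate created by a pairwise interaction. The residues and rates are fabricated for illustration only.

```python
import itertools
import numpy as np

p1_residues = ["R", "K", "F"]
p2_residues = ["S", "A", "Q"]

# Invented per-residue contributions plus one strong pairwise (cooperative) interaction.
base = {"R": 3.2, "K": 2.8, "F": 1.0, "S": 1.5, "A": 1.4, "Q": 0.8}
interaction = {("F", "Q"): 5.0}              # F-Q together is far better than either alone suggests

rates = {(a, b): base[a] + base[b] + interaction.get((a, b), 0.0)
         for a, b in itertools.product(p1_residues, p2_residues)}

# PS-SCL-style prediction: choose each position's residue by its average rate over the other position.
avg_p1 = {a: np.mean([rates[(a, b)] for b in p2_residues]) for a in p1_residues}
avg_p2 = {b: np.mean([rates[(a, b)] for a in p1_residues]) for b in p2_residues}
pscl_pick = (max(avg_p1, key=avg_p1.get), max(avg_p2, key=avg_p2.get))

# SML-style answer: screen every defined sequence individually.
sml_pick = max(rates, key=rates.get)
print("PS-SCL-style prediction:", pscl_pick)   # ('R', 'S') from the averages
print("best individual substrate:", sml_pick)  # ('F', 'Q') because of cooperativity
```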
Abstract:
Organisational commitment is extensively represented in the human resource management and organisational behaviour literature as a key factor in the relationship between employees and their organisations. Although Allen and Meyer (1990) noted that an employee can experience the three components of organisational commitment simultaneously, as a commitment profile, the majority of studies have examined the antecedents and outcomes of affective, continuance, and normative commitment independently. There is still only limited research investigating the nature of commitment profiles and their implications for employee work-related behaviours (Gellatly, Meyer & Luchak 2006; Herscovitch & Meyer 2002; Somers 2010; Wasti 2005). One area where the commitment profiles approach potentially provides new insights is the nature of normative commitment.
Abstract:
In second language classrooms, listening is gaining recognition as an active element in the processes of learning and using a second language. Currently, however, much of the teaching of listening prioritises comprehension without sufficient emphasis on the skills and strategies that enhance learners’ understanding of spoken language. This paper presents an argument for rethinking the emphasis on comprehension and advocates augmenting current teaching with an explicit focus on strategies. Drawing on the literature, the paper provides three models of strategy instruction for the teaching and development of listening skills. The models include steps for implementation that accord with their respective approaches to explicit instruction. The final section of the paper synthesises key points from the models as a guide for application in the second language classroom. The premise underpinning the paper is that the teaching of strategies can provide learners with active and explicit measures for managing and expanding their listening capacities, both in the learning and ‘real world’ use of a second language.
Abstract:
This chapter reports on a study of oracy in a first-year university Business course, with particular interest in the oracy demands for second language-using international students. The research is relevant at a time when Higher Education is characterised by the confluence of increased international enrolments, more dialogic teaching and learning, and imperatives for teamwork and collaboration. Data sources for the study included videotaped lectures and tutorials, course documents, student surveys, and an interview with the lecturer. The findings pointed to a complex, oracy-laden environment where interactive talk fulfilled high-stakes functions related to social inclusion, the co-construction of knowledge, and the accomplishment of assessment tasks. The salience of talk posed significant challenges for students negotiating these core functions in their second language. The study highlights the oracy demands in university courses and foregrounds the need for university teachers, curriculum writers and speaking test developers to recognise these demands and explicate them for the benefit of all students.