562 results for standards-based reforms


Relevance: 20.00%

Abstract:

We consider the problem of designing a surveillance system to detect a broad range of invasive species across a heterogeneous sampling frame. We present a model to detect a range of invasive invertebrates whilst addressing the challenges of multiple data sources, stratifying for differential risk, managing labour costs and providing sufficient power of detection. We determine the number of detection devices required and their allocation across the landscape within limited resource constraints. The resulting plan will lead to reduced financial and ecological costs and an optimal surveillance system.
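
A minimal sketch of the power-of-detection arithmetic such a design rests on, assuming independent devices that each detect an incursion with probability p; the function name, probabilities and strata below are illustrative, not from the abstract:

```python
import math

def devices_needed(p_device: float, target_power: float) -> int:
    """Smallest n with detection power 1 - (1 - p_device)**n >= target_power."""
    return math.ceil(math.log(1.0 - target_power) / math.log(1.0 - p_device))

# Stratified example: higher-risk strata are assigned a higher target power.
strata = {"port": (0.10, 0.95), "urban": (0.05, 0.80), "rural": (0.02, 0.50)}
for name, (p, power) in strata.items():
    print(f"{name}: {devices_needed(p, power)} devices")
```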

Relevance: 20.00%

Abstract:

The automatic extraction of road features from remotely sensed images has been a topic of great interest within the photogrammetric and remote sensing communities for over three decades. Although various techniques have been reported in the literature, it remains challenging to extract road details efficiently as image resolution increases, along with the requirement for accurate and up-to-date road data. In this paper, we focus on the automatic detection of road lane markings, which are crucial for many applications, including lane-level navigation and lane departure warning. The approach consists of four steps: i) data preprocessing, ii) image segmentation and road surface detection, iii) road lane marking extraction based on the generated road surface, and iv) testing and system evaluation. The proposed approach utilises the unsupervised ISODATA image segmentation algorithm, which segments the image into vegetation regions and road surface based only on the Cb component of the YCbCr color space. A shadow detection method based on the YCbCr color space is also employed to detect and recover road surface obscured by shadows cast by vehicles and trees. Finally, the lane marking features are detected from the road surface using histogram clustering. Experiments applying the proposed method to an aerial imagery dataset of Gympie, Queensland demonstrate the efficiency of the approach.
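
A hedged sketch of steps ii) and iii), assuming an RGB aerial image: the Cb channel is computed with the standard BT.601 transform, thresholded with ISODATA (here via scikit-image), and bright pixels inside the road mask stand in for the histogram-clustering step; the function names and the 98th-percentile cut-off are assumptions, not the paper's values:

```python
import numpy as np
from skimage.filters import threshold_isodata

def cb_channel(rgb: np.ndarray) -> np.ndarray:
    """Cb component of the YCbCr transform (ITU-R BT.601), rgb in [0, 255]."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b

def segment_road(rgb: np.ndarray) -> np.ndarray:
    """Boolean road-surface mask via ISODATA thresholding of the Cb channel."""
    cb = cb_channel(rgb)
    t = threshold_isodata(cb)
    return cb < t  # which side of t is 'road' is scene-dependent; assumed here

def lane_markings(rgb: np.ndarray, road: np.ndarray) -> np.ndarray:
    """Bright road pixels, a simple stand-in for the histogram clustering step."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return (y > np.percentile(y[road], 98)) & road
```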

Relevance: 20.00%

Abstract:

Standards are designed to promote the interoperability of products and systems by enabling different parties to develop technologies that can be used together. There is an increasing expectation in many technical communities, including open source communities, that standards will be ‘open’. However, standards are subject to legal rights which impact not only their development but also their implementation. Of central importance are intellectual property rights: technical standards may incorporate patented technologies, while the specification documents of standards are protected by copyright. This article provides an overview of the processes by which standards are developed and considers the concept of ‘interoperability’, the meaning of the term ‘open standard’ and how open standards contribute to interoperability. It explains how intellectual property rights operate in relation to standards and how they can be managed to create standards that are open not only during their development but also in implementation.

Relevance: 20.00%

Abstract:

Texture-based techniques for the visualisation of unsteady vector fields have been applied to the outputs of a finite volume model for variably saturated groundwater flow through porous media. This model has been developed by staff in the School of Mathematical Sciences, QUT, for the study of salt water intrusion into coastal aquifers. This presentation discusses the implementation and effectiveness of the Image-Based Flow Visualization (IBFV) algorithm in the context of visualising the groundwater simulation outputs.
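
A minimal numpy sketch of the IBFV idea for a steady 2-D velocity field sampled on the image grid: each frame advects the current texture backward along the flow and blends in a little noise. Parameter values are illustrative, not from the presentation:

```python
import numpy as np

def ibfv_frames(vx, vy, steps=200, alpha=0.1, dt=1.0, seed=0):
    """Yield IBFV textures for a steady 2-D field (vx, vy) on an HxW grid."""
    rng = np.random.default_rng(seed)
    h, w = vx.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Backward advection: each pixel looks up where the flow carried it from.
    src_x = np.clip(np.rint(xx - dt * vx).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(yy - dt * vy).astype(int), 0, h - 1)
    img = rng.random((h, w))
    for _ in range(steps):
        img = (1.0 - alpha) * img[src_y, src_x] + alpha * rng.random((h, w))
        yield img
```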

Relevance: 20.00%

Abstract:

The establishment of corporate objectives regarding economic, environmental, social, and ethical responsibilities, to inform business practice, has been gaining credibility in the business sector since the early 1990s. This is witnessed through (i) the formation of international forums for sustainable and accountable development, (ii) the emergence of standards, systems, and frameworks to provide common ground for regulatory and corporate dialogue, and (iii) the significant quantum of relevant popular and academic literature in a diverse range of disciplines. How, then, has this move towards greater corporate responsibility become evident in the provision of major urban infrastructure projects? The gap identified, in both academic literature and industry practice, is a structured and auditable link between corporate intent and project outcomes. Limited literature has been discovered which links corporate responsibility, project performance indicators (or critical success factors), and major infrastructure provision. This search revealed that a comprehensive mapping framework, from an organisation’s corporate objectives through to intended, anticipated and actual outcomes and impacts, has not yet been developed for the delivery of such projects. The research problem thus explored is ‘the need to better identify, map and account for the outcomes, impacts and risks associated with economic, environmental, social and ethical outcomes and impacts which arise from major economic infrastructure projects, both now, and into the future’. The methodology used to undertake this research is based on Checkland’s soft systems methodology, engaging in action research on three collaborative case studies. A key outcome of this research is a value-mapping framework applicable to Australian public sector agencies. This is a decision-making methodology which will enable project teams responsible for delivering major projects to better identify and align project objectives and impacts with stated corporate objectives.

Relevance: 20.00%

Abstract:

This paper reports on the research and development of an ICT tool to facilitate the learning of ratio and fractions by adult prisoners. The design of the ICT tool was informed by a semiotic framework for mathematical meaning-making. The tool thus employed multiple semiotic resources, including topological, typological, and social-actional resources. The results showed that an individual semiotic resource could represent only part of a mathematical concept, while at the same time it might signify something else and so create a misconception. When multiple semiotic resources were utilised, the mathematical ideas could be better learnt.

Relevance: 20.00%

Abstract:

Over the past decade, plants have been used as expression hosts for the production of pharmaceutically important and commercially valuable proteins. Plants offer many advantages over other expression systems, such as lower production costs, rapid scale-up of production, post-translational modifications similar to those of animals, and a low likelihood of contamination with animal pathogens, microbial toxins or oncogenic sequences. However, improving recombinant protein yield remains one of the greatest challenges to molecular farming. In-Plant Activation (InPAct) is a newly developed technology that offers activatable and high-level expression of heterologous proteins in plants. InPAct vectors contain the geminivirus cis elements essential for rolling circle replication (RCR) and are arranged such that the gene of interest is only expressed in the presence of the cognate viral replication-associated protein (Rep). The expression of Rep in planta may be controlled by a tissue-specific, developmentally regulated or chemically inducible promoter, such that heterologous protein accumulation can be spatially and temporally controlled. One of the challenges for the successful exploitation of InPAct technology is the control of Rep expression, as even very low levels of this protein can reduce transformation efficiency, cause abnormal phenotypes and prematurely activate the InPAct vector in regenerated plants. Tight regulation of transgene expression is also essential when expressing cytotoxic products. Unfortunately, many tissue-specific and inducible promoters are unsuitable for controlling expression of Rep due to low basal activity in the absence of inducer or in tissues other than the target tissue. This PhD project aimed to control Rep activity through the production of single-chain variable fragments (scFvs) specific to motif III of the Tobacco yellow dwarf virus (TbYDV) Rep. Given the important role played by the conserved motif III in RCR, it was postulated that such scFvs could be used to neutralise the activity of the small amount of Rep expressed from a “leaky” inducible promoter, thus preventing activation of the TbYDV-based InPAct vector until intentional induction. Such scFvs could also offer the potential to confer partial or complete resistance to TbYDV, and possibly heterologous viruses, as motif III is conserved between geminiviruses. Studies were first undertaken to determine the levels of TbYDV Rep and TbYDV replication-associated protein A (RepA) required for optimal transgene expression from a TbYDV-based InPAct vector. Transient assays in a non-regenerable Nicotiana tabacum (NT-1) cell line were undertaken using a TbYDV-based InPAct vector containing the uidA reporter gene (encoding GUS) in combination with TbYDV Rep and RepA under the control of promoters with high (CaMV 35S) or low (Banana bunchy top virus DNA-R, BT1) activity. The replication enhancer protein of Tomato leaf curl begomovirus (ToLCV), REn, was also used in some co-bombardment experiments to examine whether RepA could be substituted by a replication enhancer from another geminivirus genus. GUS expression was assessed both quantitatively and qualitatively by fluorometric and histochemical assays, respectively. GUS expression from the TbYDV-based InPAct vector was greater when Rep was expressed at low levels (BT1 promoter) rather than high levels (35S promoter). GUS expression was further enhanced when Rep and RepA were co-bombarded at a low ratio of Rep to RepA.
Substituting TbYDV RepA with ToLCV REn also enhanced GUS expression but, more importantly, the highest GUS expression was observed when cells were co-transformed with expression vectors directing low levels of Rep and high levels of RepA, irrespective of the level of REn. In this case, GUS expression was approximately 74-fold higher than that from a non-replicating vector. The use of different terminators, namely the CaMV 35S and Nos terminators, in InPAct vectors was found to influence GUS expression. In the presence of Rep, GUS expression was greater using pInPActGUS-Nos rather than pInPActGUS-35S. The only instance of GUS expression being greater from vectors containing the 35S terminator was when comparing expression from cells transformed with Rep-, RepA- and REn-expressing vectors and either of the non-replicating vectors, p35SGS-Nos or p35SGS-35S. This difference was most likely caused by an interaction of the viral replication proteins with each other and the terminators. These results indicated that (i) the level of replication-associated proteins is critical to high transgene expression, (ii) the choice of terminator within the InPAct vector may affect expression levels, and (iii) very low levels of Rep can activate InPAct vectors, hence controlling Rep activity is critical. Prior to generating recombinant scFvs, a recombinant TbYDV Rep was produced in E. coli to act as a control to enable screening for Rep-specific antibodies. A bacterial expression vector was constructed to express recombinant TbYDV Rep with an N-terminal His-tag (N-His-Rep). Despite investigating several purification techniques, including Ni-NTA, anion exchange, hydrophobic interaction and size exclusion chromatography, N-His-Rep could only be partially purified using a Ni-NTA column under native conditions. Although it was not certain that this recombinant N-His-Rep had the same conformation as the native TbYDV Rep and was functional, results from an electrophoretic mobility shift assay (EMSA) showed that N-His-Rep was able to interact with the TbYDV LIR and was, therefore, possibly functional. Two hybridoma cell lines from mice, immunised with a synthetic peptide containing the TbYDV Rep motif III amino acid sequence, were generated by GenScript (USA). Monoclonal antibodies secreted by the two hybridoma cell lines were first screened against denatured N-His-Rep by Western analysis. After demonstrating their ability to bind N-His-Rep, two scFvs (scFv1 and scFv2) were generated using a PCR-based approach. Whereas the variable heavy chain (VH) from both cell lines could be amplified, only the variable light chain (VL) from cell line 1 was amplified. As a result, scFv1 contained the VH and VL from cell line 1, whereas scFv2 contained the VH from cell line 2 and the VL from cell line 1. Both scFvs were first expressed in E. coli in order to evaluate their affinity for the recombinant TbYDV N-His-Rep. Preliminary results demonstrated that both scFvs were able to bind denatured N-His-Rep. However, EMSAs revealed that only scFv2 was able to bind native N-His-Rep and prevent it from interacting with the TbYDV LIR. Each scFv was cloned into plant expression vectors and co-bombarded into NT-1 cells with the TbYDV-based InPAct GUS expression vector and pBT1-Rep to examine whether the scFvs could prevent Rep from mediating RCR. Although it was expected that the addition of the scFvs would result in decreased GUS expression, GUS expression was found to increase slightly.
This increase was even more pronounced when the scFvs were targeted to the cell nucleus by the inclusion of the Simian virus 40 (SV40) large T antigen nuclear localisation signal (NLS). It was postulated that the scFvs were binding a proportion of Rep, leaving a small amount available to mediate RCR. The outcomes of this project provide evidence that very high levels of recombinant protein can theoretically be expressed using InPAct vectors with judicious selection and control of viral replication proteins. However, whether the scFvs generated in this project have sufficient affinity for TbYDV Rep to prevent its activity in a stably transformed plant remains unknown. Other scFvs with different combinations of VH and VL might have greater affinity for TbYDV Rep. Such scFvs, when expressed at high levels in planta, might also confer resistance to TbYDV and possibly heterologous geminiviruses.

Relevance: 20.00%

Abstract:

This paper aims to develop an effective numerical simulation technique for the dynamic deflection analysis of nanotube-based nanoswitches. The nanoswitch is simplified to a continuum structure, and key material parameters are extracted from typical molecular dynamics (MD) simulations. An advanced local meshless formulation is applied to obtain the discretized dynamic equations for the numerical solution. The developed numerical technique is first validated by static deflection analyses of nanoswitches, and then the fundamental dynamic properties of nanoswitches are analysed. A parametric comparison with results in the literature and from experiments shows that the developed modelling approach is accurate, efficient and effective.
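
The continuum simplification invites a closed-form sanity check against the classical Euler-Bernoulli cantilever, a much simpler stand-in for the paper's meshless formulation; every parameter value below is illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative values only: a (10,10) carbon nanotube modelled as a hollow beam.
E = 1.0e12                      # effective Young's modulus, Pa (~1 TPa often cited)
d_out, d_in = 1.36e-9, 1.02e-9  # outer/inner diameters, m (wall ~0.34 nm assumed)
L = 50e-9                       # beam length, m
q = 1e-3                        # uniform load per unit length, N/m (assumed)

I = np.pi * (d_out**4 - d_in**4) / 64.0  # second moment of area, hollow circle

x = np.linspace(0.0, L, 101)
# Euler-Bernoulli cantilever, uniform load: w(x) = q x^2 (6L^2 - 4Lx + x^2) / (24EI)
w = q * x**2 * (6 * L**2 - 4 * L * x + x**2) / (24 * E * I)
print(f"tip deflection: {w[-1]:.3e} m  (closed form qL^4/8EI = {q*L**4/(8*E*I):.3e} m)")
```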

Relevance: 20.00%

Abstract:

Modern machines are complex and often required to operate for long hours to achieve production targets. The ability to detect symptoms of failure, and hence forecast the remaining useful life of the machine, is vital to preventing catastrophic failures. This is essential to reducing maintenance costs, operational downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models that attempt to forecast machinery health based on either condition data or reliability data. In practice, failure condition trending data are seldom kept by industry, and data that end with a suspension are sometimes treated as failure data. This paper presents a novel approach to incorporating historical failure data and suspended condition trending data in a prognostic model. The proposed model consists of a feed-forward neural network (FFNN) whose training targets are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density function (PDF) estimator. The output survival probabilities collectively form an estimated survival curve. The viability of the model was tested using a set of industry vibration data.
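
A hedged sketch of the two ingredients named above: a product-limit (Kaplan-Meier) survival estimate over failure and suspended histories, and a small feed-forward network trained to reproduce it. The data, layer sizes and the use of scikit-learn are assumptions for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def kaplan_meier(times, failed):
    """Survival curve S(t) from event times; failed=False marks a suspension."""
    order = np.argsort(times)
    t, f = np.asarray(times)[order], np.asarray(failed)[order]
    at_risk, s, curve = len(t), 1.0, []
    for ti, fi in zip(t, f):
        if fi:                          # only failures reduce S(t)
            s *= (at_risk - 1) / at_risk
        at_risk -= 1                    # suspensions still leave the risk set
        curve.append((ti, s))
    return np.array(curve)

# Hypothetical run hours; False marks a suspended (censored) history.
times  = [120, 150, 200, 210, 260, 300, 340, 400]
failed = [True, False, True, True, False, True, True, False]
curve = kaplan_meier(times, failed)

# FFNN trained to map asset age -> survival probability, echoing the model idea.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(curve[:, :1], curve[:, 1])
print(net.predict([[250.0]]))  # estimated survival probability at 250 h
```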

Relevance: 20.00%

Abstract:

Motor vehicles are a major source of gaseous and particulate matter pollution in urban areas, particularly of ultrafine particles (diameters < 0.1 µm). Exposure to particulate matter has been found to be associated with serious health effects, including respiratory and cardiovascular disease, and mortality. Particle emissions generated by motor vehicles span a very broad size range (from around 0.003 to 10 µm) and are measured as different subsets of particle mass concentration or particle number count. However, there are scientific challenges in analysing and interpreting the large data sets on motor vehicle emission factors, and no understanding is available of the application of different particle metrics as a basis for air quality regulation. To date, a comprehensive inventory covering the broad size range of particles emitted by motor vehicles, and which includes particle number, does not exist anywhere in the world. This thesis covers research related to four important and interrelated aspects of particulate matter generated by motor vehicle fleets: the derivation of suitable particle emission factors for use in transport modelling and health impact assessments; the quantification of motor vehicle particle emission inventories; the investigation of modality within particle size distributions as a potential basis for air quality regulation; and the review and synthesis of current knowledge on ultrafine particles as it relates to motor vehicles, together with the application of these aspects to the quantification, control and management of motor vehicle particle emissions. In order to quantify emissions in terms of a comprehensive inventory covering the full size range of particles emitted by motor vehicle fleets, it was necessary to derive a suitable set of particle emission factors for different vehicle and road type combinations for particle number, particle volume, PM1, PM2.5 and PM10 (mass concentrations of particles with aerodynamic diameters < 1 µm, < 2.5 µm and < 10 µm, respectively). The very large data set of emission factors analysed in this study was sourced from measurement studies conducted in developed countries, and hence the derived set of emission factors is suitable for preparing inventories in other urban regions of the developed world. These emission factors are particularly useful for regions that lack the measurement data to derive emission factors, or where experimental data are available but of insufficient scope. The comprehensive particle emissions inventory presented in this thesis is the first published inventory of tailpipe particle emissions prepared for a motor vehicle fleet to quantify particle emissions covering the full size range of particles emitted by vehicles, based on measurement data. The inventory quantified particle emissions measured in terms of particle number and different particle mass size fractions. It was developed for the urban South-East Queensland fleet in Australia, and included testing the particle emission implications of future scenarios for different passenger and freight travel demand. The thesis also presents evidence of the usefulness of examining modality within particle size distributions as a basis for developing air quality regulations, and finds evidence to support the relevance of introducing a new PM1 mass ambient air quality standard for the majority of environments worldwide.
The study found that a combination of PM1 and PM10 standards is likely to be a more discerning and suitable set of ambient air quality standards for controlling particles emitted from combustion and mechanically generated sources, such as motor vehicles, than the current mass standards of PM2.5 and PM10. The study also reviewed and synthesised existing knowledge on ultrafine particles, with a specific focus on those originating from motor vehicles. It found that motor vehicles are significant contributors to both air pollution and ultrafine particles in urban areas, and that a standardised measurement procedure is not currently available for ultrafine particles. The review found that discrepancies exist between the outcomes of instrumentation used to measure ultrafine particles; that few data are available on ultrafine particle chemistry and composition, long-term monitoring, or the characterisation of their spatial and temporal distribution in urban areas; and that no inventories for particle number are available for motor vehicle fleets. This knowledge is critical for epidemiological studies and exposure-response assessment. Conclusions from this review included the recommendation that ultrafine particles in populated urban areas be considered a likely target for future air quality regulation based on particle number, due to their potential impacts on the environment. The research in this PhD thesis successfully integrated the elements needed to quantify and manage motor vehicle fleet emissions, and its novelty relates to combining expertise from two distinct disciplines: aerosol science and transport modelling. The new knowledge and concepts developed in this PhD research provide previously unavailable data and methods which can be used to develop comprehensive, size-resolved inventories of motor vehicle particle emissions, and air quality regulations to control particle emissions to protect the health and well-being of current and future generations.
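
The inventory arithmetic reduces to summing emission factor times vehicle-kilometres travelled over vehicle and road classes; a minimal sketch with invented numbers, not the thesis's factors:

```python
# Total emissions = sum over classes of emission factor x vehicle-km travelled.
# All numbers below are illustrative placeholders, not from the thesis.
ef_particle_number = {            # particles per km, by (vehicle, road) class
    ("passenger", "urban"): 2.0e14,
    ("passenger", "highway"): 5.0e14,
    ("freight", "urban"): 2.0e15,
    ("freight", "highway"): 4.0e15,
}
vkt = {                           # annual vehicle-kilometres travelled
    ("passenger", "urban"): 1.2e10,
    ("passenger", "highway"): 6.0e9,
    ("freight", "urban"): 8.0e8,
    ("freight", "highway"): 1.5e9,
}
total = sum(ef_particle_number[k] * vkt[k] for k in ef_particle_number)
print(f"fleet particle number emissions: {total:.2e} particles/year")
```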

Relevance: 20.00%

Abstract:

As a consequence of the increased incidence of collaborative arrangements between firms, the competitive environment characterising many industries has undergone profound change. It is suggested that rivalry is not necessarily enacted by individual firms according to the traditional mechanisms of direct confrontation in factor and product markets, but rather as collaborative orchestration between a number of participants or network members. Strategic networks are recognised as sets of firms within an industry that exhibit denser strategic linkages among themselves than with other firms in the same industry. On this basis, strategic networks are determined according to evidence of strategic alliances between the firms comprising an industry, so that a single strategic network represents a group of firms closely linked by collaborative ties. The collective outcome of these strategic relationships suggests that the collaborative benefits attributed to interorganisational relationships require closer examination with respect to their propensity to influence rivalry in intra-industry environments. Derived largely from the social sciences, network theory allows for the micro and macro examination of the opportunities and constraints inherent in the structure of relationships in strategic networks, establishing a relational approach through which the conduct and performance of firms can be more fully understood. Research to date has yet to empirically investigate the relationship between strategic networks and rivalry. The limited research that has utilised a network rationale to investigate competitive patterns in contemporary industry environments has been characterised by a failure to measure rivalry directly. Further, this prior research has typically been embedded in industry settings dominated by technological or regulatory imperatives, such as the microprocessor and airline industries. These industries, due to the presence of such imperatives, are arguably more inclined to support the realisation of network rivalry, through subscription to prescribed technological standards (e.g., the microprocessor industry) or by being bound by regulatory constraints dictating operation within particular market segments (the airline industry). To counter these weaknesses, the proposition guiding this research, 'Are patterns of rivalry predicted by strategic network membership?', is examined in the United States Light Vehicles Industry, an industry not dominated by technological or regulatory imperatives. Further, rivalry is directly measured and utilised in the research, distinguishing this investigation from prior research efforts. The timeframe of investigation is 1993 to 1999, with all research data derived from secondary sources. Strategic networks were defined within the United States Light Vehicles Industry based on evidence of horizontal strategic relationships between the firms comprising the industry. The measure of rivalry used to directly ascertain the competitive patterns of industry participants was derived from the traditional Herfindahl Index, modified to account for patterns of rivalry observed at the market segment level. Statistical analyses of the strategic network and rivalry constructs found little evidence to support the contention of network rivalry; indeed, greater levels of rivalry were observed between firms comprising the same strategic network than between firms participating in opposing network structures.
Based on these results, patterns of rivalry evidenced in the United States Light Vehicles Industry over the period 1993 to 1999 were not found to be predicted by strategic network membership. The findings generated by this research are in contrast to current theorising in the strategic network and rivalry realm, and in this respect they are surprising. The relevance of industry type, in conjunction with prevailing network methodology, provides the basis upon which these findings are contemplated. Overall, this study raises some important questions about the relevance of the network rivalry rationale, establishing a fruitful avenue for further research.
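
For reference, the unmodified Herfindahl Index that the rivalry measure adapts is the sum of squared market shares; a minimal sketch computed per market segment, with invented shares:

```python
# HHI = sum of squared market shares; closer to 1 means more concentrated.
# Segment names and shares are invented for illustration.
segments = {
    "light_truck": [0.34, 0.28, 0.18, 0.12, 0.08],
    "passenger_car": [0.22, 0.20, 0.18, 0.15, 0.13, 0.12],
}
for name, shares in segments.items():
    assert abs(sum(shares) - 1.0) < 1e-9   # shares must cover the segment
    hhi = sum(s * s for s in shares)
    print(f"{name}: HHI = {hhi:.3f}")
```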

Relevance: 20.00%

Abstract:

These National Guidelines and Case Studies for Digital Modelling are the outcomes from one of a number of Building Information Modelling (BIM)-related projects undertaken by the CRC for Construction Innovation. Since the CRC opened its doors in 2001, the industry has seen a rapid increase in interest in BIM, and widening adoption. These guidelines and case studies are thus very timely, as the industry moves to model-based working and starts to share models in a new context called integrated practice. Governments, both federal and state, and in New Zealand, are starting to outline the role they might take, so that in contrast to the adoption of 2D CAD in the early 90s, we ensure that a national, industry-wide benefit results from this new paradigm of working. Section 1 of the guidelines gives an overview of BIM: how it affects our current mode of working, and what we need to do to move to fully collaborative model-based facility development. The role of open standards such as IFC is described as a mechanism to support new processes and to make the extensive design and construction information available to asset operators and managers. Digital collaboration modes, types of models, levels of detail, object properties and model management complete this section. It will be relevant for owners, managers and project leaders as well as direct users of BIM. Section 2 provides recommendations and guides for key areas of model creation and development, and the move to simulation and performance measurement. These are the more practical parts of the guidelines, developed for design professionals, BIM managers, technical staff and ‘in the field’ workers. The guidelines are supported by six case studies, including a summary of lessons learnt about implementing BIM in Australian building projects. A key aspect of these publications is the identification of a number of important industry actions: the need for BIM-compatible product information and a national context for classifying product data; the need for an industry agreement and setting process for process definition; and finally, the need to ensure a national standard for sharing data between all of the participants in the facility-development process.


Relevance: 20.00%

Abstract:

The researcher’s professional role as an Education Officer was the impetus for this study. Designing and implementing professional development activities is a significant component of the researcher’s position description, and as a result of reflection and feedback from participants and colleagues, the creation of a more effective model of professional development became the focus for this study. Few studies have examined all three links between the purposes of professional development, that is: increasing teacher knowledge, improving teacher practice, and improving student outcomes. This study is significant in that it investigates the nature of the growth of teachers who participated in a model of professional development based upon the principles of Lesson Study. The research provides qualitative and empirical data to establish some links between teacher knowledge, teacher practice, and student learning outcomes. Teacher knowledge in this study refers to mathematics content knowledge as well as pedagogical content knowledge. The outcomes for students include achievement outcomes, attitudinal outcomes, and behavioural outcomes. As the study was conducted at one school site, existence-proof research was the focus of the methodology and data collection. Conducted over the 2007 school year, with five teacher-participants and approximately 160 students from Year Levels 6 to 9, the Lesson Study-principled model of professional development provided the teacher-participants with on-site, ongoing, and reflective learning based on their classroom environment. The focus area for the professional development was strategising the engagement with and solution of worded mathematics problems. A design experiment was used to develop the professional development as an intervention in prevailing teacher practice, for which data were collected prior to and after the period of intervention. A model of teacher change was developed as an underpinning framework for the study, and was useful in making decisions about data collection and analyses. Data sources consisted of questionnaires, pre-tests and post-tests, interviews, and researcher observations and field notes. The data clearly showed that content knowledge and pedagogical content knowledge increased among the teacher-participants, that teacher practice changed in a positive manner, and that a majority of students demonstrated improved learning outcomes. The positive changes to teacher practice are described in this study as the demonstrated use of mixed pedagogical practices, rather than a polarisation to either traditional or contemporary pedagogical practices. The improvement in student learning outcomes was most significant as improved achievement outcomes, as indicated by the comparison of pre-test and post-test scores. The effectiveness of the Lesson Study-principled model of professional development used in this study was evaluated using Guskey’s (2005) Five Levels of Professional Development Evaluation.

Relevance: 20.00%

Abstract:

The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within the long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were constructed using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytics from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed.
Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful in principle but useless in practice. While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that obfuscated the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) being sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others. These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility: a dispositional construct that comprises personal innovativeness, cognitive playfulness and a learning goals orientation. The logic, then, is ‘both and’ rather than ‘either or’ for these individuals, with a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for the low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one that is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at the one time, be digital kids and analogue students.
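
A minimal sketch of the kind of CART model used in the quantitative phase, via scikit-learn's decision tree; the predictor names echo the reported constructs, but the data and the peer-support-dominated rule are invented for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 481  # survey sample size reported in the abstract

# Invented 1-5 Likert-style scores standing in for the measured constructs.
cols = ["peer_support", "ease_of_use", "usefulness", "learning_goals"]
X = rng.integers(1, 6, size=(n, len(cols))).astype(float)

# Invented labelling rule: peer support dominates, echoing the reported finding.
y = ((X[:, 0] >= 4) & (X[:, 2] >= 3)).astype(int)  # 1 = frequent SMC user

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=cols))  # splits should surface peer_support first
```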