988 results for Schur complement
Abstract:
‘Stepping out into the real world of Education’ has been written to complement ‘Transitioning to the real world of Education’ (Millwater & Beutel, 2008). Both books aim to help you strategise the transition you are experiencing, from preservice teacher to professional educator, through issues you will face as an early career teacher in any specialist teaching strand - early childhood, primary, middle or secondary. ‘Transitioning to the real world of Education’ (Millwater & Beutel, 2008) addressed the particularities and practicalities of professional standards, life-long learning, teaching for diversity, values education, teacher/student relationships, teaching in a digital age and teacher burnout. This text aligns with these and explores other areas, in recognition that your early career phase is the pivotal point in how deeply you commit to being a teacher in the long term.
Abstract:
This paper uses the case study of a hybrid public-private strategic alliance as data to complement and contrast with the traditional views on knowledge transfer and learning between alliance partners. In particular, the paper explores whether the concept of competitive collaboration conceptualized by Hamel (1991) in his seminal work holds true for all forms of strategic alliances. Conceptualizing the knowledge boundaries of organisations in alliances as a ‘collaborative membrane’, we focus attention on the permeability of these boundaries rather than their actual location. In this vein, we present a case study of a major public sector organization that illustrates how these principles have allowed it to start rebuilding its internal capabilities by adopting a more collaborative stance and ensuring its knowledge boundaries remain highly porous as it moves more major projects into hybrid public-private alliance contracts.
Abstract:
Historically, asset management focused primarily on the reliability and maintainability of assets; organisations have since accepted the notion that a much larger array of processes governs the life and use of an asset. With this, asset management’s new paradigm seeks a holistic, multi-disciplinary approach to the management of physical assets. A growing number of organisations now seek to develop integrated asset management frameworks and bodies of knowledge. This research seeks to complement the existing outputs of these organisations through the development of an asset management ontology. Ontologies define a common vocabulary for both researchers and practitioners who need to share information in a chosen domain. A by-product of ontology development is the realisation of a process architecture, of which there is also no evidence in the published literature. To develop the ontology and subsequent asset management process architecture, a standard knowledge-engineering methodology is followed. This involves text analysis, definition and classification of terms, and visualisation through an appropriate tool (in this case, the Protégé application was used). The result of this research is the first attempt at developing an asset management ontology and process architecture.
Abstract:
The aim of this paper is to provide a contemporary summary of statistical and non-statistical meta-analytic procedures that have relevance to the type of experimental designs often used by sport scientists when examining differences/change in dependent measure(s) as a result of one or more independent manipulation(s). Using worked examples from studies on observational learning in the motor behaviour literature, we adopt a random effects model and give a detailed explanation of the statistical procedures for the three types of raw score difference-based analyses applicable to between-participant, within-participant, and mixed-participant designs. Major merits and concerns associated with these quantitative procedures are identified and agreed methods are reported for minimizing biased outcomes, such as those for dealing with multiple dependent measures from single studies, design variation across studies, different metrics (i.e. raw scores and difference scores), and variations in sample size. To complement the worked examples, we summarize the general considerations required when conducting and reporting a meta-analysis, including how to deal with publication bias, what information to present regarding the primary studies, and approaches for dealing with outliers. By bringing together these statistical and non-statistical meta-analytic procedures, we provide the tools required to clarify understanding of key concepts and principles.
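The random effects model the abstract refers to can be illustrated with a short numerical sketch. The following is a minimal, illustrative implementation of the standard DerSimonian-Laird random-effects pooling procedure, not the specific worked examples from the paper; the function name and inputs are assumptions for illustration.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model.

    effects: per-study effect estimates (e.g. raw score differences).
    variances: the corresponding sampling variances.
    Returns (pooled_effect, pooled_se, tau2).
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w_fixed = 1.0 / variances                      # fixed-effect (inverse-variance) weights
    mean_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    # Cochran's Q statistic and the DL estimate of between-study variance tau^2
    q = np.sum(w_fixed * (effects - mean_fixed) ** 2)
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights add tau^2 to each study's sampling variance
    w_rand = 1.0 / (variances + tau2)
    pooled = np.sum(w_rand * effects) / np.sum(w_rand)
    se = np.sqrt(1.0 / np.sum(w_rand))
    return pooled, se, tau2
```

When the studies are perfectly homogeneous, tau² collapses to zero and the pooled estimate reduces to the fixed-effect inverse-variance mean.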
Abstract:
Effective knowledge transfer can prevent the reinvention of systems and ideas as well as the repetition of errors. Doing so will save substantial time, as well as contribute to better performance of projects and project-based organisations (PBOs). Despite the importance of knowledge, PBOs face serious barriers to the effective transfer of knowledge, while their characteristics, such as the unique and innovative approaches taken during every project, mean they have much to gain from knowledge transfer. As each new project starts, there is a strong potential to reinvent the process, rather than utilise learning from previous projects. In fact, rework is one of the primary factors contributing to the construction industry's poor performance and productivity. The current literature has identified several barriers to knowledge transfer in organisational settings in general, but not specifically in PBOs. However, PBOs differ significantly from other types of organisations. PBOs operate mainly on temporary projects, where time is a crucial factor and people are more mobile than in other organisational settings. The aim of this research is to identify the key barriers that prevent effective knowledge transfer in PBOs, exclusively. Interviews with project managers and senior managers of PBOs complement the analysis of the literature and provide professional expertise. This research is crucial to gaining a better understanding of the obstacles that hinder knowledge transfer in projects. The main contribution of this research is a list of key barriers, exclusive to PBOs, that organisations and project managers need to consider to ensure effective knowledge transfer and better project management.
Abstract:
Context The School of Information Technology at QUT has recently undertaken a major restructuring of their Bachelor of Information Technology (BIT) course. The aims of this restructuring include a reduction in first year attrition and the provision of an attractive degree course that meets both student and industry expectations. Emphasis has been placed on the first semester in the context of retaining students by introducing a set of four units that complement one another and provide introductory material on technology, programming and related skills, and generic skills that will aid the students throughout their undergraduate course and in their careers. This discussion relates to one of these four first semester units, namely Building IT Systems. The aim of this unit is to create small Information Technology (IT) systems that use programming or scripting and databases, as either standalone applications or web applications. In the prior history of teaching introductory computer programming at QUT, programming has been taught as a stand-alone subject, and integration of computer applications with other systems such as databases and networks was not undertaken until students had been given a thorough grounding in those topics as well. Feedback has indicated that students do not believe that working with a database requires programming skills. In fact, the teaching of the building blocks of computer applications has been compartmentalized, with each taught in isolation from the others. The teaching of introductory computer programming has been an industry requirement of IT degree courses, as many jobs require at least some knowledge of the topic. Yet computer programming is not a skill that all students have equal capabilities of learning (Bruce et al., 2004), and this is clearly shown by the volume of publications dedicated to this topic in the literature over a broad period of time (Eckerdal & Berglund, 2005; Mayer, 1981; Winslow, 1996).
The teaching of this introductory material has been done in much the same way over the past thirty years. During the period in which introductory computer programming courses have been taught at QUT, a number of different programming languages and programming paradigms have been used, and different approaches to teaching and learning have been attempted in an effort to find the golden thread that would allow students to learn this complex topic. Unfortunately, computer programming is not a skill that can be learnt in one semester. Some basics can be learnt, but it can take many years to master (Norvig, 2001). Faculty data has typically shown a bimodal distribution of results for students undertaking introductory programming courses, with a high proportion of students receiving a high mark and a high proportion receiving a low or failing mark. This indicates that there are students who understand and excel with the introductory material, while there is another group who struggle to understand the concepts and practices required to translate a specification or problem statement into a computer program that achieves what is being requested. The consequence of a large group of students failing the introductory programming course has been a high level of attrition amongst first year students. This attrition level does not provide good continuity in student numbers in later years of the degree program, and the current approach is not seen as sustainable.
Abstract:
This paper explores models for enabling increased participation in experience-based learning in legal professional practice. Legal placements as part of “for-credit” units offer students the opportunity to develop their professional skills in practice, reflect on their learning and job performance, and take responsibility for their career development and planning. In short, work integrated learning (WIL) in law supports students in making the transition from university to practice. Despite its importance, WIL has traditionally taken place in practical legal training courses (after graduation) rather than during undergraduate law courses. Undergraduate WIL in Australian law schools has generally been limited to legal clinics, which require intensive academic supervision, partnerships with community legal organisations and government funding. This paper will propose two models of WIL for undergraduate law which may overcome many of the challenges to engaging in WIL in law (which are consistent with those identified generally by the WIL Report). The first is a virtual law placement in which students use technology to complete a real world project in a virtual workplace under the guidance of a workplace supervisor. The second enables students to complete placements in private legal firms, government legal offices, or community legal centres under the supervision of a legal practitioner. The units complement each other by a) creating and enabling placement opportunities for students who may not otherwise have been able to participate in work placement by reason of family responsibilities, financial constraints, visa restrictions, distance, etc.; and b) enabling students to capitalise on existing work experience. This paper will report on the pilot offering of the units in 2008, the evaluation of the models and the changes implemented in 2009.
It will conclude that this multi-pronged approach can be successful in creating opportunities for, and overcoming barriers to participation in experiential learning in legal professional practice.
Abstract:
Divining the Martyr is a project developed in order to achieve the Master of Arts (Research) degree. This is composed of 70% creative work displayed in an exhibition and 30% written work contained in this exegesis. The project was developed through practice-led research in order to answer the question “In what ways can creative practice synthesize and illuminate issues of martyrdom in contemporary makeover culture?” The question is answered using a postmodern framework about martyrdom as it is manifested in contemporary society. The themes analyzed throughout this exegesis relate to concepts about sainthood and makeover culture combined with actual examples of tragic cases of cosmetic procedures. The outcomes of this project fused three elements: Mexican cultural history, Mexican (Catholic) religious traditions, and cosmetic makeover surgery. The final outcomes were a series of installations integrating contemporary and traditional interdisciplinary media, such as sound, light, x-ray technology, sculpture, video and aspects of performance. These creative works complement each other in their presentation and concept, promoting an original contribution to the theme of contemporary martyrdom in makeover culture.
Abstract:
Age-related maculopathy (ARM) has remained a challenging topic with respect to its aetiology, pathomechanisms, early detection and treatment since the late 19th century when it was first described as its own entity. ARM was previously considered an inflammatory disease, a degenerative disease, a tumor and as the result of choroidal hemodynamic disturbances and ischaemia. The latter processes have been repeatedly suggested to have a key role in its development and progression. In vivo experiments under hypoxic conditions could be models for the ischaemic deficits in ARM. Recent research has also linked ARM with gene polymorphisms. It is however unclear what triggers a person's gene susceptibility. In this manuscript, a linking hypothesis between aetiological factors including ischaemia and genetics and the development of early clinicopathological changes in ARM is proposed. New clinical psychophysical and electrophysiological tests are introduced that can detect ARM at an early stage. Models of early ARM based upon hemodynamic, photoreceptor and post-receptoral deficits are described and the mechanisms by which ischaemia may be involved as a final common pathway are considered. In neovascular age-related macular degeneration (neovascular AMD), ischaemia is thought to promote release of vascular endothelial growth factor (VEGF) which induces chorioretinal neovascularisation. VEGF is critical in the maintenance of the healthy choriocapillaris. In the final section of the manuscript the documentation of the effect of new anti-VEGF treatments on retinal function in neovascular AMD is critically viewed.
Abstract:
A wide range of screening strategies have been employed to isolate antibodies and other proteins with specific attributes, including binding affinity, specificity, stability and improved expression. However, there remains no high-throughput system to screen for target-binding proteins in a mammalian, intracellular environment. Such a system would allow binding reagents to be isolated against intracellular clinical targets such as cell signalling proteins associated with tumour formation (p53, ras, cyclin E), proteins associated with neurodegenerative disorders (huntingtin, beta-amyloid precursor protein), and various proteins crucial to viral replication (e.g. HIV-1 proteins such as Tat, Rev and Vif-1), which are difficult to screen by phage, ribosome or cell-surface display. This study used the β-lactamase protein complementation assay (PCA) as the display and selection component of a system for screening a protein library in the cytoplasm of HEK 293T cells. The colicin E7 (ColE7) and Immunity protein 7 (Imm7) *Escherichia coli* proteins were used as model interaction partners for developing the system. These proteins drove effective β-lactamase complementation, resulting in a signal-to-noise ratio (9:1 – 13:1) comparable to that of other β-lactamase PCAs described in the literature. The model Imm7-ColE7 interaction was then used to validate protocols for library screening. Single positive cells that harboured the Imm7 and ColE7 binding partners were identified and isolated using flow cytometric cell sorting in combination with the fluorescent β-lactamase substrate, CCF2/AM. A single-cell PCR was then used to amplify the Imm7 coding sequence directly from each sorted cell. With the screening system validated, it was then used to screen a protein library based on the Imm7 scaffold against a proof-of-principle target.
The wild-type Imm7 sequence, as well as mutants with wild-type residues in the ColE7-binding loop, were enriched from the library after a single round of selection, which is consistent with other eukaryotic screening systems such as yeast and mammalian cell-surface display. In summary, this thesis describes a new technology for screening protein libraries in a mammalian, intracellular environment. This system has the potential to complement existing screening technologies by allowing access to intracellular proteins and expanding the range of targets available to the pharmaceutical industry.
Abstract:
Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier.
Further, the domains for the modelling of session variation were contrasted, revealing a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further benefits in performance over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcoming of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically-selected dataset, whilst using only a fraction of the data. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternate techniques for speaker verification.
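The hybrid GMM mean supervector SVM classifier mentioned above concatenates the adapted mixture means of a speaker's GMM into one fixed-length, high-dimensional vector, which a linear SVM then separates from background (impostor) supervectors. The sketch below illustrates only the supervector construction; the function name, shapes, and the weight/variance normalisation (a common KL-divergence-motivated scaling from the supervector SVM literature) are illustrative assumptions, not the thesis's exact implementation.

```python
import numpy as np

def gmm_mean_supervector(means, weights, variances):
    """Stack the adapted means of a diagonal-covariance GMM into a supervector.

    means:     (n_components, n_features) adapted mixture means
    weights:   (n_components,) mixture weights
    variances: (n_components, n_features) diagonal covariances

    Each mean is scaled by sqrt(weight) / sqrt(variance) so that the inner
    product between two supervectors approximates a distance between the
    underlying GMMs, making a linear SVM kernel meaningful.
    """
    means = np.asarray(means, dtype=float)
    weights = np.asarray(weights, dtype=float)
    variances = np.asarray(variances, dtype=float)
    scaled = np.sqrt(weights)[:, None] * means / np.sqrt(variances)
    return scaled.reshape(-1)  # length n_components * n_features
```

In a hybrid system, one such supervector per utterance becomes a training example: the target speaker's supervectors form the positive class and the background dataset supplies the negative class for SVM training.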
Abstract:
Process models are used by information professionals to convey semantics about the business operations in a real world domain intended to be supported by an information system. The understandability of these models is vital to them actually being used. After all, what is not understood cannot be acted upon. Yet until now, understandability has primarily been defined as an intrinsic quality of the models themselves. Moreover, those studies that looked at understandability from a user perspective have mainly conceptualized users through rather arbitrary sets of variables. In this paper we advance an integrative framework to understand the role of the user in the process of understanding process models. Building on cognitive psychology, goal-setting theory and multimedia learning theory, we identify three stages of learning required to realize model understanding, these being Presage, Process, and Product. We define eight relevant user characteristics in the Presage stage of learning, three knowledge construction variables in the Process stage and three potential learning outcomes in the Product stage. To illustrate the benefits of the framework, we review existing process modeling work to identify where our framework can complement and extend existing studies.
Abstract:
The aim of this paper is to contribute to the understanding of the various models used in research on the adoption and diffusion of information technology in small and medium-sized enterprises (SMEs). Starting with Rogers' diffusion theory and behavioural models, technology adoption models used in IS research are discussed. Empirical research has shown that the reasons why firms choose to adopt or not adopt technology depend on a number of factors. These factors can be categorised as owner/manager characteristics, firm characteristics and other characteristics. The existing models explaining IS diffusion and adoption by SMEs overlap and complement each other. This paper reviews the existing literature and proposes a comprehensive model which includes the whole array of variables from earlier models.
Abstract:
This study explores young people's creative practice through using Information and Communications Technologies (ICTs) - in one particular learning area - Drama. The study focuses on school-based contexts and the impact of ICT-based interventions within two drama education case studies. The first pilot study involved the use of online spaces to complement a co-curricula performance project. The second focus case was a curriculum-based project with online spaces and digital technologies being used to create a cyberdrama. Each case documents the activity systems, participant experiences and meaning making in specific institutional and technological contexts. The nature of creative practice and learning are analysed, using frameworks drawn from Vygotsky's socio-historical theory (including his work on creativity) and from activity theory. Case study analysis revealed the nature of contradictions encountered and these required an analysis of institutional constraints and the dynamics of power. Cyberdrama offers young people opportunities to explore drama through new modes and the use of ICTs can be seen as contributing different tools, spaces and communities for creative activity. To be able to engage in creative practice using ICTs requires a focus on a range of cultural tools and social practices beyond those of the purely technological. Cybernetic creative practice requires flexibility in the negotiation of tool use and subjects and a system that responds to feedback and can adapt. Classroom-based dramatic practice may allow for the negotiation of power and tool use in the development of collaborative works of the imagination. However, creative practice using ICTs in schools is typically restricted by authoritative power structures and access issues. The research identified participant engagement and meaning making emerging from different factors, with some students showing preferences for embodied creative practice in Drama that did not involve ICTs. 
The findings of the study suggest ICT-based interventions need to focus on different applications for the technology but also on embodied experience, the negotiation of power, identity and human interactions.
Abstract:
Since 2002 QUT has sponsored a range of first-year-focussed initiatives, most recently the Transitions In Project (TIP), which was designed to complement the First Year Experience Program and to be a capacity-building initiative. A primary focus of TIP was The First Year Curriculum Project: the review, development, implementation and evaluation of first year curriculum, which has culminated in the development of a “Good Practice Guide” for the management of large first year units. First year curriculum initiates staff-student relationships and provides the scaffolding for the learning experience and engagement. Good practice in first year curriculum is within the control of the institution and can be redesigned and reviewed to improve outcomes. This session will provide a context for the First Year Curriculum Project and a concise overview of the suite of resources developed that have culminated in the Good Practice Guide.