228 results for Star complement
Abstract:
Historically, asset management focused primarily on the reliability and maintainability of assets; organisations have since accepted that a much larger array of processes governs the life and use of an asset. With this, asset management's new paradigm seeks a holistic, multi-disciplinary approach to the management of physical assets. A growing number of organisations now seek to develop integrated asset management frameworks and bodies of knowledge. This research seeks to complement the existing outputs of these organisations through the development of an asset management ontology. Ontologies define a common vocabulary for both researchers and practitioners who need to share information in a chosen domain. A by-product of ontology development is the realisation of a process architecture, for which there is also no evidence in the published literature. To develop the ontology and the subsequent asset management process architecture, a standard knowledge-engineering methodology is followed. This involves text analysis, the definition and classification of terms, and visualisation through an appropriate tool (in this case, the Protégé application). The result of this research is the first attempt at developing an asset management ontology and process architecture.
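The classification step described above can be illustrated programmatically. The following is a minimal, hypothetical sketch of how classified terms might be encoded as an OWL ontology, using the owlready2 Python library in place of the Protégé interface; the class and property names are illustrative assumptions, not the ontology developed in this research.

```python
# Hedged sketch: encode a few illustrative asset management terms as an OWL
# ontology using owlready2 (a programmatic stand-in for the Protégé tool).
from owlready2 import Thing, ObjectProperty, get_ontology

onto = get_ontology("http://example.org/asset-management.owl")  # hypothetical IRI

with onto:
    class Asset(Thing): pass                        # a managed physical asset
    class AssetManagementProcess(Thing): pass       # a life-cycle process
    class MaintenanceProcess(AssetManagementProcess): pass
    class AcquisitionProcess(AssetManagementProcess): pass

    class governs(ObjectProperty):                  # relates a process to the asset it acts on
        domain = [AssetManagementProcess]
        range = [Asset]

onto.save(file="asset-management.owl", format="rdfxml")
```

A file saved this way can be opened and visualised in Protégé, mirroring the classification and visualisation steps described in the abstract.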
Abstract:
The aim of this paper is to provide a contemporary summary of statistical and non-statistical meta-analytic procedures that have relevance to the type of experimental designs often used by sport scientists when examining differences/change in dependent measure(s) as a result of one or more independent manipulation(s). Using worked examples from studies on observational learning in the motor behaviour literature, we adopt a random effects model and give a detailed explanation of the statistical procedures for the three types of raw score difference-based analyses applicable to between-participant, within-participant, and mixed-participant designs. Major merits and concerns associated with these quantitative procedures are identified and agreed methods are reported for minimizing biased outcomes, such as those for dealing with multiple dependent measures from single studies, design variation across studies, different metrics (i.e. raw scores and difference scores), and variations in sample size. To complement the worked examples, we summarize the general considerations required when conducting and reporting a meta-analysis, including how to deal with publication bias, what information to present regarding the primary studies, and approaches for dealing with outliers. By bringing together these statistical and non-statistical meta-analytic procedures, we provide the tools required to clarify understanding of key concepts and principles.
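To make the quantitative core of these procedures concrete, the following is a minimal sketch (not the authors' code) of a random-effects meta-analysis of standardized mean differences for a between-participant design; the summary statistics are hypothetical and the DerSimonian-Laird estimator is assumed for the between-study variance.

```python
# Hedged sketch: Hedges' g per study plus DerSimonian-Laird random-effects pooling.
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) and its sampling variance."""
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)            # small-sample bias correction
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate, SE and tau^2."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w_fixed = 1 / variances
    q = np.sum(w_fixed * (effects - np.average(effects, weights=w_fixed))**2)
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w = 1 / (variances + tau2)
    pooled = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    return pooled, se, tau2

# Hypothetical studies: (mean_obs, sd_obs, n_obs, mean_ctrl, sd_ctrl, n_ctrl)
studies = [(12.1, 3.0, 20, 10.4, 3.2, 20), (15.0, 4.1, 15, 12.2, 3.8, 16)]
gs, vs = zip(*(hedges_g(*s) for s in studies))
pooled, se, tau2 = random_effects(gs, vs)
print(f"pooled g = {pooled:.2f} (95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f}), tau^2 = {tau2:.3f}")
```

Within-participant and mixed designs differ mainly in how the effect size and its sampling variance are computed from the raw or difference scores; the random-effects pooling step remains the same.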
Abstract:
Effective knowledge transfer can prevent the reinvention of systems and ideas as well as the repetition of errors. Doing so saves substantial time and contributes to better performance of projects and project-based organisations (PBOs). Despite the importance of knowledge, PBOs face serious barriers to its effective transfer, even though their characteristics, such as the unique and innovative approaches taken during every project, mean they have much to gain from knowledge transfer. As each new project starts, there is a strong potential to reinvent the process rather than utilise learning from previous projects. In fact, rework is one of the primary factors contributing to the construction industry's poor performance and productivity. The current literature has identified several barriers to knowledge transfer in organisational settings in general, but not specifically in PBOs. However, PBOs differ significantly from other types of organisations: they operate mainly through temporary projects, where time is a crucial factor and people are more mobile than in other organisational settings. The aim of this research is to identify the key barriers that prevent effective knowledge transfer specifically in PBOs. Interviews with project managers and senior managers of PBOs complement the analysis of the literature and provide professional expertise. This research is crucial to gaining a better understanding of the obstacles that hinder knowledge transfer in projects. Its main contribution is a list of key barriers, specific to PBOs, that organisations and project managers need to consider to ensure effective knowledge transfer and better project management.
Abstract:
Context: The School of Information Technology at QUT has recently undertaken a major restructuring of its Bachelor of Information Technology (BIT) course. The aims of this restructuring include reducing first-year attrition and providing an attractive degree course that meets both student and industry expectations. Emphasis has been placed on the first semester, in the context of retaining students, by introducing a set of four units that complement one another and provide introductory material on technology, programming and related skills, and generic skills that will aid students throughout their undergraduate course and in their careers. This discussion relates to one of these four first-semester units, namely Building IT Systems. The aim of this unit is to create small Information Technology (IT) systems that use programming or scripting and databases, as either standalone applications or web applications. In the prior history of teaching introductory computer programming at QUT, programming was taught as a stand-alone subject, and integration of computer applications with other systems such as databases and networks was not undertaken until students had been given a thorough grounding in those topics as well. Feedback has indicated that students do not believe that working with a database requires programming skills. In fact, the teaching of the building blocks of computer applications has been compartmentalized, with each taught in isolation from the others. The teaching of introductory computer programming has been an industry requirement of IT degree courses, as many jobs require at least some knowledge of the topic. Yet computer programming is not a skill that all students have equal capabilities of learning (Bruce et al., 2004), as is clearly shown by the volume of publications dedicated to this topic in the literature over a broad period of time (Eckerdal & Berglund, 2005; Mayer, 1981; Winslow, 1996). The teaching of this introductory material has been done in much the same way over the past thirty years. During the period that introductory computer programming courses have been taught at QUT, a number of different programming languages and programming paradigms have been used, and different approaches to teaching and learning have been attempted in an effort to find the golden thread that would allow students to learn this complex topic. Unfortunately, computer programming is not a skill that can be learnt in one semester: some basics can be learnt, but it can take many years to master (Norvig, 2001). Faculty data have typically shown a bimodal distribution of results for students undertaking introductory programming courses, with a high proportion of students receiving a high mark and a high proportion receiving a low or failing mark. This indicates that there are students who understand and excel with the introductory material, while another group struggle to understand the concepts and practices required to translate a specification or problem statement into a computer program that achieves what is being requested. The consequence of a large group of students failing the introductory programming course has been a high level of attrition amongst first-year students. This attrition does not provide good continuity in student numbers in later years of the degree program, and the current approach is not seen as sustainable.
Abstract:
This paper explores models for enabling increased participation in experience-based learning in legal professional practice. Legal placements as part of “for-credit” units offer students the opportunity to develop their professional skills in practice, reflect on their learning and job performance, and take responsibility for their career development and planning. In short, work integrated learning (WIL) in law supports students in making the transition from university to practice. Despite its importance, WIL has traditionally taken place in practical legal training courses (after graduation) rather than during undergraduate law courses. Undergraduate WIL in Australian law schools has generally been limited to legal clinics, which require intensive academic supervision, partnerships with community legal organisations, and government funding. This paper proposes two models of WIL for undergraduate law which may overcome many of the challenges to engaging in WIL in law (challenges consistent with those identified generally by the WIL Report). The first is a virtual law placement in which students use technology to complete a real-world project in a virtual workplace under the guidance of a workplace supervisor. The second enables students to complete placements in private legal firms, government legal offices, or community legal centres under the supervision of a legal practitioner. The units complement each other by a) creating and enabling placement opportunities for students who may not otherwise have been able to participate in work placement by reason of family responsibilities, financial constraints, visa restrictions, distance, etc.; and b) enabling students to capitalise on existing work experience. This paper reports on the pilot offering of the units in 2008, the evaluation of the models, and the changes implemented in 2009. It concludes that this multi-pronged approach can be successful in creating opportunities for, and overcoming barriers to, participation in experiential learning in legal professional practice.
Abstract:
The social construction of sexuality over the past one hundred and fifty years has created a dichotomy between heterosexual and non-heterosexual identities that essentially positions the former as “normal” and the latter as deviant. Even Kinsey’s and others’ work on the continuum of sexualities did little to alter the predominantly heterosexist perception of the non-heterosexual as “other” (Kinsey, Pomeroy and Martin 2007; Esterberg 2006; Franceour and Noonan 2007). Some political action and academic work are beginning to challenge such perceptions. Some avenues of social interaction, such as the recent proliferation of online communities, may also challenge such views, or at least contribute to their being rethought in some ways. This chapter explores a specific kind of online community devoted to fan fiction, specifically homoerotic, or what is known colloquially as “slash”, fan fiction. Fan fiction is fiction published on the internet, written by fans of well-known books and television shows, who use the characters to create new and varied plots. “Slash” refers to the pairing of two of the male characters in a romantic relationship; the term comes from the punctuation mark dividing the named pair, as, for example, Spock/Kirk from the Star Trek television series. Although there are some slash fan-fiction stories devoted to female-female relationships, called “femmeslash”, the term “slash” generally refers to male-male relationships and will be utilized throughout this chapter, given that the research discussed focuses on communities centered around one such male pairing.
Abstract:
Divining the Martyr is a project developed to fulfil the requirements of the Master of Arts (Research) degree. It is composed of 70% creative work, displayed in an exhibition, and 30% written work, contained in this exegesis. The project was developed through practice-led research to answer the question “In what ways can creative practice synthesize and illuminate issues of martyrdom in contemporary makeover culture?” The question is answered within a postmodern framework concerned with martyrdom as it is manifested in contemporary society. The themes analyzed throughout this exegesis relate to concepts of sainthood and makeover culture, combined with actual examples of tragic cases of cosmetic procedures. The outcomes of this project fused three elements: Mexican cultural history, Mexican (Catholic) religious traditions, and cosmetic makeover surgery. The final outcomes were a series of installations integrating contemporary and traditional interdisciplinary media, such as sound, light, x-ray technology, sculpture, video and aspects of performance. These creative works complement each other in their presentation and concept, making an original contribution to the theme of contemporary martyrdom in makeover culture.
Abstract:
Age-related maculopathy (ARM) has remained a challenging topic with respect to its aetiology, pathomechanisms, early detection and treatment since the late 19th century, when it was first described as a distinct entity. ARM has variously been considered an inflammatory disease, a degenerative disease, a tumour, and the result of choroidal haemodynamic disturbances and ischaemia. The latter processes have repeatedly been suggested to play a key role in its development and progression. In vivo experiments under hypoxic conditions could serve as models for the ischaemic deficits in ARM. Recent research has also linked ARM with gene polymorphisms; it is, however, unclear what triggers a person's gene susceptibility. In this manuscript, a hypothesis linking aetiological factors, including ischaemia and genetics, with the development of early clinicopathological changes in ARM is proposed. New clinical psychophysical and electrophysiological tests that can detect ARM at an early stage are introduced. Models of early ARM based upon haemodynamic, photoreceptor and post-receptoral deficits are described, and the mechanisms by which ischaemia may be involved as a final common pathway are considered. In neovascular age-related macular degeneration (neovascular AMD), ischaemia is thought to promote release of vascular endothelial growth factor (VEGF), which induces chorioretinal neovascularisation. VEGF is critical to the maintenance of the healthy choriocapillaris. In the final section of the manuscript, the documentation of the effect of new anti-VEGF treatments on retinal function in neovascular AMD is critically reviewed.
Abstract:
A wide range of screening strategies have been employed to isolate antibodies and other proteins with specific attributes, including binding affinity, specificity, stability and improved expression. However, there remains no high-throughput system to screen for target-binding proteins in a mammalian, intracellular environment. Such a system would allow binding reagents to be isolated against intracellular clinical targets such as cell-signalling proteins associated with tumour formation (p53, ras, cyclin E), proteins associated with neurodegenerative disorders (huntingtin, beta-amyloid precursor protein), and various proteins crucial to viral replication (e.g. HIV-1 proteins such as Tat, Rev and Vif-1), which are difficult to screen by phage, ribosome or cell-surface display. This study used the β-lactamase protein complementation assay (PCA) as the display and selection component of a system for screening a protein library in the cytoplasm of HEK 293T cells. The Escherichia coli colicin E7 (ColE7) and immunity protein 7 (Imm7) proteins were used as model interaction partners for developing the system. These proteins drove effective β-lactamase complementation, resulting in a signal-to-noise ratio (9:1 to 13:1) comparable to that of other β-lactamase PCAs described in the literature. The model Imm7-ColE7 interaction was then used to validate protocols for library screening. Single positive cells that harboured the Imm7 and ColE7 binding partners were identified and isolated using flow cytometric cell sorting in combination with the fluorescent β-lactamase substrate CCF2/AM. A single-cell PCR was then used to amplify the Imm7 coding sequence directly from each sorted cell. With the screening system validated, it was used to screen a protein library based on the Imm7 scaffold against a proof-of-principle target. The wild-type Imm7 sequence, as well as mutants with wild-type residues in the ColE7-binding loop, were enriched from the library after a single round of selection, which is consistent with other eukaryotic screening systems such as yeast and mammalian cell-surface display. In summary, this thesis describes a new technology for screening protein libraries in a mammalian, intracellular environment. This system has the potential to complement existing screening technologies by allowing access to intracellular proteins and expanding the range of targets available to the pharmaceutical industry.
Abstract:
A television series is tagged with the label "cult" by the media, advertisers, and network executives when it is considered edgy or offbeat, when it appeals to nostalgia, or when it is considered emblematic of a particular subculture. By these criteria, almost any series could be described as cult. Yet certain programs exert an uncanny power over their fans, encouraging them to immerse themselves within a fictional world. In Cult Television, leading scholars examine such shows as The X-Files; The Avengers; Doctor Who; Babylon Five; Star Trek; Xena: Warrior Princess; and Buffy the Vampire Slayer to determine the defining characteristics of cult television and map the contours of this phenomenon within the larger scope of popular culture. Contributors: Karen Backstein; David A. Black, Seton Hall U; Mary Hammond, Open U; Nathan Hunt, U of Nottingham; Mark Jancovich; Petra Kuppers, Bryant College; Philippe Le Guern, U of Angers, France; Alan McKee; Toby Miller, New York U; Jeffrey Sconce, Northwestern U; Eva Vieth. Sara Gwenllian-Jones is a lecturer in television and digital media at Cardiff University and co-editor of Intensities: The Journal of Cult Media. Roberta E. Pearson is a reader in media and cultural studies at Cardiff University. She is the author of the forthcoming book Small Screen, Big Universe: Star Trek and Television.
Abstract:
The concept of star rating council facilities has progressively gained traction in Australia following the work of Dean Taylor at Maroochy Shire Council in Queensland in 2006-2007 and, more recently, the Victorian STEP asset management program. The following paper provides a brief discussion of the use and merits of star ratings within community asset management. We suggest that the current adoption of the star rating system to manage community investment in services lacks consistency, and that the major failing is a lack of clear understanding of the purpose the systems are meant to serve. The discussion goes on to make some recommendations on how the concept of a star system could be further enhanced to better serve the needs of our communities.
Abstract:
Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted to find a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further benefits in performance over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcomings of the traditional heuristic-based approach to dataset selection.
Results demonstrate that the approach provides performance improvements over both the complete candidate dataset and the best heuristically selected dataset, while being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of the impostor cohorts required in alternative speaker verification techniques.
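As an illustration of the hybrid GMM mean supervector SVM classifier mentioned above, the following is a minimal sketch, not the thesis implementation: it fits a universal background model (UBM), derives a relevance-MAP-adapted mean supervector per utterance, and trains a per-speaker linear SVM on those supervectors. Feature extraction (e.g. MFCCs) and the session variability compensation techniques discussed in the abstract are omitted, and all data here are synthetic.

```python
# Hedged sketch of the GMM mean supervector SVM idea, with synthetic data.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def map_adapted_supervector(ubm, frames, relevance=16.0):
    """Adapt the UBM component means towards an utterance and stack them."""
    post = ubm.predict_proba(frames)              # frame-level responsibilities
    n_k = post.sum(axis=0)                        # soft counts per component
    f_k = post.T @ frames                         # first-order statistics
    alpha = (n_k / (n_k + relevance))[:, None]    # MAP adaptation weights
    means = alpha * (f_k / np.maximum(n_k[:, None], 1e-8)) + (1 - alpha) * ubm.means_
    return means.ravel()

# Synthetic "feature frames": background data plus target/impostor utterances.
background = rng.normal(size=(2000, 12))
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(background)

target_utts = [rng.normal(loc=0.5, size=(200, 12)) for _ in range(5)]
impostor_utts = [rng.normal(loc=0.0, size=(200, 12)) for _ in range(20)]

X = np.array([map_adapted_supervector(ubm, u) for u in target_utts + impostor_utts])
y = np.array([1] * len(target_utts) + [0] * len(impostor_utts))

clf = SVC(kernel="linear").fit(X, y)              # per-speaker discriminative model
test = map_adapted_supervector(ubm, rng.normal(loc=0.5, size=(200, 12)))
print("decision score:", clf.decision_function([test])[0])
```

In this setting, the impostor dataset selection theme corresponds to choosing which background utterances supply the negative (label 0) examples for each speaker's SVM.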
Brain-derived neurotrophic factor (BDNF) gene: no major impact on antidepressant treatment response
Abstract:
The brain-derived neurotrophic factor (BDNF) has been suggested to play a pivotal role in the aetiology of affective disorders. In order to further clarify the impact of BDNF gene variation on major depression as well as antidepressant treatment response, association of three BDNF polymorphisms [rs7103411, Val66Met (rs6265) and rs7124442] with major depression and antidepressant treatment response was investigated in an overall sample of 268 German patients with major depression and 424 healthy controls. False discovery rate (FDR) was applied to control for multiple testing. Additionally, ten markers in BDNF were tested for association with citalopram outcome in the STAR*D sample. While BDNF was not associated with major depression as a categorical diagnosis, the BDNF rs7124442 TT genotype was significantly related to worse treatment outcome over 6 wk in major depression (p=0.01) particularly in anxious depression (p=0.003) in the German sample. However, BDNF rs7103411 and rs6265 similarly predicted worse treatment response over 6 wk in clinical subtypes of depression such as melancholic depression only (rs7103411: TT
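The abstract notes that false discovery rate (FDR) control was applied to the multiple association tests. The following is a minimal sketch of one standard FDR method, the Benjamini-Hochberg procedure (assumed here; the specific FDR variant used in the study is not stated), applied to hypothetical p-values.

```python
# Hedged sketch: Benjamini-Hochberg FDR control over a set of p-values.
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(pvalues)
    order = np.argsort(p)
    m = len(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])            # largest i with p_(i) <= i*alpha/m
        rejected[order[:k + 1]] = True
    return rejected

# Hypothetical p-values for several SNP-outcome association tests
print(benjamini_hochberg([0.003, 0.01, 0.04, 0.20, 0.65]))
```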
Abstract:
Process models are used by information professionals to convey semantics about the business operations in a real-world domain intended to be supported by an information system. The understandability of these models is vital if they are actually to be used; after all, what is not understood cannot be acted upon. Yet until now, understandability has primarily been defined as an intrinsic quality of the models themselves. Moreover, the studies that have looked at understandability from a user perspective have mainly conceptualized users through rather arbitrary sets of variables. In this paper we advance an integrative framework for understanding the role of the user in the process of understanding process models. Building on cognitive psychology, goal-setting theory and multimedia learning theory, we identify three stages of learning required to realize model understanding: Presage, Process, and Product. We define eight relevant user characteristics in the Presage stage of learning, three knowledge construction variables in the Process stage, and three potential learning outcomes in the Product stage. To illustrate the benefits of the framework, we review existing process modeling work to identify where our framework can complement and extend existing studies.
Abstract:
Poly(L-lactide-co-succinic anhydride) networks were synthesised via the carbodiimide-mediated coupling of poly(L-lactide) (PLLA) star polymers. When 4-(dimethylamino)pyridine (DMAP) alone was used as the catalyst, gelation did not occur. However, when 4-(dimethylamino)pyridinium p-toluenesulfonate (DPTS), the salt of DMAP and p-toluenesulfonic acid (PTSA), was used as the catalyst, the networks obtained had gel fractions comparable to those reported for networks synthesised by conventional methods. Greater gel fractions and conversion of the prepolymer terminal hydroxyl groups were observed when the hydroxyl-terminated star prepolymers were reacted with succinic anhydride in a one-pot procedure than when they were reacted with presynthesised succinic-terminated star prepolymers. The thermal properties of the networks, namely the glass transition temperature (Tg), melting temperature (Tm) and crystallinity (Xc), were all strongly influenced by the average molecular weight between crosslinks (Mc). The network with the smallest Mc (1400 g/mol) was amorphous and had a Tg of 59 °C, while the network with the largest Mc (7800 g/mol) was 15% crystalline and had a Tg of 56 °C.