Abstract:
The Safety Effectiveness Indicators (SEI) Project has used extensive research to determine what safety effectiveness measures can be developed by industry, for industry use, to improve its safety performance. These indicators measure how effectively the 13 safety management tasks (SMTs) selected for this workbook are undertaken. Currently, positive performance indicators (PPIs) can only measure the number of activities undertaken. They do not show whether each activity is being undertaken effectively, and therefore do not provide data that industry can use to target areas for focus and improvement. The initial workbook contained six SMTs and was piloted on various construction sites during August 2008. The workbook was refined through feedback from the pilot, and 13 SMTs were used in a field trial during October, November and December 2008. The project team also carried out 12 focus groups in Brisbane, Canberra, Sydney and Melbourne during April, May and June 2008, and developed the initial format of this workbook through these groups and team workshops. Simplification of the language was a recurring theme, and we have attempted this throughout the project. The challenge has been to keep the descriptions short, to the point and relevant to all companies, without making them too specific. The majority of construction industry participants also requested an alteration to the scale used, so a ‘Yes’/‘No’/‘Not applicable’ format is used in this workbook. This workbook, based on industry feedback, is for use on site by various construction companies and contains 13 SMTs. However, you are invited to personalise the SEI tools to better suit your individual company and workplaces.
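To illustrate how responses on a ‘Yes’/‘No’/‘Not applicable’ scale can be summarised, here is a minimal Python sketch. The indicator wording and the effectiveness rule below are hypothetical placeholders, not items from the SEI workbook.

```python
from collections import Counter

# Hypothetical responses for one safety management task (SMT); the
# indicator texts below are invented for illustration only.
responses = {
    "Toolbox talks held before high-risk tasks": "Yes",
    "Corrective actions closed out on time": "No",
    "Permit-to-work records audited this month": "Not applicable",
}

counts = Counter(responses.values())
applicable = counts["Yes"] + counts["No"]  # 'Not applicable' is excluded
# One possible summary: share of applicable indicators answered 'Yes'.
effectiveness = counts["Yes"] / applicable if applicable else None
print(f"{counts['Yes']}/{applicable} applicable indicators met ({effectiveness:.0%})")
```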
Abstract:
Cohen (1977) reviewed the then-current research on occupational safety and concluded that strong company commitment to safety, and communication between all levels of a company, are the most influential factors in improving safety. Other relevant factors included careful selection of staff, and early and continuous training throughout an employee's time with the company. These remain important factors in OHS today. Injury rates in the Australian construction industry have continued to decrease since Cohen's review; however, the industry still has far more injuries and ill-health than the Australian average, with one fatality occurring on average per week. The fatality rate in the building and construction industry remains three times higher than the national average, and 15% of all industry fatalities occur in building and construction. In addition, the construction industry pays one of the highest workers' compensation premium rates: in 2001 alone, approximately 0.5% ($267 million) of industry revenue would have had to be allocated to the direct cost of 1998/99 compensation claims (Office of the Federal Safety Commissioner, 2006). Based on these statistics, there is a clear need to measure and improve safety performance within the construction industry.
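As a rough consistency check on the compensation figures quoted above (our arithmetic, not a figure from the source), $267 million at 0.5% of revenue implies an industry revenue base of roughly $53 billion:

```latex
\text{revenue} \approx \frac{\$267\,\text{million}}{0.005} \approx \$53.4\,\text{billion}
```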
Abstract:
Effective knowledge transfer can prevent the reinvention of systems and ideas, as well as the repetition of errors. Doing so saves substantial time and contributes to the better performance of projects and project-based organisations (PBOs). Despite the importance of knowledge, PBOs face serious barriers to its effective transfer, even though their characteristics, such as the unique and innovative approaches taken during every project, mean they have much to gain from knowledge transfer. As each new project starts, there is a strong potential to reinvent the process rather than utilise learning from previous projects. In fact, rework is one of the primary factors contributing to the construction industry's poor performance and productivity. The current literature has identified several barriers to knowledge transfer in organisational settings in general, but not specifically in PBOs. However, PBOs differ significantly from other types of organisations: they operate mainly on temporary projects, where time is a crucial factor and people are more mobile than in other organisational settings. The aim of this research is to identify the key barriers that prevent effective knowledge transfer in PBOs specifically. Interviews with project managers and senior managers of PBOs complement the analysis of the literature and provide professional expertise. This research is crucial to gaining a better understanding of the obstacles that hinder knowledge transfer in projects. Its main contribution is a list of key barriers, specific to PBOs, that organisational and project managers need to consider to ensure effective knowledge transfer and better project management.
Abstract:
A major project in the Sustainable Built Assets core area is the Sustainable Sub-divisions: Ventilation Project, the second stage of a planned series of research projects focusing on sustainable sub-divisions. The initial project, Sustainable Sub-divisions: Energy, focused on energy efficiency and examined the link between dwelling energy efficiency and sub-divisional layout. In addition, the potential for on-site electricity generation, especially in medium- and high-density developments, was examined. That project recommended that an existing lot-rating methodology be adapted for use in SEQ through the inclusion of sub-divisionally appropriate ventilation data. Acquiring that data is the objective of this project. The Sustainable Sub-divisions: Ventilation Project will produce a series of reports. The first report (Report 2002-077-B-01) summarised the results from an industry workshop and interviews that were conducted to ascertain the current attitudes and methodologies used in contemporary sub-division design in South East Queensland. The second report (Report 2002-077-B-02) described how the project is being delivered as outlined in the Project Agreement, including the selection of the case study dwellings, the monitoring equipment and the data management process. This third report (Report 2002-077-B-03) provides an analysis and review of the approaches recommended by leading experts, government bodies and professional organisations throughout Australia that aim to increase the potential for passive cooling and heating at the subdivision stage. This data will inform the development of the enhanced lot-rating methodology discussed in other reports in this series. The final report, due in June 2007, will detail the analysis of data for winter 2006 and summer 2007, leading to the development and delivery of the enhanced lot-rating methodology.
Abstract:
Context: The School of Information Technology at QUT has recently undertaken a major restructuring of its Bachelor of Information Technology (BIT) course. The aims of this restructuring include reducing first-year attrition and providing an attractive degree course that meets both student and industry expectations. Emphasis has been placed on the first semester, in the context of retaining students, by introducing a set of four units that complement one another and provide introductory material on technology, programming and related skills, and generic skills that will aid students throughout their undergraduate course and in their careers. This discussion relates to one of these four first-semester units, namely Building IT Systems. The aim of this unit is to create small Information Technology (IT) systems that use programming or scripting and databases, as either standalone applications or web applications. In QUT's prior history of teaching introductory computer programming, programming was taught as a stand-alone subject, and integration of computer applications with other systems such as databases and networks was not undertaken until students had been given a thorough grounding in those topics as well. Feedback has indicated that students do not believe that working with a database requires programming skills. In effect, the building blocks of computer applications have been compartmentalised and taught in isolation from each other. The teaching of introductory computer programming has been an industry requirement of IT degree courses, as many jobs require at least some knowledge of the topic. Yet computer programming is not a skill that all students have equal capabilities of learning (Bruce et al., 2004), as is clearly shown by the volume of publications dedicated to this topic in the literature over a broad period of time (Eckerdal & Berglund, 2005; Mayer, 1981; Winslow, 1996). The teaching of this introductory material has remained much the same over the past thirty years. Over the period that introductory computer programming has been taught at QUT, a number of different programming languages and programming paradigms have been used, and different approaches to teaching and learning have been attempted in an effort to find the golden thread that would allow students to learn this complex topic. Unfortunately, computer programming is not a skill that can be learnt in one semester; some basics can be learnt, but it can take many years to master (Norvig, 2001). Faculty data has typically shown a bimodal distribution of results for students undertaking introductory programming courses, with a high proportion of students receiving a high mark and a high proportion receiving a low or failing mark. This indicates that there are students who understand and excel with the introductory material, while another group struggles to understand the concepts and practices required to translate a specification or problem statement into a computer program that achieves what is being requested. The consequence of a large group of students failing the introductory programming course has been a high level of attrition amongst first-year students. This attrition level does not provide good continuity in student numbers in later years of the degree program, and the current approach is not seen as sustainable.
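The bimodal distribution of marks described above can be made concrete with a short sketch (synthetic marks, not faculty data; the cluster parameters are invented for illustration). Fitting a two-component Gaussian mixture recovers the two groups:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic final marks for a cohort: one cluster of students who grasp
# programming and one cluster who struggle -- the bimodal shape described above.
marks = np.concatenate([
    rng.normal(78, 8, 120),   # students who excel
    rng.normal(38, 10, 80),   # students who struggle or fail
]).clip(0, 100).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(marks)
for mean, weight in zip(gm.means_.ravel(), gm.weights_):
    print(f"cluster mean {mean:5.1f}, share of cohort {weight:.0%}")
```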
Abstract:
The Cooperative Research Centre (CRC) for Construction Innovation is Australia's national centre for research and innovation focused on the needs of the property, design, construction and facility management sectors. The period covered by this report is from 1 July 2001 to 30 October 2009. The report comprises two parts. Part A details the future and potential future impact of the CRC, including: The Future of the CRC; Research and Commercialisation; Economic Benefit to Australia; and Public Good Benefits to Australia. Part B details the achievements during the funding period, including: Economic Benefit and Commercialisation; Uptake of Research Results; Impact of Education Programs; and CRC Operations.
Abstract:
This paper explores models for enabling increased participation in experience-based learning in legal professional practice. Legal placements as part of “for-credit” units offer students the opportunity to develop their professional skills in practice, reflect on their learning and job performance, and take responsibility for their career development and planning. In short, work integrated learning (WIL) in law supports students in making the transition from university to practice. Despite its importance, WIL has traditionally taken place in practical legal training courses (after graduation) rather than during undergraduate law courses. Undergraduate WIL in Australian law schools has generally been limited to legal clinics, which require intensive academic supervision, partnerships with community legal organisations and government funding. This paper will propose two models of WIL for undergraduate law which may overcome many of the challenges to engaging in WIL in law (challenges consistent with those identified generally by the WIL Report). The first is a virtual law placement in which students use technology to complete a real-world project in a virtual workplace under the guidance of a workplace supervisor. The second enables students to complete placements in private legal firms, government legal offices or community legal centres under the supervision of a legal practitioner. The units complement each other by a) creating and enabling placement opportunities for students who may not otherwise have been able to participate in work placement because of family responsibilities, financial constraints, visa restrictions, distance, etc.; and b) enabling students to capitalise on existing work experience. This paper will report on the pilot offering of the units in 2008, the evaluation of the models and the changes implemented in 2009. It will conclude that this multi-pronged approach can be successful in creating opportunities for, and overcoming barriers to, participation in experiential learning in legal professional practice.
Abstract:
Divining the Martyr is a project developed to fulfil the requirements of the Master of Arts (Research) degree. It is composed of 70% creative work, displayed in an exhibition, and 30% written work, contained in this exegesis. The project was developed through practice-led research to answer the question “In what ways can creative practice synthesize and illuminate issues of martyrdom in contemporary makeover culture?” The question is answered using a postmodern framework about martyrdom as it is manifested in contemporary society. The themes analyzed throughout this exegesis relate to concepts of sainthood and makeover culture, combined with actual examples of tragic cases of cosmetic procedures. The outcomes of this project fused three elements: Mexican cultural history, Mexican (Catholic) religious traditions, and cosmetic makeover surgery. The final outcomes were a series of installations integrating contemporary and traditional interdisciplinary media, such as sound, light, x-ray technology, sculpture, video and aspects of performance. These creative works complement each other in their presentation and concept, promoting an original contribution to the theme of contemporary martyrdom in makeover culture.
Abstract:
Age-related maculopathy (ARM) has remained a challenging topic with respect to its aetiology, pathomechanisms, early detection and treatment since the late 19th century, when it was first described as an entity in its own right. ARM has previously been considered an inflammatory disease, a degenerative disease, a tumour, and the result of choroidal hemodynamic disturbances and ischaemia. The latter processes have repeatedly been suggested to play a key role in its development and progression. In vivo experiments under hypoxic conditions could serve as models for the ischaemic deficits in ARM. Recent research has also linked ARM with gene polymorphisms; it is, however, unclear what triggers a person's genetic susceptibility. In this manuscript, a hypothesis linking aetiological factors, including ischaemia and genetics, with the development of early clinicopathological changes in ARM is proposed. New clinical psychophysical and electrophysiological tests that can detect ARM at an early stage are introduced. Models of early ARM based upon hemodynamic, photoreceptor and post-receptoral deficits are described, and the mechanisms by which ischaemia may be involved as a final common pathway are considered. In neovascular age-related macular degeneration (neovascular AMD), ischaemia is thought to promote release of vascular endothelial growth factor (VEGF), which induces chorioretinal neovascularisation; VEGF is also critical to the maintenance of a healthy choriocapillaris. In the final section of the manuscript, the documentation of the effect of new anti-VEGF treatments on retinal function in neovascular AMD is critically reviewed.
Abstract:
These National Guidelines and Case Studies for Digital Modelling are the outcomes of one of a number of Building Information Modelling (BIM)-related projects undertaken by the CRC for Construction Innovation. Since the CRC opened its doors in 2001, the industry has seen a rapid increase in interest in BIM, and widening adoption. These guidelines and case studies are thus very timely, as the industry moves to model-based working and starts to share models in a new context called integrated practice. Governments, both federal and state, and in New Zealand, are starting to outline the role they might take so that, in contrast to the adoption of 2D CAD in the early 90s, a national, industry-wide benefit results from this new paradigm of working. Section 1 of the guidelines gives an overview of BIM: how it affects our current mode of working, and what we need to do to move to fully collaborative model-based facility development. The role of open standards such as IFC is described as a mechanism to support new processes and to make the extensive design and construction information available to asset operators and managers. Digital collaboration modes, types of models, levels of detail, object properties and model management complete this section. It will be relevant for owners, managers and project leaders as well as direct users of BIM. Section 2 provides recommendations and guides for key areas of model creation and development, and the move to simulation and performance measurement. These are the more practical parts of the guidelines, developed for design professionals, BIM managers, technical staff and ‘in the field’ workers. The guidelines are supported by six case studies, including a summary of lessons learnt about implementing BIM in Australian building projects. A key aspect of these publications is the identification of a number of important industry actions: the need for BIM-compatible product information and a national context for classifying product data; the need for an industry agreement and setting process for process definition; and finally, the need to ensure a national standard for sharing data between all of the participants in the facility-development process.
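As a concrete illustration of the kind of access an open standard such as IFC enables, here is a minimal Python sketch using the open-source ifcopenshell library (the library choice and the model file name are assumptions for illustration; the guidelines do not prescribe specific tooling):

```python
import ifcopenshell  # open-source IFC toolkit; assumed to be installed

# Open a (hypothetical) shared building model and read asset data that an
# operator or facility manager can extract without the authoring CAD tool.
model = ifcopenshell.open("office_tower.ifc")

for door in model.by_type("IfcDoor"):
    # Every IFC element carries a stable identity plus designer-defined data.
    print(door.GlobalId, door.Name)
```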
Abstract:
A wide range of screening strategies have been employed to isolate antibodies and other proteins with specific attributes, including binding affinity, specificity, stability and improved expression. However, there remains no high-throughput system to screen for target-binding proteins in a mammalian, intracellular environment. Such a system would allow binding reagents to be isolated against intracellular clinical targets such as cell-signalling proteins associated with tumour formation (p53, ras, cyclin E), proteins associated with neurodegenerative disorders (huntingtin, beta-amyloid precursor protein), and various proteins crucial to viral replication (e.g. HIV-1 proteins such as Tat, Rev and Vif), which are difficult to screen by phage, ribosome or cell-surface display. This study used the β-lactamase protein complementation assay (PCA) as the display and selection component of a system for screening a protein library in the cytoplasm of HEK 293T cells. The colicin E7 (ColE7) and Immunity protein 7 (Imm7) Escherichia coli proteins were used as model interaction partners for developing the system. These proteins drove effective β-lactamase complementation, resulting in a signal-to-noise ratio (9:1 to 13:1) comparable to that of other β-lactamase PCAs described in the literature. The model Imm7-ColE7 interaction was then used to validate protocols for library screening. Single positive cells harbouring the Imm7 and ColE7 binding partners were identified and isolated using flow cytometric cell sorting in combination with the fluorescent β-lactamase substrate CCF2/AM. A single-cell PCR was then used to amplify the Imm7 coding sequence directly from each sorted cell. With the screening system validated, it was used to screen a protein library based on the Imm7 scaffold against a proof-of-principle target. The wild-type Imm7 sequence, as well as mutants with wild-type residues in the ColE7-binding loop, were enriched from the library after a single round of selection, which is consistent with other eukaryotic screening systems such as yeast and mammalian cell-surface display. In summary, this thesis describes a new technology for screening protein libraries in a mammalian, intracellular environment. This system has the potential to complement existing screening technologies by allowing access to intracellular proteins and expanding the range of targets available to the pharmaceutical industry.
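The flow-sorting step can be illustrated with a brief sketch (synthetic numbers; the gate value and fluorescence distributions are invented, not taken from the thesis). Cleaved CCF2 shifts emission from green to blue, so complementation-positive cells are gated on a high blue/green ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# Synthetic per-cell fluorescence with the CCF2/AM substrate: beta-lactamase
# activity cleaves CCF2, shifting emission from green (~520 nm) to blue (~447 nm).
green = rng.lognormal(mean=6.0, sigma=0.3, size=n)
blue = rng.lognormal(mean=4.5, sigma=0.5, size=n)
blue[:300] *= 20  # spike in a small complementation-positive sub-population

ratio = blue / green
gate = 2.0  # illustrative sort gate, not a published threshold
positives = np.flatnonzero(ratio > gate)
print(f"{positives.size} of {n} cells pass the sort gate")
```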
Abstract:
Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted and found to share a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques, due to the similarities in how they achieve their objectives. The second theme saw the proposal of a novel model for session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further performance benefits over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background, by exploiting the SVM training process. The selection was performed on a per-observation basis so as to overcome the shortcomings of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, while being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternative techniques for speaker verification.
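The hybrid GMM mean supervector SVM classifier mentioned above can be sketched in a few lines of scikit-learn (an illustrative toy with synthetic feature frames, not the thesis implementation; a real ASV system would use MAP adaptation of universal background model (UBM) means trained on large speech corpora, plus session compensation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def supervector(utterance, ubm):
    """Fit a GMM to one utterance, initialised from the UBM means,
    and stack its component means into a single fixed-length vector."""
    gmm = GaussianMixture(n_components=ubm.n_components,
                          means_init=ubm.means_, random_state=0).fit(utterance)
    return gmm.means_.ravel()

# Synthetic stand-ins for acoustic feature frames (e.g. MFCCs).
background = rng.normal(0.0, 1.0, (2000, 12))
target = [rng.normal(0.5, 1.0, (300, 12)) for _ in range(5)]     # target speaker
impostor = [rng.normal(-0.5, 1.0, (300, 12)) for _ in range(5)]  # impostor examples

ubm = GaussianMixture(n_components=4, random_state=0).fit(background)
X = np.array([supervector(u, ubm) for u in target + impostor])
y = [1] * len(target) + [0] * len(impostor)

# Discriminative speaker model: the SVM separates target supervectors
# from impostor supervectors with a maximum-margin hyperplane.
clf = SVC(kernel="linear").fit(X, y)
test = supervector(rng.normal(0.5, 1.0, (300, 12)), ubm)
print("accept" if clf.predict([test])[0] == 1 else "reject")
```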
Abstract:
Process models are used by information professionals to convey semantics about the business operations in a real-world domain intended to be supported by an information system. The understandability of these models is vital to their actually being used; after all, what is not understood cannot be acted upon. Yet until now, understandability has primarily been defined as an intrinsic quality of the models themselves. Moreover, those studies that have looked at understandability from a user perspective have mainly conceptualized users through rather arbitrary sets of variables. In this paper we advance an integrative framework for understanding the role of the user in the process of understanding process models. Building on cognitive psychology, goal-setting theory and multimedia learning theory, we identify three stages of learning required to realize model understanding: Presage, Process, and Product. We define eight relevant user characteristics in the Presage stage of learning, three knowledge construction variables in the Process stage, and three potential learning outcomes in the Product stage. To illustrate the benefits of the framework, we review existing process modeling work to identify where our framework can complement and extend existing studies.
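One way to picture the three-stage structure of the framework is as a simple data layout (a sketch only; the abstract does not enumerate the individual variables, so the placeholders below are hypothetical, with counts matching those stated above):

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    variables: list[str] = field(default_factory=list)

# Placeholder variable names -- the paper defines 8, 3 and 3 variables
# respectively, but the abstract does not name them.
framework = [
    Stage("Presage", [f"user characteristic {i + 1}" for i in range(8)]),
    Stage("Process", [f"knowledge construction variable {i + 1}" for i in range(3)]),
    Stage("Product", [f"learning outcome {i + 1}" for i in range(3)]),
]

for stage in framework:
    print(f"{stage.name}: {len(stage.variables)} variables")
```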