Abstract:
The research on project learning has recognised the significance of knowledge transfer in project-based organisations (PBOs). Effective knowledge transfer across projects avoids reinvention, enhances knowledge creation and saves time, which is crucial in a project environment. In order to facilitate knowledge transfer, many PBOs have invested substantial financial and human resources in implementing IT-based knowledge repositories. However, some empirical studies have found that employees would rather turn to colleagues for knowledge despite ready access to an IT-based knowledge repository. It is therefore apparent that social networks play a pivotal role in knowledge transfer across projects. Some scholars have attempted to explore the effect of network structure on knowledge transfer and performance; however, they focused only on egocentric networks and groups' internal social networks. It has been found that a project's external social network is also critical, in that team members cannot handle critical situations and accomplish projects on time without assistance and knowledge from external sources. To date, little is known about the influence of the structure of a project team's internal and external social networks on project performance, or about the interrelation between the two networks. To obtain such knowledge, this paper explores the interrelation between the structure of a project team's internal and external social networks and their effect on the project team's performance. Data are gathered through a survey questionnaire distributed online to respondents. The collected data are analysed using social network analysis (SNA) tools and SPSS. The theoretical contribution of this paper is knowledge of the interrelation between the structure of a project team's internal and external social networks and their influence on the project team's performance. The practical contribution lies in the proposed guidelines for constructing the structure of a project team's internal and external social networks.
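The abstract mentions analysing the collected network data with SNA tools; as a rough illustration of the kind of structural measures such an analysis typically computes for a team's internal and external networks, the sketch below uses the networkx library. The node names, ties and choice of metrics (density, degree centrality) are assumptions for illustration only, not data or methods reported by the study.

```python
# Illustrative sketch only: hypothetical knowledge-seeking ties, analysed with
# two standard SNA measures (density and degree centrality).
import networkx as nx

# Hypothetical ties among members of a project team (internal network)
internal = nx.Graph()
internal.add_edges_from([
    ("PM", "Engineer1"), ("PM", "Engineer2"),
    ("Engineer1", "Engineer2"), ("Engineer2", "Designer"),
])

# Hypothetical ties from team members to people outside the team (external network)
external = nx.Graph()
external.add_edges_from([
    ("Engineer1", "VendorA"), ("PM", "ClientRep"), ("Designer", "ExpertX"),
])

for name, g in [("internal", internal), ("external", external)]:
    density = nx.density(g)                       # proportion of possible ties present
    centrality = nx.degree_centrality(g)          # relative connectedness of each actor
    print(name,
          "density:", round(density, 3),
          "degree centrality:", {n: round(c, 2) for n, c in centrality.items()})
```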
Abstract:
A major project in the Sustainable Built Assets core area is the Sustainable Sub-divisions: Ventilation Project, the second stage of a planned series of research projects focusing on sustainable sub-divisions. The initial project, Sustainable Sub-divisions: Energy, focused on energy efficiency and examined the link between dwelling energy efficiency and sub-divisional layout. In addition, the potential for on-site electricity generation, especially in medium- and high-density developments, was also examined. That project recommended that an existing lot-rating methodology be adapted for use in South East Queensland (SEQ) through the inclusion of subdivision-appropriate ventilation data. Acquiring that data is the objective of this project. The Sustainable Sub-divisions: Ventilation Project will produce a series of reports. The first report (Report 2002-077-B-01) summarised the results from an industry workshop and interviews that were conducted to ascertain the current attitudes and methodologies used in contemporary sub-division design in South East Queensland. The second report (Report 2002-077-B-02) described how the project is being delivered as outlined in the Project Agreement, including the selection of the case study dwellings, the monitoring equipment and the data management process. This third report (Report 2002-077-B-03) provides an analysis and review of the approaches recommended by leading experts, government bodies and professional organisations throughout Australia that aim to increase the potential for passive cooling and heating at the subdivision stage. These data will inform the development of the enhanced lot-rating methodology discussed in other reports of this series. The final report, due in June 2007, will detail the analysis of data for winter 2006 and summer 2007, leading to the development and delivery of the enhanced lot-rating methodology.
Abstract:
China's accession to the World Trade Organisation (WTO) has greatly enhanced global interest in investment in the Chinese media market, where demand for digital content is growing rapidly. The East Asian region is positioned as a growth area in many forms of digital content and digital service industries. China is attempting to catch up and take its place as a production centre to offset challenges from neighbouring countries. Meanwhile, Taiwan is seeking to use China both as an export market and as a production site for its digital content. This research investigates the entry strategies of Taiwanese digital content firms into the Chinese market. By examining the strategies of a sample of Taiwan-based companies, this study also explores the evolution of their market strategies. In particular, the focus is on how distinctive business practices such as guanxi are important to Taiwanese business and to relations with Mainland China. The research examines how entrepreneurs manage the characteristics of digital content products and, in turn, how digital content entrepreneurs adapt to changing market circumstances. The project selected five Taiwan-based digital content companies that have business operations in China: Wang Film, Artkey, CnYES, Somode and iPartment. The study involved a field trip to Shanghai and Taiwan, undertaken between November 2006 and March 2007, to conduct interviews and to gather documentation and archival reports. Six senior managers and nine experts were interviewed. Data were analysed according to Miller's firm-level entrepreneurship theory, foreign direct investment theory, the Life Cycle Model and guanxi philosophy. Most studies of small and medium-sized enterprises (SMEs) have focused on free-market (capitalist) environments. In contrast, this thesis examines how Taiwanese digital content firms' strategies apply in the Chinese market. I identified three main types of business strategy: cost reduction, innovation and quality enhancement; and four categories of functional strategies: product, marketing, resource acquisition and organisational restructuring. In this study, I introduce the concept of 'entrepreneurial guanxi': special relationships that imply mutual obligation, assurance and understanding to secure and exchange favours in entrepreneurial activities. While guanxi is a feature of many studies of business in Pan-Chinese society, it plays an important mediating role in the digital content industries. In this thesis, I integrate the Life Cycle Model with the dynamic concept of strategy. I outline the significant differences in the evolution of strategy between two types of digital content companies: off-line firms (Wang Film and Artkey) and web-based firms (CnYES, Somode and iPartment). Off-line digital content firms tended to adopt resource acquisition strategies in their initial stages and marketing strategies in second and subsequent stages. In contrast, web-based digital content companies mainly adopted product and marketing strategies in the early stages and took innovative approaches to product and marketing strategies throughout their business development. Some web-based digital content companies also adopted organisational restructuring strategies in the final stage. Finally, I propose a 'Taxonomy Matrix of Entrepreneurial Strategies' built on two dimensions: innovation, and the firm's resource acquisition for entrepreneurial strategy. The matrix is divided into four cells: Effective, Bounded, Conservative and Impoverished.
Abstract:
Sustainable natural resource management has been a concern of governments and legislators for the last 20 years. A key aspect of an effective management framework is easy access to information about rights and obligations in land and the natural resources in, on or below the land. Information about legal interests in land is managed through a Torrens register in each Australian State. These registers are primarily focused on the registration of a narrow group of legal interests in the land, and rights or obligations that fall outside these recognised interests are not capable of registration. Practices have developed, however, for recording property rights in natural resources either on separate registers with no link to the Torrens register, or on a separate register managed by the Registrar of Titles but having no legal effect on the title to the land. This paper discusses and analyses the various ways in which registers have been used in Queensland to provide access to information about rights in natural resources, and provides examples of how this approach has affected the goal of sustainable management. It also provides a critique of the Queensland model and calls for reform of the present system.
Abstract:
Over the last few years, more stringent environmental laws (e.g. the German Energieeinsparverordnung (EnEV) – Energy Performance of Buildings Directive) and soaring energy prices have increased the need for the real estate industry to react and participate in overall energy reduction through efficient house construction and design, as well as by upgrading the existing housing stock to be more energy efficient. The Property Economics Group at Queensland University of Technology in Australia and Nuertingen-Geislingen University in Germany are therefore carrying out research into sustainable housing construction and public awareness of "green" residential property. Part of this research is to gain an understanding of the level of knowledge and importance of these issues to the house buyer and to determine the importance of sustainable housing to the general public. The paper compares data from two different empirical studies: one study analyses the situation in New Zealand, the other focuses on Germany.
Abstract:
Machine downtime, whether planned or unplanned, is intuitively costly to manufacturing organisations, but is often very difficult to quantify. The available literature shows that costing processes are rarely undertaken within manufacturing organisations. Where cost analyses have been undertaken, they generally have valued only a small proportion of the affected costs, leading to an overly conservative estimate. This thesis aimed to develop a cost of downtime model, with particular emphasis on the application of the model to Australia Post's Flat Mail Optical Character Reader (FMOCR). The costing analysis determined a cost of downtime of $5,700,000 per annum, or an average cost of $138 per operational hour. The second section of this work focused on the use of the cost of downtime to objectively determine areas of opportunity for cost reduction on the FMOCR. This was the first time within Post that maintenance costs were considered alongside downtime in determining machine performance. Because of this, the results of the analysis revealed areas which have historically not been targeted for cost reduction. Further exploratory work was undertaken on the Flats Lift Module (FLM) and Auto Induction Station (AIS) Deceleration Belts through comparison of the results against two additional FMOCR analysis programs. This research has demonstrated the development of a methodical and quantifiable cost of downtime for the FMOCR. This is the first time that Post has endeavoured to examine the cost of downtime, and it is also one of the very few methodologies for valuing downtime costs that has been proposed in the literature. The work undertaken has also demonstrated how the cost of downtime can be incorporated into machine performance analysis, with specific application to identifying high-cost modules. The outcomes of this research are both the methodology for costing downtime and a list of areas for cost reduction. In doing so, this thesis has delivered the two key deliverables identified at the outset of the research.
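As a rough illustration of the kind of calculation the abstract describes (summing downtime-related cost components and expressing the total per operational hour), the sketch below uses purely hypothetical categories and figures; it is not the thesis's actual FMOCR cost model.

```python
# Illustrative sketch only: hypothetical annual downtime cost components (AUD).
# The categories and figures are assumptions for illustration and are not taken
# from the thesis's FMOCR analysis.
annual_downtime_costs = {
    "lost_throughput": 1_200_000,          # value of mail not processed during outages
    "idle_operator_labour": 450_000,       # operators waiting while the machine is down
    "maintenance_labour_and_parts": 300_000,
    "downstream_manual_handling": 50_000,  # extra sorting caused by outages
}
operational_hours_per_year = 10_000        # hypothetical total machine operating hours

total_cost = sum(annual_downtime_costs.values())
cost_per_operational_hour = total_cost / operational_hours_per_year

print(f"Annual cost of downtime: ${total_cost:,.0f}")
print(f"Average cost per operational hour: ${cost_per_operational_hour:,.2f}")
```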
Abstract:
The overall purpose of this study was to develop a model to inform the design of professional development programs and the implementation of cooperative learning within Thai primary school mathematics classrooms. An action research design, with interviews, surveys and observations, was used for this study. Survey questionnaires and classroom observations investigated the factors that influence the implementation of cooperative learning strategies and academic achievement in Thai primary school mathematics classrooms. The teacher interviews and classroom observations also examined the factors that need to be addressed in teacher professional development programs in order to facilitate cooperative learning in Thai mathematics classrooms. The outcome of this study was a model consisting of two sets of criteria to inform the successful implementation of cooperative learning in Thai primary schools. The first set of criteria was for proposers and developers of professional development programs. This set consists of macro- and micro-level criteria. The macro-level criteria focus on the overall structure of professional development programs and how and when the professional development programs should be implemented. The micro-level criteria focus on the specific topics that need to be included in professional development programs. The second set of criteria was for Thai principals and teachers to facilitate the introduction of cooperative learning in their classrooms. The research outcome also indicated that the attainment of these cooperative learning strategies and skills had a positive impact on the students' learning of mathematics.
Training young people as researchers to investigate engagement and disengagement in the middle years
Abstract:
This paper reports on the first stage of a study that used Young People as Researchers to investigate the phenomenon of middle-year student engagement and disengagement. The first stage of the study focused on a two-day workshop that provided training for students and teachers from four secondary schools in conducting research in their schools. An overview of the three stages is presented, and the workshop procedures and example activities for Stage 1 of the Young People as Researchers model are described. Further to this, the paper reports on data collected in the workshop to address the research question: How do middle-year students describe engagement and disengagement?
Abstract:
In recent years, practitioners and researchers alike have turned their attention to knowledge management (KM) in order to increase organisational performance (OP). As a result, many different approaches and strategies have been investigated and suggested for how knowledge should be managed to make organisations more effective and efficient. However, most research has been undertaken in the for-profit sector, with only a few studies focusing on the benefits nonprofit organisations might gain by managing knowledge. This study broadly investigates the impact of knowledge management on the organisational performance of nonprofit organisations. Organisational performance can be evaluated through either financial or non-financial measurements. In order to evaluate knowledge management and organisational performance, non-financial measurements are argued to be more suitable, given that knowledge is an intangible asset which often cannot be expressed through financial indicators. Non-financial measurement concepts of performance, such as the balanced scorecard or the concept of Intellectual Capital (IC), are well accepted and used within the for-profit and nonprofit sectors to evaluate organisational performance. This study utilised the concept of IC as the method to evaluate KM and OP in the context of nonprofit organisations due to the close link between KM and IC: KM is concerned with managing the KM processes of creating, storing, sharing and applying knowledge, and the organisational KM infrastructure, such as organisational culture or organisational structure, that supports these processes. IC, on the other hand, measures the knowledge stocks at different ontological levels: at the individual level (human capital), at the group level (relational capital) and at the organisational level (structural capital). In other words, IC measures the value of the knowledge which has been managed through KM. As KM encompasses the different KM processes and the KM infrastructure facilitating these processes, previous research has investigated the relationship between KM infrastructure and KM processes. Organisational culture, organisational structure and the level of IT support have been identified as the main factors of the KM infrastructure influencing the KM processes of creating, storing, sharing and applying knowledge. Other research has focused on the link between KM and OP or organisational effectiveness. Based on the existing literature, a theoretical model was developed to enable the investigation of the relation between KM (encompassing KM infrastructure and KM processes) and IC. The model assumes an association between KM infrastructure and KM processes, as well as an association between KM processes and the various levels of IC (human capital, structural capital and relational capital). As a result, five research questions (RQs) concerning the various factors of the KM infrastructure and the relationship between KM infrastructure and IC were raised and included in the research model: RQ 1: Do nonprofit organisations which have a Hierarchy culture have stronger IT support than nonprofit organisations which have an Adhocracy culture? RQ 2: Do nonprofit organisations which have a centralised organisational structure have stronger IT support than nonprofit organisations which have a decentralised organisational structure?
RQ 3: Do nonprofit organisations which have stronger IT support have a higher value of Human Capital than nonprofit organisations which have weaker IT support? RQ 4: Do nonprofit organisations which have stronger IT support have a higher value of Structural Capital than nonprofit organisations which have weaker IT support? RQ 5: Do nonprofit organisations which have stronger IT support have a higher value of Relational Capital than nonprofit organisations which have weaker IT support? In order to investigate the research questions, measurements for IC were developed which were linked to the main KM processes. The final KM/IC model contained four items for evaluating human capital, five items for evaluating structural capital and four items for evaluating relational capital. The research questions were investigated through empirical research using a case study approach, with a focus on two nonprofit organisations providing trade promotion services through local offices worldwide. Data for the investigation of the assumptions were collected via qualitative as well as quantitative research methods. The qualitative study included interviews with representatives of the two participating organisations as well as in-depth document research. The purpose of the qualitative study was to investigate the factors of the KM infrastructure (organisational culture, organisational structure, IT support) of the organisations and how these factors were related to each other. The quantitative study, on the other hand, was carried out through an online survey amongst staff of the various local offices. The purpose of the quantitative study was to investigate what impact the level of IT support, as the main instrument of the KM infrastructure, had on IC. Overall, several key themes emerged from the study: • Knowledge Management and Intellectual Capital are complementary to each other, which should be expressed through measurements of IC based on KM processes. • The various factors of the KM infrastructure (organisational culture, organisational structure and level of IT support) are interdependent. • IT was a primary instrument through which the different KM processes (creating, storing, sharing and applying knowledge) were performed. • A high level of IT support was evident where participants reported higher levels of IC (human capital, structural capital and relational capital). The study supported previous research in the field of KM and replicated the findings from other case studies in this area. The study also contributed to theory by placing the KM research within the nonprofit context and analysing the linkage between KM and IC. From the managerial perspective, the findings give clear indications that would allow interested parties, such as nonprofit managers or consultants, to understand more about the implications of KM on OP and to use this knowledge for implementing efficient and effective KM strategies within their organisations.
Abstract:
Neurodegenerative disorders are heterogeneous in nature and include a range of ataxias with oculomotor apraxia, which are characterised by a wide variety of neurological and ophthalmological features. This family includes recessive and dominant disorders. A subfamily of autosomal recessive cerebellar ataxias is characterised by defects in the cellular response to DNA damage. These include the well-characterised disorders Ataxia-Telangiectasia (A-T) and Ataxia-Telangiectasia-Like Disorder (A-TLD), as well as the recently identified diseases Spinocerebellar Ataxia with Axonal Neuropathy Type 1 (SCAN1) and Ataxia with Oculomotor Apraxia Type 2 (AOA2), and the subject of this thesis, Ataxia with Oculomotor Apraxia Type 1 (AOA1). AOA1 is caused by mutations in the APTX gene, which is located at chromosomal locus 9p13. This gene codes for the 342-amino-acid protein Aprataxin. Mutations in APTX cause destabilisation of Aprataxin; thus AOA1 is a result of Aprataxin deficiency. Aprataxin has three functional domains: an N-terminal Forkhead-Associated (FHA) phosphoprotein interaction domain, a central Histidine Triad (HIT) nucleotide hydrolase domain and a C-terminal C2H2 zinc finger. Aprataxin's FHA domain has homology to the FHA domain of the DNA repair protein 5'-polynucleotide kinase/3'-phosphatase (PNKP). PNKP interacts with a range of DNA repair proteins via its FHA domain and plays a critical role in processing damaged DNA termini. The presence of this domain together with a nucleotide hydrolase domain and a DNA-binding motif suggested that Aprataxin may be involved in DNA repair and that AOA1 may be caused by a DNA repair deficit. This was substantiated by the interaction of Aprataxin with proteins involved in the repair of both single- and double-strand DNA breaks (X-Ray Cross-Complementing 1 (XRCC1), XRCC4 and Poly(ADP-Ribose) Polymerase-1 (PARP-1)) and by the hypersensitivity of AOA1 patient cell lines to single- and double-strand break-inducing agents. At the commencement of this study little was known about the in vitro and in vivo properties of Aprataxin. Initially, this study focused on the generation of recombinant Aprataxin proteins to facilitate examination of the in vitro properties of Aprataxin. Using recombinant Aprataxin proteins, I found that Aprataxin binds to double-stranded DNA. Consistent with a role for Aprataxin as a DNA repair enzyme, this binding is not sequence specific. I also report that the HIT domain of Aprataxin hydrolyses adenosine derivatives and, interestingly, that this activity is competitively inhibited by DNA. This provided initial evidence that DNA binds to the HIT domain of Aprataxin. The interaction of DNA with the nucleotide hydrolase domain of Aprataxin further suggested that Aprataxin may be a DNA-processing factor. Following these studies, Aprataxin was found to hydrolyse 5'-adenylated DNA, which can be generated by unscheduled ligation at DNA breaks with non-standard termini. I found that cell extracts from AOA1 patients do not have DNA-adenylate hydrolase activity, indicating that Aprataxin is the only DNA-adenylate hydrolase in mammalian cells. I further characterised this activity by examining the contribution of the zinc finger and FHA domains to DNA-adenylate hydrolysis by the HIT domain. I found that deletion of the zinc finger ablated the activity of the HIT domain against adenylated DNA, indicating that the zinc finger may be required for the formation of a stable enzyme-substrate complex.
Deletion of the FHA domain stimulated DNA-adenylate hydrolysis, which indicated that the activity of the HIT domain may be regulated by the FHA domain. Given that the FHA domain is involved in protein-protein interactions, I propose that the activity of Aprataxin's HIT domain may be regulated by proteins which interact with its FHA domain. I examined this possibility by measuring the DNA-adenylate hydrolase activity of extracts from cells deficient for the Aprataxin-interacting DNA repair proteins XRCC1 and PARP-1. XRCC1 deficiency did not affect Aprataxin activity, but I found that Aprataxin is destabilised in the absence of PARP-1, resulting in a deficiency of DNA-adenylate hydrolase activity in PARP-1 knockout cells. This implies a critical role for PARP-1 in the stabilisation of Aprataxin. Conversely, I found that PARP-1 is destabilised in the absence of Aprataxin. PARP-1 is a central player in a number of DNA repair mechanisms, and this implies that not only do AOA1 cells lack Aprataxin, they may also have defects in PARP-1-dependent cellular functions. Based on this, I identified a defect in a PARP-1-dependent DNA repair mechanism in AOA1 cells. Additionally, I identified elevated levels of oxidised DNA in AOA1 cells, which is indicative of a defect in Base Excision Repair (BER). I attribute this to the reduced level of the BER protein Apurinic Endonuclease 1 (APE1) that I identified in Aprataxin-deficient cells. This study has identified and characterised multiple DNA repair defects in AOA1 cells, indicating that Aprataxin deficiency has far-reaching cellular consequences. Consistent with the literature, I show that Aprataxin is a nuclear protein with nucleoplasmic and nucleolar distribution. Previous studies have shown that Aprataxin interacts with the nucleolar rRNA processing factor nucleolin and that AOA1 cells appear to have a mild defect in rRNA synthesis. Given the nucleolar localisation of Aprataxin, I examined the protein-protein interactions of Aprataxin and found that it interacts with a number of rRNA transcription and processing factors. Based on this and the nucleolar localisation of Aprataxin, I proposed that Aprataxin may have an alternative role in the nucleolus. I therefore examined the transcriptional activity of Aprataxin-deficient cells using nucleotide analogue incorporation. I found that AOA1 cells do not display a defect in basal levels of RNA synthesis; however, they display defective transcriptional responses to DNA damage. In summary, this thesis demonstrates that Aprataxin is a DNA repair enzyme responsible for the repair of adenylated DNA termini and that it is required for the stabilisation of at least two other DNA repair proteins. Thus, not only do AOA1 cells have no Aprataxin protein or activity, they also have additional deficiencies in Poly(ADP-Ribose) Polymerase-1- and Apurinic Endonuclease 1-dependent DNA repair mechanisms. I additionally demonstrate DNA-damage-inducible transcriptional defects in AOA1 cells, indicating that Aprataxin deficiency confers a broad range of cellular defects and highlighting the complexity of the cellular response to DNA damage and the multiple defects which result from Aprataxin deficiency. My detailed characterisation of the cellular consequences of Aprataxin deficiency provides an important contribution to our understanding of interlinking DNA repair processes.
Abstract:
Since the 1980s, industries and researchers have sought to better understand the quality of services due to the rise in their importance (Brogowicz, Delene and Lyth 1990). More recent developments with online services, coupled with growing recognition of service quality (SQ) as a key contributor to national economies and as an increasingly important competitive differentiator, amplify the need to revisit our understanding of SQ and its measurement. Although SQ can be broadly defined as "a global overarching judgment or attitude relating to the overall excellence or superiority of a service" (Parasuraman, Berry and Zeithaml 1988), the term has many interpretations. There has been considerable progress on how to measure SQ perceptions, but little consensus has been achieved on what should be measured. There is agreement that SQ is multi-dimensional, but little agreement as to the nature or content of these dimensions (Brady and Cronin 2001). For example, within the banking sector there exist multiple SQ models, each consisting of varying dimensions. The existence of multiple conceptions and the lack of a unifying theory bring the credibility of existing conceptions into question, and raise the question of whether it is possible, at some higher level, to define SQ broadly such that it spans all service types and industries. This research aims to explore the viability of a universal conception of SQ, primarily through a careful revisitation of the services and SQ literature. The study analyses the strengths and weaknesses of the highly regarded and widely used global SQ model (SERVQUAL), which reflects a single-level approach to SQ measurement. The SERVQUAL model states that customers evaluate the SQ of each service encounter based on five dimensions, namely reliability, assurance, tangibles, empathy and responsiveness. SERVQUAL, however, failed to address what needs to be reliable, assured, tangible, empathetic and responsive. This research also addresses a more recent global SQ model from Brady and Cronin (2001), the B&C (2001) model, which has the potential to be the successor of SERVQUAL in that it encompasses other global SQ models and addresses the 'what' questions that SERVQUAL did not. The B&C (2001) model conceives SQ as being multi-dimensional and multi-level; this hierarchical approach to SQ measurement better reflects human perceptions. In line with the initial intention of SERVQUAL, which was developed to be generalisable across industries and service types, this research aims to develop a conceptual understanding of SQ, via literature and reflection, that encompasses the content and nature of factors related to SQ, and that addresses the benefits and weaknesses of various SQ measurement approaches (i.e. disconfirmation versus perceptions-only). Such an understanding of SQ seeks to transcend industries and service types, with the intention of extending our knowledge of SQ and assisting practitioners in understanding and evaluating SQ. The candidate's research has been conducted within, and seeks to contribute to, the 'IS-Impact' research track of the IT Professional Services (ITPS) Research Program at QUT. The vision of the track is "to develop the most widely employed model for benchmarking Information Systems in organizations for the joint benefit of research and practice." The 'IS-Impact' research track has developed an Information Systems (IS) success measurement model, the IS-Impact Model (Gable, Sedera and Chan 2008), which seeks to fulfil the track's vision.
Results of this study will help future researchers in the 'IS-Impact' research track address questions such as: • Is SQ an antecedent or consequence of the IS-Impact model, or both? • Has SQ already been addressed by existing measures of the IS-Impact model? • Is SQ a separate, new dimension of the IS-Impact model? • Is SQ an alternative conception of the IS? Results from the candidate's research suggest that SQ dimensions can be classified at a higher level encompassed by the B&C (2001) model's three primary dimensions (interaction, physical environment and outcome). The candidate also notes that it might be viable to re-word the 'physical environment quality' primary dimension as 'environment quality' so as to better encompass both physical and virtual scenarios (e.g. websites). The candidate does not rule out the global feasibility of the B&C (2001) model's nine sub-dimensions, but acknowledges that more work has to be done to better define them. The candidate observes that the 'expertise', 'design' and 'valence' sub-dimensions are supportive representations of the 'interaction', 'physical environment' and 'outcome' primary dimensions respectively. The latter statement suggests that customers evaluate each primary dimension (or each higher level of SQ classification), namely 'interaction', 'physical environment' and 'outcome', based on the 'expertise', 'design' and 'valence' sub-dimensions respectively. The ability to classify SQ dimensions at a higher level, coupled with support for the measures that make up this higher level, leads the candidate to propose the B&C (2001) model as a unifying theory that acts as a starting point for measuring SQ and the SQ of IS. The candidate also notes, in parallel with the continuing validation and generalisation of the IS-Impact model, that there is value in alternatively conceptualising the IS as a 'service' and ultimately triangulating measures of IS SQ with the IS-Impact model. These further efforts are beyond the scope of the candidate's study. Results from the candidate's research also suggest that both the disconfirmation and perceptions-only approaches have their merits, and that the choice of approach depends on the objective(s) of the study. Should the objective be an overall evaluation of SQ, the perceptions-only approach is more appropriate, as it is more straightforward and reduces administrative overheads in the process. However, should the objective be to identify SQ gaps (shortfalls), the (measured) disconfirmation approach is more appropriate, as it has the ability to identify areas that need improvement.
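As a rough illustration of the distinction drawn above between the disconfirmation and perceptions-only approaches, the sketch below computes per-dimension gap scores (perception minus expectation) alongside a simple perceptions-only average. The dimension names follow SERVQUAL, but the ratings and the unweighted averaging are assumptions for illustration, not the candidate's instrument.

```python
# Illustrative sketch of the two SQ measurement approaches discussed above.
# Dimension names follow SERVQUAL; the 1-7 ratings are hypothetical survey responses.
expectations = {"reliability": 6.5, "assurance": 6.0, "tangibles": 5.0,
                "empathy": 6.0, "responsiveness": 6.5}
perceptions  = {"reliability": 5.5, "assurance": 6.0, "tangibles": 5.5,
                "empathy": 4.5, "responsiveness": 5.0}

# Disconfirmation approach: gap = perception - expectation (negative gaps indicate shortfalls)
gaps = {d: perceptions[d] - expectations[d] for d in expectations}
shortfalls = {d: g for d, g in gaps.items() if g < 0}

# Perceptions-only approach: overall SQ as the (unweighted) mean of perception scores
overall_perceived_sq = sum(perceptions.values()) / len(perceptions)

print("Gap scores:", gaps)
print("Dimensions needing improvement (worst first):", sorted(shortfalls, key=shortfalls.get))
print("Overall perceived SQ (perceptions-only):", round(overall_perceived_sq, 2))
```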
Abstract:
Bone morphogenetic proteins (BMPs) have been widely investigated for their clinical use in bone repair, and it is known that a suitable carrier matrix to deliver them is essential for optimal bone regeneration within a specific defect site. Fused deposition modelling (FDM) allows for the fabrication of medical-grade poly(ε-caprolactone)/tricalcium phosphate (mPCL–TCP) scaffolds with high reproducibility and tailor-designed dimensions. Here we loaded FDM-fabricated mPCL–TCP/collagen scaffolds with 5 mg recombinant human (rh)BMP-2 and evaluated bone healing within a rat calvarial critical-sized defect. Using a comprehensive approach, this study assessed the newly regenerated bone employing micro-computed tomography (µCT), histology/histomorphometry, and mechanical assessments. By 15 weeks, mPCL–TCP/collagen/rhBMP-2 defects exhibited complete healing of the calvarium, whereas the non-rhBMP-2-loaded scaffolds showed significantly less bone ingrowth, as confirmed by µCT. Histomorphometry revealed significantly increased bone healing amongst the rhBMP-2 groups compared to non-treated scaffolds at 4 and 15 weeks, although the % BV/TV did not indicate complete mineralisation of the entire defect site. Hence, our study confirms that it is important to combine µCT and histomorphometry in order to study bone regeneration comprehensively in 3D. A significant up-regulation of the osteogenic proteins, type I collagen and osteocalcin, was evident at both time points in the rhBMP-2 groups. Although mineral apposition rates at 15 weeks were statistically equivalent amongst treatment groups, microcompression and push-out strengths indicated superior bone quality at 15 weeks for defects treated with mPCL–TCP/collagen/rhBMP-2. Consistently across all modalities, the progression of healing was empty defect < mPCL–TCP/collagen < mPCL–TCP/collagen/rhBMP-2, providing substantiating data to support the hypothesis that the release of rhBMP-2 from FDM-created mPCL–TCP/collagen scaffolds is a clinically relevant approach to repair and regenerate critically-sized craniofacial bone defects. Crown Copyright 2008. Published by Elsevier Ltd. All rights reserved.
Abstract:
Poor student engagement and high failure rates in first-year units were addressed at the Queensland University of Technology (QUT) with a course restructure involving a fresh approach to introducing programming. Students' first taste of programming in the new course focused less on the language and syntax, and more on problem solving and design, and on the role of programming in relation to other technologies students are likely to encounter in their studies. In effect, several technologies that have historically been compartmentalised and taught in isolation have been brought together as a breadth-first introduction to IT. Incorporating databases and Web development technologies into what used to be a purely programming unit gave students a very short introduction to each technology, with programming acting as the glue between them. As a result, students not only had a clearer understanding of the application of programming in the real world, but were also able to determine their preference or otherwise for each of the technologies introduced, which will help them when the time comes to choose a course major. Students engaged well in an intensely collaborative learning environment for this unit, which was designed both to support the needs of students and to meet industry expectations. Attrition from the unit was low, with computer laboratory practical attendance rates remaining high throughout the semester for the first time, and the failure rate falling to a single-figure percentage.
Abstract:
This article reports on the first stage of a study that uses the Young People as Researchers methodology to investigate the phenomenon of middle-year student disengagement. The study obtains student perspectives on the meanings of engagement and disengagement using a variety of innovative research methods. The first stage of the study focused on a two-day workshop giving students and teachers an overview of the project and providing training and experience in conducting research in their schools. The process employed by the study provides spaces and resources for critical thinking and encourages imaginative responses to the real-life problems confronting the students and their peers and affecting their educational engagement. This article describes ways in which engagement is viewed both theoretically and through the empirical work of the student researchers, and how various applications of 'disciplined imagination' connect with methods of investigating and understanding engagement.
Abstract:
This report focuses on risk-assessment practices in the private rental market, with particular consideration of their impact on low-income renters. It is based on the fieldwork undertaken in the second stage of the research process that followed completion of the Positioning Paper. The key research questions this study addressed were: What are the various factors included in 'risk-assessments' by real estate agents in allocating 'affordable' tenancies? How are these risks quantified and managed? What are the key outcomes of their decision-making? The study builds on previous research demonstrating that a relatively large proportion of low-cost private rental accommodation is occupied by moderate- to high-income households (Wulff and Yates 2001; Seelig 2001; Yates et al. 2004). This is occurring in an environment where the private rental sector is now the de facto main provider of rental housing for lower-income households across Australia (Seelig et al. 2005) and where a number of factors are implicated in patterns of 'income–rent mismatching'. These include ongoing shifts in public housing assistance; issues concerning eligibility for rent assistance; 'supply' factors, such as loss of low-cost rental stock through upgrading and/or transfer to owner-occupied housing; patterns of supply and demand driven largely by middle- to high-income owner-investors and renters; and patterns of housing need among low-income households for whom affordable housing is not appropriate. In formulating a way of approaching the analysis of 'risk-assessment' in rental housing management, this study has applied three sociological perspectives on risk: Beck's (1992) formulation of risk society as entailing processes of 'individualisation'; a socio-cultural perspective which emphasises the situated nature of perceptions of risk; and a perspective which has drawn attention to different modes of institutional governance of subjects as 'carriers of specific indicators of risk'. The private rental market was viewed as a social institution, and the research strategy was informed by 'institutional ethnography' as a method of enquiry. The study was based on interviews with property managers, real estate industry representatives, tenant advocates and community housing providers. The primary focus of inquiry was on 'the moment of allocation'. Six local areas across metropolitan and regional Queensland, New South Wales and South Australia were selected as case study localities. In terms of the main findings, it is evident that access to private rental housing is not just a matter of 'supply and demand'; it is also about the assessment of risk among applicants. Risk – perceived or actual – is thus a critical factor in deciding who gets housed, and how. Risk and its assessment matter in the context of housing provision and in the development of policy responses. The outcomes from this study also highlight a number of salient points: 1. There are two principal forms of risk associated with property management: financial risk and risk of litigation. 2. Certain tenant characteristics and/or circumstances – ability to pay and ability to care for the rented property – are the main factors focused on in assessing risk among applicants for rental housing. Signals of either '(in)ability to pay' and/or '(in)ability to care for the property' are almost always interpreted as markers of high levels of risk.
3. The processing of tenancy applications entails a complex and variable mix of formal and informal strategies of risk-assessment and allocation, where sorting (out), ranking, discriminating and handing over characterise the process. 4. In the eyes of property managers, 'suitable' tenants can be conceptualised as those who are resourceful, reputable, competent, strategic and presentable. 5. Property managers clearly articulated concern about risks entailed in a number of characteristics or situations. Being on a low income was the principal and overarching factor which agents considered. Others included: - unemployment - 'big' families; sole-parent families - domestic violence - marital breakdown - a shift from home ownership to private rental - Aboriginality and specific ethnicities - physical incapacity - aspects of 'presentation'. The financial vulnerability of applicants in these groups can be invoked, alongside expressed concerns about compromised capacities to manage income and/or 'care for' the property, as legitimate grounds for rejection or a lower ranking. 6. At the level of face-to-face interaction between the property manager and applicants, more intuitive assessments of risk based upon past experience or 'gut feelings' come into play. These judgements are interwoven with more systematic procedures of tenant selection. The findings suggest that considerable 'risk' is associated with low-income status, either directly or insofar as it is associated with other forms of perceived risk, and that such risks are likely to impede access to the professionally managed private rental market. Detailed analysis suggests that opportunities for access to housing by low-income householders also arise where, for example: - the 'local experience' of an agency and/or property manager works in favour of particular applicants - applicants can demonstrate available social support and financial guarantors - an applicant's preference or need for longer-term rental is seen to provide a level of financial security for the landlord - applicants are prepared to agree to specific, more stringent conditions for inspection of properties and review of contracts - the particular circumstances and motivations of landlords lead them to consider a wider range of applicants - in particular circumstances, property managers are prepared to give special consideration to applicants who appear worthy, albeit 'risky'. The strategic actions of demonstrating and documenting on the part of vulnerable (low-income) tenant applicants can improve their chances of being perceived as resourceful, capable and 'savvy'. Such actions are significant because they help to persuade property managers not only that the applicant may have sufficient resources (personal and material), but also that they accept that the onus is on them to show they are reputable, that they have valued 'competencies' and that they understand 'how the system works'. The parameters of the market shape the processes of risk-assessment and, ultimately, the strategic relation of power between property manager and tenant applicant. Low vacancy rates and a limited supply of lower-cost rental stock in all areas mean that there are many more tenant applicants than available properties, creating a highly competitive environment for applicants. The fundamental problem of supply is an aspect of the market that severely limits the chances of access to appropriate and affordable housing for low-income rental housing applicants.
There is recognition of the impact of this problem of supply. The study indicates three main directions for future focus in policy and program development: providing appropriate supports to tenants to access and sustain private rental housing, addressing issues of discrimination and privacy arising in the processes of selecting suitable tenants, and addressing problems of supply.