Abstract:
In this paper we consider the place of early childhood literacy in the discursive construction of the identity(ies) of ‘proper’ parents. Our analysis crosses between representations of parenting in texts produced by commercial and government/public institutional interests and the self-representations of individual parents in interviews with the researchers. The argument is made that there are commonalities and disjunctures in represented and lived parenting identities as they relate to early literacy. In commercial texts that advertise educational and other products, parents are largely absent from representations and the parent’s position is one of consumer on behalf of the child. In government-sanctioned texts, parents are very much present and are positioned as both learners about and important facilitators of early learning when they ‘interact’ with their children around language and books. The problem for which both, in their different ways, offer a solution is the ‘not-yet-ready’ child precipitated into the evaluative environment of school without the initial competence seen as necessary to avoid falling behind right from the start. Both kinds of producers promise a smooth induction of children into mainstream literacy and learning practices if the ‘good parent’ plays her/his part. Finally, we use two parent cases to illustrate how parents’ lived practice involves multiple discursive practices and identities as they manage young children’s literacy and learning in family contexts in which they also need to negotiate relations with their partners and with paid and domestic work.
Abstract:
Understanding the relationship between diet, physical activity and health in humans requires accurate measurement of body composition and daily energy expenditure. Stable isotopes provide a means of measuring total body water (TBW) and daily energy expenditure under free-living conditions. While the use of isotope ratio mass spectrometry (IRMS) for the analysis of 2H (deuterium) and 18O (oxygen-18) is well established in the field of human energy metabolism research, numerous questions remain regarding the factors which influence analytical and measurement error using this methodology. This thesis comprised four studies with the following emphases. The aim of Study 1 was to determine the analytical and measurement error of the IRMS with regard to sample handling under certain conditions. Study 2 involved the comparison of TEE (total daily energy expenditure) values derived using two commonly employed equations. Further, saliva and urine samples, collected at different times, were used to determine if clinically significant differences would occur. Study 3 was undertaken to determine the appropriate collection times for TBW estimates and derived body composition values. Finally, Study 4 was a single case study investigating whether TEE measures are affected when the human condition changes due to altered exercise and water intake. The aim of Study 1 was to validate laboratory approaches to measuring isotopic enrichment to ensure accurate (to international standards), precise (reproducibility of three replicate samples) and linear (isotope ratio constant over the expected concentration range) results. This established the machine variability for the IRMS equipment in use at Queensland University for both TBW and TEE. Using either 0.4 mL or 0.5 mL sample volumes for both oxygen-18 and deuterium was statistically acceptable (p>0.05), with a within-analysis variance of 5.8 delta VSMOW units for deuterium and 0.41 delta VSMOW units for oxygen-18.
This variance was used as “within analytical noise” to determine sample deviations. It was also found that there was no influence of equilibration time on oxygen-18 or deuterium values when comparing the minimum (oxygen-18: 24 hr; deuterium: 3 days) and maximum (oxygen-18 and deuterium: 14 days) equilibration times. With regard to preparation using the vacuum line, any order of preparation is suitable, as the TEE values fall within 8% of each other regardless of preparation order. An 8% variation is acceptable for the TEE values due to biological and technical errors (Schoeller, 1988). However, for the automated line, deuterium must be assessed first followed by oxygen-18, as the automated line does not evacuate tubes but merely refills them with an injection of gas for a predetermined time. Any fractionation (which may occur for both isotopes) would cause a slight elevation in the values and hence a lower TEE. The purpose of the second and third studies was to investigate the use of IRMS to measure the TEE and TBW of participants and to validate the current IRMS practices in use with regard to sample collection times of urine and saliva, the use of two TEE equations from different research centers, and the body composition values derived from these TEE and TBW values. Following the collection of a fasting baseline urine and saliva sample, 10 people (8 women, 2 men) were dosed with a doubly labeled water dose comprising 1.25 g of 10% oxygen-18 and 0.1 g of 100% deuterium per kg body weight. The samples were collected hourly for 12 hrs on the first day, and then morning, midday and evening samples were collected for the next 14 days. The samples were analyzed using an isotope ratio mass spectrometer. For the TBW, time to equilibration was determined using three commonly employed data analysis approaches. Isotopic equilibration was reached in 90% of the sample by hour 6, and in 100% of the sample by hour 7.
With regard to the TBW estimations, the optimal time for urine collection was found to be between hours 4 and 10, as there was no significant difference between values in this window. In contrast, statistically significant differences in TBW estimations were found between hours 1-3 and hours 11-12 when compared with hours 4-10. Most of the individuals in this study were in equilibrium after 7 hours. The TEE equations of Prof. Dale Schoeller (Chicago, USA, IAEA) and Prof. K. Westerterp were compared with that of Prof. Andrew Coward (Dunn Nutrition Centre). When comparing values derived from samples collected in the morning and evening, there was no effect of time or equation on resulting TEE values. The fourth study was a pilot study (n=1) to test the variability in TEE as a result of manipulations in fluid consumption and level of physical activity, of the magnitude of change which may be expected in a sedentary adult. Physical activity levels were manipulated by increasing the number of steps per day to mimic the increases that may result when a sedentary individual commences an activity program. The study comprised three sub-studies completed on the same individual over a period of 8 months. There were no significant changes in TBW across all studies, even though the elimination rates changed with the supplemented water intake and additional physical activity. The extra activity may not have been sufficiently strenuous, nor the water intake high enough, to cause a significant change in the TBW and hence the CO2 production and TEE values. The TEE values measured show good agreement with estimated values calculated from an RMR of 1455 kcal/day, a DIT of 10% of TEE and activity based on measured steps. The covariance values tracked when plotting the residuals were found to be representative of “well-behaved” data and are indicative of the analytical accuracy.
The ratio and product plots were found to reflect the water turnover and CO2 production and thus could, with further investigation, be employed to identify the changes in physical activity.
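The calculation chain described in this abstract (isotope elimination rates → CO2 production → energy expenditure) can be sketched as follows. This is a minimal illustration using a widely cited Schoeller-style formulation and the Weir equation with an assumed respiratory quotient; the thesis compares several versions of these equations (Schoeller, Westerterp, Coward), so the exact coefficients and the function names here are assumptions, not the thesis's values.

```python
def elimination_rate(times_d, log_enrichments):
    """Least-squares slope of ln(enrichment excess) vs. time in days,
    returned as a positive elimination rate constant k (per day)."""
    n = len(times_d)
    mt = sum(times_d) / n
    me = sum(log_enrichments) / n
    num = sum((t - mt) * (e - me) for t, e in zip(times_d, log_enrichments))
    den = sum((t - mt) ** 2 for t in times_d)
    return -num / den  # slope is negative for a decaying tracer

def tee_kcal_per_day(tbw_kg, k_o, k_d, rq=0.85):
    """TEE from TBW and the oxygen-18 (k_o) and deuterium (k_d)
    elimination rates, using Schoeller-style constants:
      rCO2 = (N/2.078)*(1.007*kO - 1.041*kD) - 0.0246*rGf,
      rGf  = 1.05*N*(1.007*kO - 1.041*kD),
    then energy via the Weir equation at an assumed RQ."""
    n_mol = tbw_kg * 1000 / 18.02           # body water in moles
    flux = 1.007 * k_o - 1.041 * k_d        # isotope turnover difference (1/day)
    r_gf = 1.05 * n_mol * flux              # fractionated gaseous water loss
    r_co2 = (n_mol / 2.078) * flux - 0.0246 * r_gf   # mol CO2/day
    v_co2_l = r_co2 * 22.4                  # litres CO2/day at STP
    v_o2_l = v_co2_l / rq                   # litres O2/day from assumed RQ
    return 3.941 * v_o2_l + 1.106 * v_co2_l  # Weir equation, kcal/day
```

For plausible adult values (TBW ≈ 35 kg, kO ≈ 0.12/day, kD ≈ 0.10/day) this returns a TEE in the expected ~1900 kcal/day range, which is why small analytical errors in the two rate constants propagate strongly into TEE.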
Abstract:
In early 2011, the Australian Learning and Teaching Council Ltd (ALTC) commissioned a series of Good Practice Reports on completed ALTC projects and fellowships. This report will: • Provide a summative evaluation of the good practices and key outcomes for teaching and learning from completed ALTC projects and fellowships relating to blended learning • Include a literature review of the good practices and key outcomes for teaching and learning from national and international research • Identify areas in which further work or development is appropriate. The literature abounds with definitions; it can be argued that the various definitions incorporate different perspectives, but there is no single, collectively accepted definition. Blended learning courses in higher education can be placed somewhere on a continuum between fully online and fully face-to-face courses. Consideration must therefore be given to the different definitions of blended learning presented in the literature and by users and stakeholders. The application of the term in these various projects and fellowships depends on the particular focus of the team and the conditions and situations under investigation. One of the key challenges for projects wishing to develop good practice in blended learning is the lack of a universally accepted definition. The findings from these projects and fellowships reveal the potential of blended learning programs to improve both student outcomes and levels of satisfaction. It is clear that this environment can help teaching and learning engage students more effectively and allow greater participation than traditional models. Just as there are many definitions, there are many models and frameworks that can be successfully applied to the design and implementation of such courses. Each academic discipline has different learning objectives and, in consequence, there cannot be only one correct approach.
This is illustrated by the diversity of definitions and applications in the ALTC-funded projects and fellowships. A review of the literature found no universally accepted guidelines for good practice in higher education. To inform this evaluation and literature review, the Seven Principles for Good Practice in Undergraduate Education, as outlined by Chickering and Gamson (1987), were adopted: 1. encourages contact between students and faculty 2. develops reciprocity and cooperation among students 3. uses active learning techniques 4. gives prompt feedback 5. emphasises time on task 6. communicates high expectations 7. respects diverse talents and ways of learning. These blended learning projects have produced a wide range of resources that can be used in many and varied settings, including books, DVDs, online repositories, pedagogical frameworks and teaching modules. In addition, there is valuable information contained in the published research data and literature reviews that informs good practice and can assist in the development of courses that enrich and improve teaching and learning.
Abstract:
1.1 Background What is renewable energy education and training? A cursory exploration of the International Solar Energy Society website (www.ises.org) reveals numerous references to education and training, referring collectively to concepts of the transfer and exchange of information and good practices, awareness raising and skills development. The purposes of such education and training relate to changing policy, stimulating industry, improving quality control and promoting the wider use of renewable energy sources. The primary objective appears to be to accelerate a transition to a better world for everyone (ISES), as the greater use of renewable energy is seen as key to climate recovery; world poverty alleviation; advances in energy security, access and equality; improved human and environmental health; and a stabilized society. The Solar Cities project – Habitats of Tomorrow – aims at promoting the greater use of renewable energy within the context of long term planning for sustainable urban development. The focus is on cities or communities as complete systems; each one a unique laboratory allowing for the study of urban sustainability within the context of a low carbon lifestyle. The purpose of this paper is to report on an evaluation of a Solar Community in Australia, focusing specifically on the implications (i) for our understandings and practices in renewable energy education and training and (ii) for sustainability outcomes. 1.2 Methodology The physical context is a residential Ecovillage (a Solar Community) in sub-tropical Queensland, Australia (latitude 28° south). An extensive Architectural and Landscape Code (A&LC) ‘premised on the interconnectedness of all things’ and embracing ‘both local and global concerns’ governs the design and construction of housing in the estate: all houses are constructed off-ground (i.e. on stumps or stilts) and incorporate a hybrid approach to the building envelope (mixed use of thermal mass and light-weight materials).
Passive solar design, gas-boosted solar water heaters and a minimum 1 kWp photovoltaic system (grid connected) are all mandatory, whilst high energy use appliances such as air conditioners and clothes driers are not permitted. Eight families participated in an extended case study that encompassed both quantitative and qualitative approaches to better understand sustainable housing (perceived as a single complex technology) through its phases of design, construction and occupation. 1.3 Results The results revealed that the level of sustainability (i.e. the performance outcomes in terms of a low-carbon lifestyle) was influenced by numerous ‘players’ in the supply chain, such as architects, engineers and subcontractors, the housing market, the developer, product manufacturers / suppliers / installers and regulators. Three key factors contributed to the level of success: (i) systems thinking; (ii) informed decision making; and (iii) environmental ethics and business practices. 1.4 Discussion The experiences of these families bring into question our understandings and practices with regard to education and training. Whilst increasing and transferring knowledge and skills is essential, the results appear to indicate that there is a strong need to expand our education efforts to incorporate foundational skills in complex systems and decision-making processes, combined with an understanding of how our individual and collective values and beliefs impact on these systems and processes.
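For a rough sense of scale of the mandated minimum 1 kWp photovoltaic systems, the usual capacity × peak-sun-hours × performance-ratio estimate applies. The default figures below are my illustrative assumptions for sub-tropical latitudes, not measurements reported for the Ecovillage.

```python
def annual_pv_yield_kwh(kwp, peak_sun_hours=5.0, performance_ratio=0.75):
    """Rule-of-thumb annual PV energy:
    capacity (kWp) x daily peak-sun-hours x 365 days x system losses."""
    return kwp * peak_sun_hours * 365 * performance_ratio
```

At these assumed values a 1 kWp system delivers on the order of 1,350-1,400 kWh/year, which helps frame why the A&LC pairs the PV mandate with a prohibition on high-draw appliances such as air conditioners and clothes driers.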
Abstract:
This paper reports on the development of a good practice guide that will offer the higher education sector a framework for safeguarding student learning engagement. The good practice guide and framework are underpinned by a set of principles initially identified as themes in the social justice literature, which were refined following the consolidation of data collected from eight selected “good practice” Australasian universities and feedback gathered at various forums and presentations. The good practice guide will provide the sector with examples of institution-wide efforts which respond to national priorities for student retention and will also provide exemplars of institutional practices for each principle to facilitate the uptake of sector-wide good practice. Participants will be provided with the opportunity to discuss the social justice principles and the draft good practice guide, and to identify the practical applications of the guide within individual institutions.
Abstract:
This good practice report, commissioned by the ALTC, provides a summative evaluation of useful outcomes and good practices from ALTC projects and fellowships on curriculum renewal. The report contains:
- a summative evaluation of the good practices and key outcomes for teaching and learning from completed ALTC projects and fellowships
- a literature review of the good practices and key outcomes for teaching and learning from national and international research
- the proposed outcomes and resources for teaching and learning which will be produced by current incomplete ALTC projects and fellowships
- an identification of areas in which further work or development is appropriate.
Abstract:
How social class, cultural background and experience combine to influence early literacy achievement in the first year of schooling is among the most durable questions in educational research. Links have been established between social class and achievement, but literacy involves complex social and cognitive practices that are not necessarily reflected in the connections that have been made. The complexity of the relationships between social class, cultural background and experience, and their impact on early literacy achievement, has received little research attention. Recent refinements of the broad terms of social class or socioeconomic status have questioned the established links between social class and achievement. Nevertheless, it remains difficult to move beyond deficit and mismatch models of explaining and understanding the underperformance of children from lower socioeconomic and cultural minority groups when conventional measures are used. The data from an Australian pilot study reported here add to the increasing evidence that income is not necessarily related directly to home literacy resources or to how those resources are used. Further, the data show that the level of print resources in the home may not be a good indicator of the level of use of those resources.
Abstract:
This paper argues that if journalism is to remain a relevant and dynamic academic discipline, it must urgently reconsider the constrained, heavily-policed boundaries traditionally placed around it (particularly in Australia). A simple way of achieving this is to redefine its primary object of study: away from specific, rigid, professional inputs, towards an ever-growing range of media outputs. Such a shift may allow the discipline to freely re-assess its pedagogical and epistemological relationships to contemporary newsmaking practices (or, the ‘new’ news).
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though regarded only as a temporary measure in the past, these systems are now considered the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series of reports to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research undertaken was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation of research brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality.
The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantage of multi-chamber over single-chamber septic tanks is an issue that needs to be resolved in view of the conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number of these systems do not perform to stipulated standards and quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. The other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices. In recent years the use of biofilters, particularly peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies.
This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent.
It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This essentially is the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. However, despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern rests. Greywater requires similar consideration. Surface irrigation of greywater is currently being permitted in a number of local authority jurisdictions in Queensland. Considering the fact that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease. This is an issue of concern, as greywater can be considered a weak-to-medium-strength sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subjected to stringent guidelines. Under these circumstances the surface application of any wastewater requires careful consideration.
The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions. As such, their applicability is location specific. Also, the design of systems based solely on evapotranspiration is questionable. In order to ensure more reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics. Secondly, the mechanisms of clogging mat formation have been found to be influenced by various physical, chemical and biological processes. Biological clogging is the most common process and occurs when bacterial growth or its by-products reduce the soil pore diameters. Biological clogging is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process.
This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This in fact is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated either to control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration time intervals are contradictory. It has been claimed that intermittent rest periods would result in the aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short-duration rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products that form as a result of aerobic processes would in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer, in the range of about six months, which entails the provision of a second and alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface has been shown to be only of short-term benefit.
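One design response mentioned in this review, sizing the bed to the equilibrium infiltration rate that eventuates after clogging-mat formation, reduces to a simple area calculation: the infiltrative surface must be large enough that the clogged-zone rate still accepts the daily hydraulic load. The flows and long-term acceptance rates below are illustrative assumptions, not values from the report.

```python
def required_bed_area_m2(daily_flow_l, long_term_rate_l_m2_day):
    """Infiltrative area needed so that the equilibrium (post-clogging)
    acceptance rate, rather than the clean-soil percolation rate,
    still handles the household's daily hydraulic load."""
    if long_term_rate_l_m2_day <= 0:
        raise ValueError("long-term acceptance rate must be positive")
    return daily_flow_l / long_term_rate_l_m2_day

# e.g. an assumed 900 L/day household flow on a soil with an assumed
# long-term acceptance rate of 15 L/m2/day needs 60 m2 of bed area.
```

Designing to this equilibrium rate, instead of the much higher clean-soil percolation rate, is what keeps the system hydraulically viable once the clogging mat has fully formed.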
Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat. It has also been found that the nature of the suspended solids is an important factor: the finer particles from extended aeration systems, when compared to those from septic tanks, will penetrate deeper into the soil and hence will ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies. It has also been shown that effluent quality may be a factor in the case of highly permeable soils, but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, as failure can lead to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent. There are a number of documented instances of septic tank related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A because no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms.
The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area. Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed or due to surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus. Once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is not only important to ensure that the system design is based on subsurface conditions; the density of these systems in a given area is also a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would also dictate the type of effluent disposal method to be adopted.
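The point that groundwater dilution is effectively the only "treatment" for nitrate, and that this caps the allowable density of septic systems in an area, can be expressed as a steady-state mass balance. All inputs below (per-system nitrogen load, recharge rate, the 10 mg/L nitrate-N limit) are illustrative assumptions for the sake of the sketch.

```python
def max_systems_per_hectare(n_load_g_per_day, recharge_mm_per_yr,
                            limit_mg_per_l=10.0, background_mg_per_l=0.0):
    """Septic-system density at which fully mixed recharge over one
    hectare stays below the nitrate-N limit:
    allowable N load = concentration headroom x dilution water volume."""
    # 1 mm of recharge over 1 m2 is 1 L; one hectare is 10,000 m2.
    recharge_l_per_day = recharge_mm_per_yr * 10_000 / 365
    headroom_mg_per_l = limit_mg_per_l - background_mg_per_l
    allowable_load_g_day = headroom_mg_per_l * recharge_l_per_day / 1000
    return allowable_load_g_day / n_load_g_per_day
```

With an assumed 30 g N/day per system and 300 mm/yr of recharge, this yields roughly 2.7 systems per hectare, illustrating why system density, and not just individual site design, is the critical variable the review identifies.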
Resumo:
Objectives: To identify and appraise the literature concerning nurse-administered procedural sedation and analgesia in the cardiac catheter laboratory (CCL). Design and data sources: An integrative review method was chosen for this study. MEDLINE and CINAHL databases as well as The Cochrane Database of Systematic Reviews and the Joanna Briggs Institute were searched. Nineteen research articles and three clinical guidelines were identified. Results: The authors of each study reported that nurse-administered sedation in the CCL is safe due to the low incidence of complications. However, a higher percentage of deeply sedated patients than moderately sedated patients was reported to experience complications. To confound this issue, one clinical guideline permits deep sedation without an anaesthetist present, while others recommend against it. All clinical guidelines recommend that nurses be educated about sedation concepts. Other findings focus on pain and discomfort and the cost savings of nurse-administered sedation, which are associated with forgoing anaesthetic services. Conclusions: Practice is varied due to limitations in the evidence and inconsistent clinical practice guidelines. Therefore, recommendations for research and practice have been made. Research topics include determining how and in which circumstances capnography can be used in the CCL, discerning the economic impact of sedation-related complications and developing a set of objectives for nursing education about sedation. For practice, if deep sedation is administered without an anaesthetist present, it is essential that nurses are adequately trained and have access to vital equipment such as capnography to monitor ventilation, because deeply sedated patients are more likely to experience sedation-related complications.
These initiatives will go some way to ensuring patients receiving nurse-administered procedural sedation and analgesia for a procedure in the cardiac catheter laboratory are cared for using consistent, safe and evidence-based practices.
Resumo:
The cardiac catheterisation laboratory (CCL) is a specialised medical radiology facility where both chronic-stable and life-threatening cardiovascular illness is evaluated and treated. Although there are many potential sources of discomfort and distress associated with procedures performed in the CCL, a general anaesthetic is not usually required. For this reason, an anaesthetist is not routinely assigned to the CCL. Instead, to manage pain, discomfort and anxiety during the procedure, nurses administer a combination of sedative and analgesic medications according to direction from the cardiologist performing the procedure. This practice is referred to as nurse-administered procedural sedation and analgesia (PSA). While anecdotal evidence suggested that nurse-administered PSA was commonly used in the CCL, it was clear from the limited information available that current nurse-led PSA administration and monitoring practices varied and that there was contention around some aspects of practice including the type of medications that were suitable to be used and the depth of sedation that could be safely induced without an anaesthetist present. The overall aim of the program of research presented in this thesis was to establish an evidence base for nurse-led sedation practices in the CCL context. A sequential mixed methods design was used over three phases. The objective of the first phase was to appraise the existing evidence for nurse-administered PSA in the CCL. Two studies were conducted. The first study was an integrative review of empirical research studies and clinical practice guidelines focused on nurse-administered PSA in the CCL as well as in other similar procedural settings. This was the first review to systematically appraise the available evidence supporting the use of nurse-administered PSA in the CCL. A major finding was that, overall, nurse-administered PSA in the CCL was generally deemed to be safe. 
However, it was concluded from the analysis of the studies and the guidelines that were included in the review, that the management of sedation in the CCL was impacted by a variety of contextual factors including local hospital policy, workforce constraints and cardiologists’ preferences for the type of sedation used. The second study in the first phase was conducted to identify a sedation scale that could be used to monitor level of sedation during nurse-administered PSA in the CCL. It involved a structured literature review and psychometric analysis of scale properties. However, only one scale was found that was developed specifically for the CCL, which had not undergone psychometric testing. Several weaknesses were identified in its item structure. Other sedation scales that were identified were developed for the ICU. Although these scales have demonstrated validity and reliability in the ICU, weaknesses in their item structure precluded their use in the CCL. As findings indicated that no existing sedation scale should be applied to practice in the CCL, recommendations for the development and psychometric testing of a new sedation scale were developed. The objective of the second phase of the program of research was to explore current practice. Three studies were conducted in this phase using both quantitative and qualitative research methods. The first was a qualitative explorative study of nurses’ perceptions of the issues and challenges associated with nurse-administered PSA in the CCL. Major themes emerged from analysis of the qualitative data regarding the lack of access to anaesthetists, the limitations of sedative medications, the barriers to effective patient monitoring and the impact that the increasing complexity of procedures has on patients' sedation requirements. The second study in Phase Two was a cross-sectional survey of nurse-administered PSA practice in Australian and New Zealand CCLs. 
This was the first study to quantify the frequency that nurse-administered PSA was used in the CCL setting and to characterise associated nursing practices. It was found that nearly all CCLs utilise nurse-administered PSA (94%). Of note, by characterising nurse-administered PSA in Australian and New Zealand CCLs, several strategies to improve practice, such as setting up protocols for patient monitoring and establishing comprehensive PSA education for CCL nurses, were identified. The third study in Phase Two was a matched case-control study of risk factors for impaired respiratory function during nurse-administered PSA in the CCL setting. Patients with acute illness were found to be nearly twice as likely to experience impaired respiratory function during nurse-administered PSA (OR=1.78; 95%CI=1.19-2.67; p=0.005). These significant findings can now be used to inform prospective studies investigating the effectiveness of interventions for impaired respiratory function during nurse-administered PSA in the CCL. The objective of the third and final phase of the program of research was to develop recommendations for practice. To achieve this objective, a synthesis of findings from the previous phases of the program of research informed a modified Delphi study, which was conducted to develop a set of clinical practice guidelines for nurse-administered PSA in the CCL. The clinical practice guidelines that were developed set current best practice standards for pre-procedural patient assessment and risk screening practices as well as the intra and post-procedural patient monitoring practices that nurses who administer PSA in the CCL should undertake in order to deliver safe, evidence-based and consistent care to the many patients who undergo procedures in this setting. 
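For readers unfamiliar with how an odds ratio such as the one reported for the case-control study above is derived, the following sketch shows the standard 2×2-table calculation with a Wald confidence interval. The cell counts are hypothetical, chosen only for illustration; they are not the study's data:

```python
# Odds ratio with a Wald 95% CI from a 2x2 case-control table.
# The counts used in the example call are hypothetical.
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """2x2 table layout:
       a = exposed cases,   b = exposed controls,
       c = unexposed cases, d = unexposed controls.
    Returns (odds ratio, CI lower bound, CI upper bound)."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the delta method.
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

or_, lo, hi = odds_ratio_with_ci(60, 45, 40, 55)
print(f"OR={or_:.2f}; 95%CI={lo:.2f}-{hi:.2f}")
```

A confidence interval whose lower bound exceeds 1.0, as in the study's reported OR=1.78 (95%CI 1.19-2.67), indicates that the association between the risk factor and impaired respiratory function is statistically significant at the 5% level.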
In summary, the mixed methods approach that was used enabled the research objectives to be comprehensively addressed in an informed, sequential manner, and, as a consequence, this thesis has generated a substantial amount of new knowledge to inform and support nurse-led sedation practice in the CCL context. However, a limitation of the research to note is that the comprehensive appraisal of the evidence, combined with the guideline development process, highlighted numerous deficiencies in the evidence base. As such, rather than being based on high-level evidence, many of the recommendations for practice were produced by consensus. For this reason, further research is required to ascertain which specific practices result in optimal patient and health service outcomes. Therefore, along with the necessary guideline implementation and evaluation projects, post-doctoral research is planned to follow up on the research gaps identified, which will form part of a continuing program of research in this field.
Resumo:
In Social Science (Organization Studies, Economics, Management Science, Strategy, International Relations, Political Science…) the quest for addressing the question "what is a good practitioner?" has been around for centuries, with the underlying assumption that good practitioners should lead organizations to higher levels of performance. Hence to ask "what is a good 'captain'?" is not a new question, we should add! (e.g. Tsoukas & Cummings, 1997, p. 670; Söderlund, 2004, p. 190). This interrogation leads us to consider problems such as the relations between the dichotomies of Theory and Practice, rigor and relevance of research, ways of knowing and forms of knowledge. On the one hand we face the "Enlightenment" assumptions underlying modern positivist Social science, grounded in the "unity-of-science dream of transforming and reducing all kinds of knowledge to one basic form and level" and cause-effect relationships (Eikeland, 2012, p. 20), and on the other, the postmodern interpretivist proposal, and its "tendency to make all kinds of knowing equivalent" (Eikeland, 2012, p. 20). In the project management space, this quest aims at addressing one of the fundamental problems in the field: projects still do not deliver their expected benefits and promises, and therefore the socio-economic good (Hodgson & Cicmil, 2007; Bredillet, 2010; Lalonde et al., 2012). The Cartesian tradition supporting projects research and practice for the last 60 years (Bredillet, 2010, p. 4) has led to the lack of relevance to practice of the current conceptual base of project management, despite the sum of research, development of standards, best and good practices and the related development of project management bodies of knowledge (Packendorff, 1995, p. 319-323; Cicmil & Hodgson, 2006, p. 2-6; Hodgson & Cicmil, 2007, p. 436-7; Winter et al., 2006, p. 638).
Referring to both Hodgson (2002) and Giddens (1993), we could say that those who expect a "social-scientific Newton" to revolutionize this young field "are not only waiting for a train that will not arrive, but are in the wrong station altogether" (Hodgson, 2002, p. 809; Giddens, 1993, p. 18). Meanwhile, in the postmodern stream mainly rooted in the "practice turn" (e.g. Hällgren & Lindahl, 2012), the shift from methodological individualism to social viscosity, together with the advocated pluralism, leads to reinforcing the very "functional stupidity" (Alvesson & Spicer, 2012, p. 1194) that this postmodern stream aims at overcoming. We suggest here that addressing the question "what is a good PM?" requires a philosophy of practice perspective to complement the "usual" philosophy of science perspective. The questioning of the modern Cartesian tradition mirrors a similar one made within Social science (Say, 1964; Koontz, 1961, 1980; Menger, 1985; Warry, 1992; Rothbard, 1997a; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Boisot & McKelvey, 2010), calling for new thinking. In order to get outside the rationalist 'box', Toulmin (1990, p. 11), along with Tsoukas & Cummings (1997, p. 655), suggests a possible path, summarizing the thoughts of many authors: "It can cling to the discredited research program of the purely theoretical (i.e. "modern") philosophy, which will end up by driving it out of business: it can look for new and less exclusively theoretical ways of working, and develop the methods needed for a more practical ("post-modern") agenda; or it can return to its pre-17th century traditions, and try to recover the lost ("pre-modern") topics that were side-tracked by Descartes, but can be usefully taken up for the future" (Toulmin, 1990, p. 11). Thus, paradoxically and interestingly, in their quest for the so-called post-modernism, many authors build on "pre-modern" philosophies such as the Aristotelian one (e.g.
MacIntyre, 1985, 2007; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Blomquist et al., 2010; Lalonde et al., 2012). This is perhaps because the post-modern stream emphasizes a dialogic process restricted to reliance on voice and textual representation: it limits the meaning of communicative praxis and weakens practice by turning attention away from more fundamental issues associated with problem-definition and knowledge-for-use in action (Tedlock, 1983, p. 332-4; Schrag, 1986, p. 30, 46-7; Warry, 1992, p. 157). Eikeland suggests that the Aristotelian "gnoseology allows for reconsidering and reintegrating ways of knowing: traditional, practical, tacit, emotional, experiential, intuitive, etc., marginalised and considered insufficient by modernist [and post-modernist] thinking" (Eikeland, 2012, p. 20-21). By contrast with modernist one-dimensional thinking and relativist, pluralistic post-modernism, we suggest, in a turn to an Aristotelian pre-modern lens, re-conceptualising ("re" involving here a "re"-turn to pre-modern thinking) the "do", and shifting the perspective from what a good PM is (philosophy of science lens) to what a good PM does (philosophy of practice lens) (Aristotle, 1926a). As Tsoukas & Cummings put it: "In the Aristotelian tradition to call something good is to make a factual statement. To ask, for example, 'what is a good captain?' is not to come up with a list of attributes that good captains share (as modern contingency theorists would have it), but to point out the things that those who are recognized as good captains do." (Tsoukas & Cummings, 1997, p. 670) Thus, this conversation offers a dialogue and deliberation about a central question: what does a good project manager do? The conversation is organized around a critique of the underlying assumptions supporting the modern, post-modern and pre-modern relations to ways of knowing, forms of knowledge and "practice".
Resumo:
Aim To develop clinical practice guidelines for nurse-administered procedural sedation and analgesia in the cardiac catheterisation laboratory. Background Numerous studies have reported that nurse-administered procedural sedation and analgesia is safe. However, the broad scope of existing guidelines for the administration and monitoring of patients who receive sedation during medical procedures without an anaesthetist present means there is a lack of specific guidance regarding optimal nursing practices for the unique circumstances in which nurse-administered procedural sedation and analgesia is used in the cardiac catheterisation laboratory. Methods A sequential mixed methods design was utilised. Initial recommendations were produced from three studies conducted by the authors: an integrative review, a qualitative study and a cross-sectional survey. The recommendations were revised in accordance with responses from a modified Delphi study. The first Delphi round was completed by nine senior cardiac catheterisation laboratory nurses. All but one of the draft recommendations met the pre-determined cut-off point for inclusion. There were a total of 59 responses to the second round. Consensus was reached on all recommendations. Implications for nursing The guidelines that were derived from the Delphi study offer twenty-four recommendations within six domains of nursing practice: Pre-procedural assessment; Pre-procedural patient and family education; Pre-procedural patient comfort; Intra-procedural patient comfort; Intra-procedural patient assessment and monitoring; and Post-procedural patient assessment and monitoring. Conclusion These guidelines provide an important foundation towards the delivery of safe, consistent and evidence-based nursing care for the many patients who receive sedation in the cardiac catheterisation laboratory setting.
Resumo:
Da Nang Airbase in Viet Nam served as a bulk storage and supply facility for Agent Orange and other herbicides during Operation Ranch Hand, 1961-1971 [1]. Studies have shown that environmental and biological samples taken around the airbase site have elevated levels of dioxin [1-3]. Residents living in the vicinity of the airbase are at risk of exposure to dioxin in soil, water and mud, and particularly through the consumption of local contaminated food. In 2009, a pre-intervention cross-sectional survey was undertaken. This survey examined the knowledge, attitudes and practices (KAP) of householders living near Da Nang Airbase, relevant to reducing dioxin exposure through contaminated food. The results showed that despite living near a severe dioxin hot spot, the residents had very limited knowledge of both exposure risk and measures to reduce exposure to dioxin [4]. In response, the Vietnam Public Health Association (VPHA) and Da Nang Public Health Association implemented a risk reduction program at four residential wards in the vicinity of the Da Nang Airbase in 2010. A post-intervention KAP survey was undertaken in 2011, and the results showed that knowledge of the existence of dioxin in food, dioxin exposure pathways, potential high-risk foods, and preventive measures was significantly enhanced. This new study monitored KAP 2.5 years after the intervention through a 2013 survey of food handlers from 400 households that were randomly selected from the four intervention wards. The results show that most of the positive outcomes remained stable or had increased; some KAP indicators decreased compared to those in the post-intervention survey, but were still significantly higher than the pre-intervention levels. In 2014, these findings will be incorporated with qualitative assessments and the results of laboratory analysis of dioxin concentrations in foods in the Da Nang and Bien Hoa dioxin hot spots to comprehensively assess the sustained effects of the intervention.
Resumo:
Visual information is central to several scientific disciplines. This paper studies how scientists working in a multidisciplinary field produce scientific evidence through building and manipulating scientific visualizations. Using ethnographic methods, we studied the visualization practices of eight scientists working in the domain of tissue engineering research. Tissue engineering is an emerging field of research that deals with replacing or regenerating human cells, tissues or organs to restore or establish normal function. We spent three months in the field, where we recorded laboratory sessions of these scientists and used semi-structured interviews to gain insight into their visualization practices. From our results, we elicit two themes characterizing their visualization practices: multiplicity and physicality. In this article, we provide several examples of the scientists' visualization practices to describe these two themes and show that the multimodality of such practices plays an important role in scientific visualization.