958 results for Applied current
Abstract:
A breaker restrike is an abnormal arcing phenomenon that can lead to breaker failure. Such a failure can interrupt the transmission and distribution of the electricity supply until the breaker is replaced. Before 2008, there was little evidence in the literature of monitoring techniques based on the measurement and interpretation of restrikes produced during switching of capacitor banks and shunt reactor banks in power systems. In 2008, M.S. Ramli and B. Kasztenny developed a non-intrusive radiometric restrike measurement method and a restrike hardware detection algorithm. However, the radiometric measurement method is limited by a band-limited frequency response as well as by limitations in amplitude determination. Current restrike detection methods and algorithms require wide-bandwidth current transformers and high-voltage dividers. A restrike switch model using the Alternative Transients Program (ATP) and Wavelet Transforms, which supports diagnostics, is proposed. Restrike phenomena thereby become the basis of a new diagnostic process using measurements, ATP and Wavelet Transforms for online interrupter monitoring. This research project investigates the restrike switch model Parameter 'A', the dielectric voltage gradient, related to normal and slowed cases of the contact opening velocity and the escalation voltages, which can be used as a diagnostic tool for a vacuum circuit-breaker (CB) at service voltages between 11 kV and 63 kV. During current interruption of an inductive load at current quenching or chopping, a transient voltage develops across the contact gap. The dielectric strength of the gap should rise quickly enough to withstand this transient voltage; if it does not, the gap flashes over, resulting in a restrike. A straight line is fitted through the voltage points at flashover of the contact gap, the points at which the gap voltage exceeds the dielectric strength of the gap. This research shows that a change in the contact opening velocity of the vacuum CB produces a corresponding change in the slope of the gap escalation voltage envelope. To investigate the diagnostic process, an ATP restrike switch model was modified with a contact opening velocity computation for restrike waveform signature analyses, along with experimental investigations. This also enhanced a mathematical CB model with an empirical dielectric model for SF6 (sulphur hexafluoride) CBs at service voltages above 63 kV and a generalised dielectric curve model for 12 kV CBs. A CB restrike can be predicted if the measured and simulated waveforms show similar restrike waveform signatures. The restrike switch model is applied to: computer simulations as virtual experiments, including prediction of breaker restrikes; estimation of the remaining interrupter life of SF6 puffer CBs; checking of system stresses; assessment of point-on-wave (POW) operations; and development of a restrike detection algorithm using Wavelet Transforms. A simulated high-frequency nozzle current magnitude was applied to an equation (derived from the literature) that calculates the life extension of the interrupter of an SF6 high-voltage CB. The restrike waveform signatures for medium- and high-voltage CBs identify possible failure mechanisms such as delayed opening, degraded dielectric strength and improper contact travel. The simulated and measured restrike waveform signatures are analysed using Matlab software for automatic detection.
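The envelope slope estimate described above (a straight line fitted through the gap voltages at successive flashovers, whose slope changes with contact opening velocity) can be illustrated with a minimal sketch; the flashover times, voltages and reference slope below are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Hypothetical flashover events: time after contact separation (ms) and the
# gap voltage (kV) at which each restrike flashover occurred (invented data).
t_flashover_ms = np.array([0.4, 0.9, 1.5, 2.1, 2.8])
v_flashover_kv = np.array([6.0, 13.5, 22.0, 31.0, 41.5])

# Least-squares straight line through the flashover voltage points; the slope
# approximates the rate of rise of the escalation voltage envelope (kV/ms),
# which the thesis links to the contact opening velocity.
slope, intercept = np.polyfit(t_flashover_ms, v_flashover_kv, 1)
print(f"envelope slope = {slope:.1f} kV/ms, intercept = {intercept:.1f} kV")

# A slowed opening mechanism would be expected to show a reduced slope
# relative to a healthy reference value established for the breaker type.
REFERENCE_SLOPE_KV_PER_MS = 14.0  # assumed reference, for illustration only
if slope < 0.8 * REFERENCE_SLOPE_KV_PER_MS:
    print("possible slow-opening / degraded dielectric recovery")
```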
An experimental investigation of a 12 kV vacuum CB diagnostic was carried out for parameter determination, and a passive antenna calibration was also successfully developed with applications for field implementation. The degradation features were evaluated with a predictive interpretation technique based on the experiments, and the subsequent simulation indicates that the voltage drop associated with the slow opening velocity of the mechanism gives a measure of the degree of contact degradation. A predictive interpretation technique is a computer modelling approach for assessing switching device performance that allows one parameter to be varied at a time; this is often difficult to do experimentally because of the variable contact opening velocity. The significance of this thesis outcome is a non-intrusive method, developed using measurements, ATP and Wavelet Transforms, to predict and interpret breaker restrike risk. Measurements on high-voltage circuit-breakers can identify degradation that could interrupt the distribution and transmission of an electricity supply system. It is hoped that the techniques for monitoring restrike phenomena developed by this research will form part of a diagnostic process valuable for detecting breaker stresses relating to interrupter lifetime. Suggestions for future research, including a field implementation proposal to validate the restrike switch model for ATP system studies and the hot dielectric strength curve model for SF6 CBs, are given in Appendix A.
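The thesis implements its restrike detection algorithm with Wavelet Transforms in Matlab; as a hedged illustration of the general idea only, the sketch below flags high-frequency bursts in a voltage record using a generic wavelet detail-coefficient threshold. PyWavelets in Python stands in for the Matlab toolchain, and the waveform, wavelet choice and threshold are assumptions, not the thesis algorithm.

```python
import numpy as np
import pywt  # PyWavelets

def detect_restrike(voltage, fs, wavelet="db4", level=4, k=6.0):
    """Flag samples whose finest-scale wavelet detail coefficients are
    anomalously large; restrikes appear as short high-frequency bursts in the
    voltage. This is a generic wavelet burst detector, not the thesis method."""
    coeffs = pywt.wavedec(voltage, wavelet, level=level)
    d1 = coeffs[-1]                          # finest-scale detail coefficients
    sigma = np.median(np.abs(d1)) / 0.6745   # robust noise-scale estimate
    hits = np.where(np.abs(d1) > k * sigma)[0]
    return hits * 2 / fs                     # approximate burst times (s)

# Example: synthetic 50 Hz waveform with an injected 300 kHz burst at 20 ms.
fs = 1_000_000
t = np.arange(0, 0.04, 1 / fs)
v = 10e3 * np.sin(2 * np.pi * 50 * t)
v[20_000:20_050] += 3e3 * np.sin(2 * np.pi * 300e3 * t[:50])
print(detect_restrike(v, fs)[:5])
```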
Abstract:
Effective, statistically robust sampling and surveillance strategies form an integral component of large agricultural industries such as the grains industry. Intensive in-storage sampling is essential for pest detection, Integrated Pest Management (IPM), determining grain quality and satisfying importing nations’ biosecurity concerns, while surveillance over broad geographic regions ensures that biosecurity risks can be excluded, monitored, eradicated or contained within an area. In the grains industry, a number of qualitative and quantitative methodologies for surveillance and in-storage sampling have been considered. Primarily, research has focussed on developing statistical methodologies for in-storage sampling strategies concentrating on detection of pest insects within a grain bulk; however, the need for effective and statistically defensible surveillance strategies has also been recognised. Interestingly, although surveillance and in-storage sampling have typically been considered independently, many techniques and concepts are common to the two fields of research. This review aims to consider the development of statistically based in-storage sampling and surveillance strategies and to identify methods that may be useful for both. We discuss the utility of new quantitative and qualitative approaches, such as Bayesian statistics, fault trees and more traditional probabilistic methods, and show how these methods may be used in both surveillance and in-storage sampling systems.
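As a hedged illustration of the "more traditional probabilistic methods" mentioned above, the sketch below computes the number of sample units needed to detect an infestation at a given prevalence with a given confidence; the binomial assumption and the example numbers are illustrative only and are not drawn from the review.

```python
import math

def samples_for_detection(prevalence, confidence=0.95):
    """Number of independent sample units needed to detect at least one
    infested unit with the given confidence, assuming infested units are
    randomly distributed at the stated prevalence (binomial approximation)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

# e.g. detecting a 1% infestation with 95% confidence requires 299 units
print(samples_for_detection(0.01))
```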
Abstract:
Organizations adopt a Supply Chain Management System (SCMS) expecting benefits to the organization and its functions. However, organizations are facing mounting challenges to realizing benefits through SCMS. Studies suggest a growing dissatisfaction among client organizations due to an increasing gap between expectations and realization of SCMS benefits. Further, reflecting Enterprise System studies such as Seddon et al. (2010), SCMS benefits are also expected to flow to the organization throughout its lifecycle rather than being realized all at once. This research therefore develops a lifecycle-wide understanding of SCMS benefits and their realization in order to derive a benefit expectation management framework that helps attain the full potential of an SCMS. The primary research question of this study is: How can client organizations better manage their benefit expectations of SCM systems? The specific research goals of the current study are: (1) to better understand the misalignment of received and expected benefits of SCM systems; (2) to identify the key factors influencing SCM system expectations and to develop a framework to manage SCMS benefits; (3) to explore how organizational satisfaction is influenced by the lack of SCMS benefit confirmation; and (4) to explore how to improve the realization of SCM system benefits. Expectation-Confirmation Theory (ECT) provides the theoretical underpinning for this study. ECT has been widely used in the consumer behavior literature to study customer satisfaction, post-purchase behavior and service marketing in general. Recently, ECT has been extended into Information Systems (IS) research, focusing on individual user satisfaction and IS continuance. However, only a handful of studies have employed ECT to study organizational satisfaction with large-scale IS. The current study enriches this research stream by extending ECT to organizational-level analysis and verifying the preliminary findings of relevant works by Staples et al. (2002), Nevo and Chan (2007) and Nevo and Wade (2007). Moreover, this study goes further by operationalizing the constructs of ECT in the context of SCMS. The empirical findings of the study commence with a content analysis, through which 41 vendor reports and academic reports are analyzed, yielding sixty expected benefits of SCMS. These expected benefits are then compared with the benefits realized at a case organization in the Fast Moving Consumer Goods industry sector that had implemented a SAP Supply Chain Management System seven years earlier. The study develops an SCMS Benefit Expectation Management (SCMS-BEM) Framework. The comparison of benefit expectations and confirmations highlights that, while certain benefits are realized earlier in the lifecycle, other benefits could take almost a decade to realize. Further analysis and discussion of how the developed SCMS-BEM Framework influences ECT when applied to SCMS were also conducted. It is recommended that, when establishing their expectations of the SCMS, clients should remember that confirmation of these expectations will have a long lifecycle, as shown by the different time periods in the SCMS-BEM Framework. Moreover, the SCMS-BEM Framework allows organizations to maintain high levels of satisfaction through careful mitigation and confirmation of expectations according to the lifecycle phase. In addition, the study reveals that different stakeholder groups have different expectations of the same SCMS.
The perspective of multiple stakeholders has significant implications for the application of ECT in the SCMS context. When forming expectations of the SCMS, the collection of organizational benefits of SCMS should represent the perceptions of all stakeholder groups. The same mechanism should be employed in the measurement of received SCMS benefits. Moreover, for SCMS there exists an interdependence of satisfaction among the various stakeholders. The satisfaction of decision-makers or authorized staff is not only driven by their own expectation confirmation level; it is also influenced by the confirmation level of other stakeholders’ expectations in the organization. Satisfaction of any one particular stakeholder group cannot reflect the true satisfaction of the client organization. Furthermore, it is inferred from the SCMS-BEM Framework that organizations should place emphasis on the viewpoints of operational and management staff when evaluating the benefits of SCMS in the short and medium term. At the same time, organizations should pay more attention to the perspectives of strategic staff when evaluating the performance of the SCMS in the long term.
Abstract:
Current unbalance is a significant power quality problem in distribution networks, and it is exacerbated by the increasing penetration of single-phase photovoltaic cells. In this paper, a new approach is developed for current unbalance reduction in medium voltage distribution networks. The method is based on three single-phase voltage source converters connected in a delta configuration between the phases. Each converter is controlled to function as a varying capacitor. The combination of the load and the compensator results in a balanced load with unity power factor. The efficacy of the proposed current unbalance reduction concept is verified through dynamic simulations in PSCAD/EMTDC.
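The paper's converters emulate varying capacitors between phases; as background only, the classical ideal delta compensator (Steinmetz balancing) indicates what susceptances such a compensator would have to present to balance a given unbalanced load at unity power factor. The sketch below applies that textbook result, not the control scheme of the paper, and the load admittance values are invented.

```python
import numpy as np

def delta_compensator_susceptances(y_ab, y_bc, y_ca):
    """Classical ideal-balancing result: given the admittances (siemens) of an
    unbalanced delta-connected load, return the purely reactive susceptances
    B_ab, B_bc, B_ca a delta-connected compensator must present so that the
    combined load draws balanced, unity-power-factor line currents."""
    g = np.real([y_ab, y_bc, y_ca])
    b = np.imag([y_ab, y_bc, y_ca])
    s3 = np.sqrt(3.0)
    b_ab = -b[0] + (g[2] - g[1]) / s3
    b_bc = -b[1] + (g[0] - g[2]) / s3
    b_ca = -b[2] + (g[1] - g[0]) / s3
    return float(b_ab), float(b_bc), float(b_ca)

# Example: a purely resistive single-phase load across phases a-b reduces to
# the classic Steinmetz capacitor/inductor pair of susceptance +/- G/sqrt(3).
print(delta_compensator_susceptances(0.10 + 0j, 0j, 0j))
# -> approximately (0.0, 0.0577, -0.0577)
```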
Abstract:
Total hip arthroplasty (THA) has a proven clinical record for providing pain relief and return of function to patients with disabling arthritis. There are many successful options for femoral implant design and fixation. Cemented, polished, tapered femoral implants have shown excellent results in national joint registries and long-term clinical series. These implants are usually 150mm long at their lateral aspect. Because of this length, these implants cannot always be offered to patients with variations in femoral anatomy. Polished, tapered implants as short as 95mm exist; however, their small proximal geometry (neck offset and body size) limits their use to smaller-stature patients. There is a group of patients for whom a shorter implant with a maintained proximal body size would be advantageous. There are also potential benefits of a shorter implant in standard patient populations, such as reduced bone removal due to reduced reaming, favourable loading of the proximal femur, and the ability to revise into good proximal bone stock if required. These factors potentially make a shorter implant an option for all patient populations. The role of implant length in determining the stability of a cemented, polished, tapered femoral implant is not well defined in the literature. Before changes in implant design can be made, a better understanding of the role of each region in determining performance is required. The aim of the thesis was to describe how implant length affects the stability of a cemented, polished, tapered femoral implant. This has been determined through an extensive body of laboratory testing. The major findings are that, for a given proximal body size, a reduction in implant length has no effect on the torsional stability of a polished, tapered design, while a small reduction in axial stability should be expected. These findings are important because the literature suggests that torsional stability is the major determinant of long-term clinical performance of a THA system. Furthermore, a polished, tapered design is known to be forgiving of cement-implant interface micromotion due to its favourable wear characteristics. Together these findings suggest that a shorter polished, tapered implant may be well tolerated. The effect of a change in implant length on the geometric characteristics of a polished, tapered design was also determined and applied to the mechanical testing. Importantly, interface area does play a role in the stability of the system; however, it is the distribution of the interface and not the magnitude of the area that defines stability. Taper angle (at least in the range of angles seen in this work) was shown not to be a determinant of axial or torsional stability. A range of implants was tested, comparing variations in length, neck offset and indication (primary versus cement-in-cement revision). At their manufactured length, the 125mm implants were similar to their longer 150mm counterparts, suggesting that they may be similarly well tolerated in the clinical environment. However, the slimmer cement-in-cement revision implant was shown to have poorer mechanical performance, suggesting that its use in higher-demand patients may be hazardous. An implant length of 125mm has been shown to be quite stable, and the results suggest that a further reduction to 100mm may be tolerated. However, further work is required.
A shorter implant with a maintained proximal body size would be useful for the group of patients who are unable to access the current standard-length implants due to variations in femoral anatomy. Extending the findings further, the similar function and potential benefits of a shorter implant make its application to all patients appealing.
Abstract:
The serviceability and safety of bridges are crucial to people’s daily lives and to the national economy. Every effort should be taken to make sure that bridges function safely and properly, as any damage or fault during the service life can lead to transport paralysis, catastrophic loss of property or even casualties. Nonetheless, aggressive environmental conditions, ever-increasing and changing traffic loads and aging can all contribute to bridge deterioration. With often constrained budgets, it is important to identify bridges and bridge elements that should be given higher priority for maintenance, rehabilitation or replacement, and to select the optimal strategy. Bridge health prediction is an essential underpinning science for bridge maintenance optimization, since the effectiveness of optimal maintenance decisions is largely dependent on the forecasting accuracy of bridge health performance. The current approaches for bridge health prediction can be categorised into two groups: condition ratings based and structural reliability based. A comprehensive literature review has revealed the following limitations of the current modelling approaches: (1) it is not evident in the literature to date that any integrated approaches exist for modelling both serviceability and safety aspects so that both performance criteria can be evaluated coherently; (2) complex system modelling approaches have not been successfully applied to bridge deterioration modelling, even though a bridge is a complex system composed of many inter-related bridge elements; (3) multiple bridge deterioration factors, such as deterioration dependencies among different bridge elements, observed information, maintenance actions and environmental effects, have not been considered jointly; (4) the existing approaches lack the Bayesian updating ability to incorporate a variety of event information; (5) the assumption of a series and/or parallel relationship for bridge-level reliability is always made in structural reliability estimation of bridge systems. To address the deficiencies listed above, this research proposes three novel models based on the Dynamic Object Oriented Bayesian Networks (DOOBNs) approach. Model I addresses bridge deterioration in serviceability using condition ratings as the health index. The bridge deterioration is represented in a hierarchical relationship, in accordance with the physical structure, so that the contribution of each bridge element to bridge deterioration can be tracked. A discrete-time Markov process is employed to model deterioration of bridge elements over time. In Model II, bridge deterioration in terms of safety is addressed. The structural reliability of bridge systems is estimated from bridge elements up to the entire bridge. By means of conditional probability tables (CPTs), not only series-parallel relationships but also more complex probabilistic relationships in bridge systems can be effectively modelled. The structural reliability of each bridge element is evaluated from its limit state functions, considering the probability distributions of resistance and applied load. Both Models I and II are designed in three steps: modelling consideration, DOOBN development and parameter estimation. Model III integrates Models I and II to address bridge health performance in both serviceability and safety aspects jointly. The modelling of bridge ratings is modified so that every basic modelling unit denotes one physical bridge element.
According to the specific materials used, the integration of condition ratings and structural reliability is implemented through critical failure modes. Three case studies, one for each proposed model, have been conducted for validation. Carefully selected data and knowledge from bridge experts, the National Bridge Inventory (NBI) and the existing literature were utilised for model validation. In addition, event information was generated using simulation to demonstrate the Bayesian updating ability of the proposed models. The prediction results for condition ratings and structural reliability were presented and interpreted for basic bridge elements and the whole bridge system. The results obtained from Model II were compared with those obtained from traditional structural reliability methods. Overall, the prediction results demonstrate the feasibility of the proposed modelling approach for bridge health prediction and underpin the assertion that the three models can be used separately or integrated and are more effective than current bridge deterioration modelling approaches. The primary contribution of this work is to enhance knowledge in the field of bridge health prediction, where more comprehensive health performance in both serviceability and safety aspects is addressed jointly. The proposed models, characterised by probabilistic representation of bridge deterioration in hierarchical ways, demonstrate the effectiveness and promise of the DOOBN approach to bridge health management. Additionally, the proposed models have significant potential for bridge maintenance optimization. Working together with advanced monitoring and inspection techniques and a comprehensive bridge inventory, the proposed models can be used by bridge practitioners to achieve increased serviceability and safety as well as maintenance cost effectiveness.
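Model I's discrete-time Markov deterioration of condition ratings can be sketched minimally as follows; the transition matrix values are assumed for illustration and are not estimated from any bridge data.

```python
import numpy as np

# Assumed one-year transition matrix over condition ratings 1 (good) .. 4 (poor);
# row i gives the probability of moving from rating i to each rating next year.
# Values are illustrative only.
P = np.array([
    [0.90, 0.08, 0.02, 0.00],
    [0.00, 0.92, 0.06, 0.02],
    [0.00, 0.00, 0.94, 0.06],
    [0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0])   # element starts in rating 1
for year in range(1, 31):
    state = state @ P                     # propagate the rating distribution
    if year in (10, 20, 30):
        print(year, np.round(state, 3))
```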
Abstract:
Adolescent idiopathic scoliosis (AIS) is a complex 3D deformity of the spine, which may require surgical correction in severe cases. Computer models of the spine provide a potentially powerful tool to virtually ‘test’ various surgical scenarios prior to surgery. Using patient-specific computer models of seven AIS patients who had undergone a single rod anterior procedure, we have recently found that the majority of the deformity correction occurs at the apical joint or the joint immediately cephalic to the apex. In the current paper, we investigate the biomechanics of the apical joint for these patients using clinically measured intra-operative compressive forces applied during implant placement. The aim of this study is to determine a relationship between the compressive joint force applied intra-operatively and the achievable deformity correction at the apical joint.
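The stated aim, determining a relationship between intra-operative compressive force and achievable correction, could be quantified with a simple least-squares fit, sketched below; the force and correction values are invented for illustration and the linear form is an assumption, not a result of the paper.

```python
import numpy as np

# Invented data for illustration: compressive force applied at the apical
# joint (N) and the deformity correction achieved there (degrees).
force_n = np.array([150, 230, 310, 380, 450, 520, 600])
correction_deg = np.array([4.1, 5.8, 7.6, 8.9, 10.2, 11.8, 13.1])

slope, intercept = np.polyfit(force_n, correction_deg, 1)
r = np.corrcoef(force_n, correction_deg)[0, 1]
print(f"correction = {intercept:.2f} + {slope:.3f} x force  (r = {r:.2f})")
```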
Abstract:
Outdoor workers are exposed to high levels of ultraviolet radiation (UVR) and may thus be at greater risk of experiencing UVR-related health effects such as skin cancer, sunburn, and cataracts. A number of intervention trials (n=14) have aimed to improve outdoor workers’ work-related sun protection cognitions and behaviours. Only one study, however, has reported the use of UV-photography as part of a multi-component intervention. This study was performed in the USA and showed long-term (12 months) improvements in work-related sun protection behaviours. Intervention effects of the other studies have varied greatly, depending on the population studied, intervention applied, and measurement of effect. Previous studies have not assessed whether:
- interventions are similarly effective for workers in stringent and less stringent policy organisations;
- policy effect is translated into workers’ leisure time protection;
- implemented interventions are effective in the long term;
- the facial UV-photograph technique is effective in Australian male outdoor workers without a large additional intervention package; and
- such interventions will also affect workers’ leisure time sun-related cognitions and behaviours.
Therefore, the present Protection of Outdoor Workers from Environmental Radiation [POWER]-study aimed to fill these gaps and had the objectives of: a) assessing outdoor workers’ sun-related cognitions and behaviours at work and during leisure time in stringent and less stringent sun protection policy environments; b) assessing the effect of an appearance-based intervention on workers’ risk perceptions, intentions and behaviours over time; c) assessing whether the intervention was equally effective within the two policy settings; and d) assessing the immediate post-intervention effect. Effectiveness was described in terms of changes in sun-related risk perceptions and intentions (as these factors have been shown to be the main precursors of behaviour change in many health promotion theories) and behaviour. The study purposefully selected and recruited two organisations with a large outdoor worker contingent in Queensland, Australia, within a 40 kilometre radius of Brisbane. The two organisations differed in the stringency of implementation and reinforcement of their organisational sun protection policy. Data were collected from 154 male, predominantly Australian-born outdoor workers with an average age of 37 years and predominantly medium to fair skin (83%). Sun-related cognitions and behaviours of workers were assessed using self-report questionnaires at baseline and six to twelve months later. The variation in follow-up time was due to a difference in the timing of recruitment of the two organisations. Participants within each organisation were assigned to an intervention or control group. The intervention group participants received a one-off personalised Skin Cancer Risk Assessment Tool [SCRAT]-letter and a facial UV-photograph with detailed verbal information. This was followed by an immediate post-intervention questionnaire within three months of the start of the study. The control group only received the baseline and follow-up questionnaires. Data were analysed using a variety of techniques including descriptive analyses, parametric and non-parametric tests, and generalised estimating equations. An observed proportional difference of 15% was deemed clinically significant, with statistical significance (p<0.05) also reported where applicable.
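Among the analysis techniques listed above, generalised estimating equations handle the repeated measurements on each worker; the sketch below shows a generic GEE for a binary outcome using statsmodels, with invented variable names and data, and is not a reproduction of the study's models.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented long-format data: one row per worker per time point, with a binary
# sun-protection outcome, study group, workplace policy type and time.
rng = np.random.default_rng(0)
n = 150
df = pd.DataFrame({
    "worker": np.repeat(np.arange(n), 2),
    "time": np.tile([0, 1], n),
    "group": np.repeat(rng.integers(0, 2, n), 2),    # 1 = intervention
    "policy": np.repeat(rng.integers(0, 2, n), 2),   # 1 = stringent policy
})
df["protected"] = rng.binomial(1, 0.6 + 0.05 * df["group"] * df["time"])

# An exchangeable working correlation accounts for the repeated measurements
# on the same worker (baseline and follow-up).
model = smf.gee("protected ~ group * time + policy", groups="worker",
                data=df, family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```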
Objective 1: Assess and compare the current sun-related risk perceptions, intentions, behaviours, and policy awareness of outdoor workers in stringent and less stringent sun protection policy settings. Workers within the two organisations (stringent n=89 and less stringent n=65) were similar in their knowledge about skin cancer, self-efficacy, attitudes, and social norms regarding sun protection at work and during leisure time. Participants were predominantly in favour of sun protection. Results highlighted that, compared to working in the less stringent policy organisation, working for an organisation with stringent sun protection policies and practices resulted in more desirable sun protection intentions (less willingness to tan, p=0.03), better actual behaviours at work (sufficient use of upper and lower body protection, headgear, and sunglasses; p<0.001 for all comparisons), and greater policy awareness (awareness of repercussions if Personal Protective Equipment (PPE) was not used, p<0.001). However, the effect of the work-related sun protection policy was found not to extend to leisure time sun protection. Objective 2: Compare changes in sun-related risk perceptions, intentions, and behaviours between the intervention and control groups. The effect of the intervention was minimal and mainly consisted of a clinically significant reduction in work-related self-perceived risk of developing skin cancer in the intervention group compared to the control group (16% and 32% of the intervention and control groups, respectively, estimated their risk as higher than that of other outdoor workers; p=0.11). No other clinically significant effects were observed at the 12-month follow-up. Objective 3: Assess whether the intervention was equally effective in the stringent and the less stringent sun protection policy organisations. The appearance-based intervention resulted in a clinically significant improvement in the stringent policy intervention group participants’ intention to protect from the sun at work (workplace*time interaction, p=0.01), in addition to a reduction in their willingness to tan, relative to the less stringent policy intervention group, both at work (will tan at baseline: 17% and 61%, p=0.06; at follow-up: 54% and 33%, p=0.07, for the stringent and less stringent policy intervention groups respectively; the workplace*time interaction was significant, p<0.001) and during leisure time (will tan at baseline: 42% and 78%, p=0.01; at follow-up: 50% and 63%, p=0.43, for the stringent and less stringent policy intervention groups respectively; the workplace*time interaction was significant, p=0.01) over the course of the study. However, no changes in actual sun protection behaviours were found. Objective 4: Examine the effect of the intervention on the level of alarm and concern regarding the health of the skin as well as sun protection behaviours in both organisations. The immediate post-intervention results showed that the stringent policy organisation participants reported being less alarmed (p=0.04) and less concerned (p<0.01) about the health of their skin, and were less likely to show the facial UV-photograph to others (family, p=0.03), compared to the less stringent policy participants. A clinically significantly larger proportion of participants from the stringent policy organisation reported that they worried more about skin cancer (65%) and skin freckling (43%) after seeing the UV-photograph, compared to those in the less stringent policy organisation (46% and 23% respectively).
In summary, the results of this study suggest that having a stringent work-related sun protection policy was significantly related to work-time sun protection practices, but the effect did not extend to leisure time sun protection. This could reflect the insufficient level of sun protection found in the general Australian population during leisure time. Alternatively, reactance caused by being restricted in personal decisions through work-time policy could have contributed to lower leisure time sun protection. Finally, other factors could also have contributed to the less than optimal leisure time sun protection behaviours reported, such as unmeasured personal or cultural barriers. All these factors combined may have led to reduced willingness to take proper preventive action during leisure time exposure. The intervention did not result in any measurable difference between the intervention and control groups in sun protection behaviours in this population, potentially due to the long lag time between the implementation of the intervention and assessment at the 12-month follow-up. In addition, high levels of sun protection behaviours were found at baseline (ceiling effect), which left little room for improvement. Further, this study did not assess sunscreen use, which was the predominant behaviour assessed in previous effective appearance-based intervention trials. Additionally, previous trials were mainly conducted in female populations, whilst the POWER-study’s population was all male. The observed immediate post-intervention result could be due to more emphasis being placed on sun protection and the risks related to sun exposure in the stringent policy organisation. Therefore, participants from the stringent policy organisation could have been more aware of the harmful effects of UVR and hence, knowing that they usually protect themselves adequately, not be as alarmed or concerned as the participants from the less stringent policy organisation. In conclusion, a facial UV-photograph and SCRAT-letter information alone may not achieve large changes in sun-related cognitions and behaviour, especially if assessed 6-12 months after the intervention was implemented and in workers who are already quite well protected. Differences found between workers in the present study appear to be more attributable to organisational policy. However, against a background of organisational policy, this intervention may be a useful addition to sun-related workplace health and safety programs. The study findings have been interpreted while respecting a number of limitations. These include the non-random allocation of participants, due to pre-organised assignment of participants to study groups in one organisation and the difficulty of separating participants from either study group. Due to the transient nature of the outdoor worker population, only 105 of the 154 workers available at baseline could be reached for follow-up (attrition rate = 32%). In addition, the discrepancy in the time to follow-up assessment between the two organisations was a limitation of the current study. Given the caveats of this research, the following recommendations were made for future research:
- Consensus should be reached on defining "outdoor worker" in terms of time spent outside at work, as well as on the way sun protection behaviours are measured and reported.
- Future studies should implement and assess the value of facial UV-photographs in a wide range of outdoor worker organisations and countries.
- More timely and frequent follow-up assessments should be implemented in intervention studies to determine the intervention effect and to identify the best timing of booster sessions to optimise results.
- Future research should continue to target outdoor workers’ leisure time cognitions and behaviours and improve these where possible.
Overall, policy appears to be an important factor in workers’ compliance with work-time use of sun protection. Given the evidence generated by this research, organisations employing outdoor workers should consider stringent implementation and reinforcement of a sun protection policy. Finally, more research is needed to improve ways to generate desirable behaviour in this population during leisure time.
Abstract:
In Australia, research suggests that up to one quarter of child pedestrian hospitalisations result from driveway run-over incidents (Pinkney et al., 2006). In Queensland, these numbers equate to an average of four child fatalities and 81 children presenting at hospital emergency departments every year (The Commission for Children, Young People and Child Guardian). National comparison shows that these numbers represent a slightly higher per capita rate (23.5% of all deaths). To address this issue, the current research was undertaken with the aim of developing an educative intervention based on data collected from parents and caregivers of young children. Thus, the current project did not seek to use available intervention or educational material, but to develop a new evidence-based intervention specifically targeting driveway run-overs involving young children. To this end, general behavioural and environmental changes that caregivers had undertaken in order to reduce the risk of injury to any child in their care were investigated. Broadly, the first part of this report sought to:
• develop a conceptual model of established domestic safety behaviours, and investigate whether this model could be successfully applied to the driveway setting;
• explore and compare sources of knowledge regarding domestic and driveway child safety; and
• examine the theoretical implications of current domestic and driveway related behaviour and knowledge among caregivers.
The aim of the second part of this research was to develop and test the efficacy of an intervention based on the findings of the first part of the research project. Specifically, it sought to:
• develop an educational driveway intervention that is based on current safety behaviours in the domestic setting and informed by existing knowledge of driveway safety and behaviour change theory; and
• evaluate its efficacy in a sample of parents and caregivers.
Abstract:
Cultured limbal tissue transplants have become widely used over the last decade as a treatment for limbal stem cell deficiency (LSCD). While the number of patients afflicted with LSCD in Australia and New Zealand is considered to be relatively low, the impact of this disease on quality of life is so severe that the potential efficacy of cultured transplants has necessitated investigation. We presently review the basic biology and experimental strategies associated with the use of cultured limbal tissue transplants in Australia and New Zealand. In doing so, we aim to encourage informed discussion on the issues required to advance the use of cultured limbal transplants in Australia and New Zealand. Moreover, we propose that a collaborative network could be established to maintain access to the technology in conjunction with a number of other existing and emerging treatments for eye diseases.
Abstract:
A planner’s view of the purpose of their actions, the role they play, the focus of their work and in whose interest they operate greatly influence their approach to planning and the outcome of their work. However, there is no common, established understanding of these themes within the profession. Contemporary planning theory, practice and education are characterised by the parallel existence of multiple, often contradictory schools of thought. What values and perspectives are held by the next generation of planning professionals as they emerge from contemporary planning programs? This preliminary investigation seeks to identify the views and perspectives of early career planners on the purpose and role of planning, the degree to which planning is oriented to the future, and the nature of the public interest, using various schools of planning thought as a thematic framework. In the current phase of a larger project, current students and recent graduates from planning courses at three Queensland universities were surveyed electronically to ascertain their views, with plans to undertake a broader study of similar populations across Australia. Within the current pilot, students and graduates did not identify strongly with a single school of planning thought, but favoured contrasting rational and collaborative definitions of the role and purpose of planning and the public interest, and pragmatic concepts of partial knowledge of the future and the value of experience in managing present issues.
Abstract:
Australian higher education institutions (HEIs) have entered a new phase of regulation and accreditation which includes performance-based funding relating to the participation and retention of students from social and cultural groups previously underrepresented in higher education. However, in addressing these priorities, it is critical that HEIs do not further disadvantage students from certain groups by singling them out for attention because of their social or cultural backgrounds, circumstances which are largely beyond the control of students. In response, many HEIs are focusing effort on university-wide approaches to enhancing the student experience, because such approaches will enhance the engagement, success and retention of all students and, in doing so, particularly benefit those students who come from underrepresented groups. Measuring and benchmarking student experiences and engagement arising from these efforts is well supported by extensive collections of student experience survey data. However, no comparable instrument exists that measures the capability of institutions to influence and/or enhance student experiences, where capability is an indication of how well an organisational process does what it is designed to do (Rosemann & de Bruin, 2005). We have proposed that the concept of a maturity model (Marshall, 2010; Paulk, 1999) may be useful as a way of assessing the capability of HEIs to provide and implement student engagement, success and retention activities, and we are currently articulating a Student Engagement, Success and Retention Maturity Model (SESR-MM) (Clarke, Nelson & Stoodley, 2012; Nelson, Clarke & Stoodley, 2012). Our research addresses the current gap by facilitating the development of an SESR-MM instrument that aims (i) to enable institutions to assess the capability of their current student engagement and retention programs and strategies to influence and respond to student experiences within the institution; and (ii) to provide institutions with the opportunity to understand various practices across the sector with a view to further improving programs and practices relevant to their context. Our research extends the generational approach which has been useful in considering the evolutionary nature of the first year experience (FYE) (Wilson, 2009). Three generations have been identified and explored: first generation approaches that focus on co-curricular strategies (e.g. orientation and peer programs); second generation approaches that focus on curriculum (e.g. pedagogy, curriculum design, and learning and teaching practice); and third generation approaches, also referred to as transition pedagogy, that focus on the production of an institution-wide, integrated, holistic and intentional blend of curricular and co-curricular activities (Kift, Nelson & Clarke, 2010). Our research also moves beyond assessments of students’ experiences to focus on assessing institutional processes and their capability to influence student engagement. In essence, we propose to develop and use the maturity model concept to produce an instrument that will indicate the capability of HEIs to manage and improve student engagement, success and retention programs and strategies.
The issues explored in this workshop are: (i) whether the maturity model concept can be usefully applied to provide a measure of institutional capability for SESR; (ii) whether the SESR-MM can be used to assess the maturity of a particular set of institutional practices; and (iii) whether a collective assessment of an institution’s SESR capabilities can provide an indication of the maturity of the institution’s SESR activities. The workshop will be approached in three stages. Firstly, participants will be introduced to the key characteristics of maturity models, followed by a discussion of the SESR-MM and the processes involved in its development. Secondly, participants will be provided with resources to facilitate the development of a maturity model and an assessment instrument for a range of institutional processes and related practices. In the final stage of the workshop, participants will “assess” the capability of these practices to provide a collective assessment of the maturity of these processes.
References
Australian Council for Educational Research. (n.d.). Australasian Survey of Student Engagement. Retrieved from http://www.acer.edu.au/research/ausse/background
Clarke, J., Nelson, K., & Stoodley, I. (2012, July). The Maturity Model concept as framework for assessing the capability of higher education institutions to address student engagement, success and retention: New horizon or false dawn? A Nuts & Bolts presentation at the 15th International Conference on the First Year in Higher Education, “New Horizons,” Brisbane, Australia.
Department of Education, Employment and Workplace Relations. (n.d.). The University Experience Survey. Advancing quality in higher education information sheet. Retrieved from http://www.deewr.gov.au/HigherEducation/Policy/Documents/University_Experience_Survey.pdf
Kift, S., Nelson, K., & Clarke, J. (2010). Transition pedagogy: A third generation approach to FYE. A case study of policy and practice for the higher education sector. The International Journal of the First Year in Higher Education, 1(1), 1-20.
Marshall, S. (2010). A quality framework for continuous improvement of e-Learning: The e-Learning Maturity Model. Journal of Distance Education, 24(1), 143-166.
Nelson, K., Clarke, J., & Stoodley, I. (2012). An exploration of the Maturity Model concept as a vehicle for higher education institutions to assess their capability to address student engagement: A work in progress. Submitted for publication.
Paulk, M. (1999). Using the Software CMM with good judgment. ASQ Software Quality Professional, 1(3), 19-29.
Wilson, K. (2009, June–July). The impact of institutional, programmatic and personal interventions on an effective and sustainable first-year student experience. Keynote address presented at the 12th Pacific Rim First Year in Higher Education Conference, “Preparing for Tomorrow Today: The First Year as Foundation,” Townsville, Australia. Retrieved from http://www.fyhe.com.au/past_papers/papers09/ppts/Keithia_Wilson_paper.pdf
Abstract:
The Chinese Emergency Medicine System is primarily composed of three sectors: prehospital care, the emergency department in a city hospital, and the intensive care unit ward. While all sectors are integral to the system, the prehospital care system is less developed than the others. There are many possible contributors to the under-development of the prehospital care system; however, workforce issues may play a significant role. Firstly, there is no officially recognised paramedic profession in China. The staff members working in the prehospital care system are medical doctors, registered nurses, patient-carriers, and drivers. Secondly, these doctors and nurses are either over-qualified or under-qualified for practice in the prehospital care system. Lastly, Chinese health professionals have taken action to improve the current workforce status with initiatives such as short-term training workshops for doctors and nurses, implementation of a trial unit in a university, and development of a Major Degree of Emergency Medicine in a medical university. All of these actions are important steps toward improving the current workforce status in the prehospital care system. However, a long-term workforce development plan is still essential for the Chinese system, and implementation of a professional paramedic education system in a medical university/college in China may provide the solution. Keywords: China; emergency medicine system; health services; prehospital care system; workforce; service delivery
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though regarded only as a temporary measure in the past, these systems are now considered the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series of reports to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research undertaken was to relate the treatment performance of onsite sewage treatment systems to the soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation of research brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks remain an issue to be resolved in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number of these systems do not perform to stipulated standards, and quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is highly dependent on individual householder operation and maintenance practices.
In recent years the use of biofilters, and particularly the use of peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. This is an issue that needs further investigation, and as such biofilters can still be considered to be at an experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. It is therefore important that the soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferred soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective for effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent. It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This is the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. Despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern lies. Greywater, too, requires similar consideration. Surface irrigation of greywater is currently being permitted in a number of local authority jurisdictions in Queensland. Considering the fact that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease.
This is an issue of concern, as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subject to stringent guidelines. Under these circumstances the surface application of any wastewater requires careful consideration. The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions. As such, their applicability is location specific. Also, the design of systems based solely on evapotranspiration is questionable. In order to ensure more reliability, the systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics. Secondly, the mechanisms of clogging mat formation have been found to be influenced by various physical, chemical and biological processes. Biological clogging is the most common process taking place and occurs due to bacterial growth or its by-products reducing the soil pore diameters. Biological clogging is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process. This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as it can lead to a significant reduction in the infiltration rate. This in fact is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration rest intervals are contradictory.
It has been claimed that intermittent rest periods would result in the aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short-duration rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products that form as a result of aerobic processes would in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer, in the range of about six months. This entails the provision of a second, alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial rather than parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface has been shown to be of only short-term benefit. Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat. It has also been found that the nature of the suspended solids is an important factor. The finer particles from extended aeration systems, compared with those from septic tanks, penetrate deeper into the soil and hence ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies. It has also been shown that effluent quality may be a factor in the case of highly permeable soils, but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, leading to environmental and public health impacts. Significant microbial contamination of surface water and groundwater has been attributed to septic tank effluent. There are a number of documented instances of septic tank related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A, as no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms. The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable density of septic tanks in a given area.
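The "dilution by groundwater" argument above can be illustrated with a simple annual nitrogen mass balance that converts a target nitrate-nitrogen concentration into a maximum septic-system density; every parameter value in the sketch is an assumption for illustration, not data from the report.

```python
# Simple annual nitrate mass balance for septic-system density, assuming
# complete mixing of the nitrogen load into local groundwater recharge.
# All parameter values are illustrative assumptions only.

N_LOAD_KG_PER_SYSTEM = 25.0     # assumed nitrogen load reaching groundwater per system per year
RECHARGE_MM_PER_YEAR = 300.0    # assumed net groundwater recharge
TARGET_NO3_N_MG_PER_L = 10.0    # common drinking-water guideline for nitrate-N

def max_systems_per_hectare():
    recharge_l_per_ha = RECHARGE_MM_PER_YEAR * 10_000  # 1 mm over 1 ha = 10,000 L
    allowable_n_mg = TARGET_NO3_N_MG_PER_L * recharge_l_per_ha
    return allowable_n_mg / (N_LOAD_KG_PER_SYSTEM * 1e6)  # convert kg to mg

print(f"{max_systems_per_hectare():.2f} systems per hectare")
```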
Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems, particularly where saturated conditions persist under the soil absorption bed or where surface runoff of effluent occurs as a result of system failure. Soils also have a finite capacity for the removal of phosphorus; once this capacity is exceeded, phosphorus too will seep into the groundwater, and the relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important to ensure not only that the system design is based on subsurface conditions, but also that the density of these systems in a given area is controlled. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which in turn would dictate the type of effluent disposal method to be adopted (the most-limiting-factor logic is sketched below).
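The following sketch shows one way the most-limiting-factor rule described above could be expressed. The factor names, the rating scale and the mapping from overall class to disposal method are hypothetical placeholders, not classifications defined in this report.

# Hedged sketch of a most-limiting-factor capability assessment.
# Factor names, ratings (1 = slight limitation ... 4 = severe) and the
# class-to-method mapping are hypothetical, for illustration only.
site_ratings = {
    "depth_to_water_table": 2,
    "soil_permeability": 3,
    "slope": 1,
    "flooding_risk": 2,
}

def capability_class(ratings):
    """Overall capability class is set by the single most limiting factor."""
    return max(ratings.values())

disposal_options = {
    1: "conventional subsurface absorption trenches",
    2: "modified absorption system (e.g. raised or widened bed)",
    3: "amended system with increased area and pretreatment",
    4: "subsurface disposal unsuitable; alternative disposal required",
}

overall = capability_class(site_ratings)
print("Overall capability class:", overall)
print("Indicative disposal option:", disposal_options[overall])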
Resumo:
Background
The onsite treatment of sewage and effluent disposal is widely prevalent in rural and urban fringe areas due to the general unavailability of reticulated wastewater collection systems. Despite the low technology of these systems, failure is common, in many cases leading to adverse public health and environmental consequences. It is therefore important that careful consideration is given to the design and location of onsite sewage treatment systems, which requires an understanding of the factors that influence treatment performance. The use of subsurface absorption systems is the most common form of effluent disposal for onsite sewage treatment, particularly for septic tanks. In the case of septic tanks, a subsurface disposal system is generally an integral component of the sewage treatment process, and site specific factors play a key role in the onsite treatment of sewage.
The project
The primary aims of the research project were:
• to relate treatment performance of onsite sewage treatment systems to soil conditions at the site;
• to evaluate current research relating to onsite sewage treatment; and
• to identify key issues where there is currently a lack of relevant research.
These tasks were undertaken with the objective of facilitating the development of performance based planning and management strategies for onsite sewage treatment. The primary focus of this research project has been on septic tanks, and by implication the investigation has been confined to subsurface soil absorption systems. The design and treatment processes taking place within the septic tank chamber itself did not form part of the investigation. Five broad categories of soil types prevalent in the Brisbane region have been considered in this project. The number of systems investigated was based on the proportionate area of urban development within the Brisbane region located on each of the different soil types. In the initial phase of the investigation, the majority of the systems evaluated were septic tanks; however, a small number of aerobic wastewater treatment systems (AWTS) were also included. The primary aim was to compare the effluent quality of systems employing different generic treatment processes. It is important to note that the number of each type of system investigated was relatively small. Consequently, the results obtained do not permit a statistical comparison between the different systems. This is an important limitation considering the large number of soil physico-chemical parameters and landscape factors that can influence treatment performance, and their wide variability.
The report
This report is the last in a series of three reports focussing on the performance evaluation of onsite treatment of sewage. The research project was initiated at the request of the Brisbane City Council. The project component discussed in the current report outlines the detailed soil investigations undertaken at a selected number of sites. In the initial field sampling, a number of soil chemical properties were assessed as indicators to investigate the extent of effluent flow and to help understand which soil factors renovate the applied effluent. The soil profile attributes, especially texture, structure and moisture regime, were examined more in an engineering sense, to assess the movement of water into and through the soil.
It is important to note that it is not only the physical characteristics, but also the chemical characteristics of the soil, together with landscape factors, that play a key role in the effluent renovation process. In order to understand the complex processes taking place in a subsurface effluent disposal area, influential parameters were identified using soil chemical concepts. Accordingly, the primary focus of this final phase of the research project was to identify linkages between various soil chemical parameters and landscape patterns, and their contribution to the effluent renovation process. The research outcomes will contribute to the development of robust criteria for evaluating the performance of subsurface effluent disposal systems.
The outcomes
The key findings from the soil investigations undertaken are:
• Effluent renovation is primarily undertaken by a combination of various soil physico-chemical parameters and landscape factors, making the effluent renovation process strongly site dependent.
• Decisions regarding site suitability for effluent disposal should not be based purely on the soil type. A number of other factors, such as the site location in the catena, the drainage characteristics and other physical and chemical characteristics, also exert a strong influence on site suitability.
• Sites that are difficult to characterise in terms of suitability for effluent disposal will require a detailed soil physical and chemical analysis to a minimum depth of at least 1.2 m.
• The Ca:Mg ratio and the Exchangeable Sodium Percentage (ESP) are important parameters in soil suitability assessment. A Ca:Mg ratio of less than 0.5 would generally indicate a high ESP, which in turn would mean that Na, and possibly Mg, are the dominant exchangeable cations, leading to probable clay dispersion (a minimal illustrative calculation is sketched after this list).
• A Ca:Mg ratio greater than 0.5 would generally indicate a low ESP in the profile, which in turn indicates increased soil stability.
• In soils with a higher clay percentage, even a comparatively low ESP can have a significant effect.
• The presence of high exchangeable Na can be counteracted by the presence of swelling clays and an exchange complex co-dominated by exchangeable Ca and exchangeable Mg. This aids adsorption of cations at depth, thereby reducing the likelihood of dispersion.
• Salt is continually added to the soil by the effluent, and problems may arise if the added salts accumulate to a concentration that is harmful to the soil structure. Under such conditions, good drainage is essential in order to allow continuous movement of water and salt through the profile. Therefore, a sustainable site will have a maximum allowable effluent application rate, dependent on subsurface characteristics and the surface area available for effluent disposal.
• The dosing regime for effluent disposal can play a significant role in preventing salt accumulation at poorly draining sites. Although intermittent dosing was not considered satisfactory for the removal of the clogging mat layer, it has positive attributes in the context of removing accumulated salts from the soil.
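The following sketch illustrates how the Ca:Mg ratio and ESP screening described above could be computed from exchangeable cation data. The standard definition ESP = exchangeable Na / CEC x 100 is assumed, the cation values are hypothetical, and the 0.5 threshold is simply the indicative figure quoted in the findings; none of this code comes from the report itself.

# Hedged sketch: Ca:Mg ratio and ESP screening from exchangeable cation data.
# Cation values (cmol(+)/kg) are hypothetical; the 0.5 Ca:Mg threshold is the
# indicative figure quoted in the findings above.
exchangeable = {"Ca": 2.1, "Mg": 5.4, "Na": 1.8, "K": 0.4}  # cmol(+)/kg

def ca_mg_ratio(cations):
    return cations["Ca"] / cations["Mg"]

def esp(cations):
    """Exchangeable Sodium Percentage, using the sum of exchangeable cations
    as an approximation of the cation exchange capacity (an assumption)."""
    cec = sum(cations.values())
    return 100.0 * cations["Na"] / cec

ratio = ca_mg_ratio(exchangeable)
sodium_pct = esp(exchangeable)
dispersive = ratio < 0.5  # indicative screening threshold from the findings
print(f"Ca:Mg ratio = {ratio:.2f}, ESP = {sodium_pct:.1f}%")
print("Probable clay dispersion risk" if dispersive else "Profile likely stable")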