Abstract:
Background Expectations held by patients and health professionals may affect treatment choices and participation (by both patients and health professionals) in therapeutic interventions in contemporary patient-centered healthcare environments. If patients in rehabilitation settings overestimate their discharge health-related quality of life, they may become despondent as their progress falls short of their expectations. On the other hand, underestimating their discharge health-related quality of life may lead to a lack of motivation to participate in therapies if they do not perceive a likely benefit. There is a scarcity of empirical evidence evaluating whether patients' expectations of future health states are accurate. The purpose of this study is to evaluate the accuracy with which older patients admitted for subacute in-hospital rehabilitation can anticipate their discharge health-related quality of life. Methods A prospective longitudinal cohort investigation of agreement between patients' anticipated discharge health-related quality of life (as reported on the EQ-5D instrument at admission to a rehabilitation unit) and their actual self-reported health-related quality of life at the time of discharge from this unit was undertaken. The mini-mental state examination was used as an indicator of patients' cognitive ability. Results Overall, 232 (85%) patients had all assessment data completed and were included in the analysis. Kappa scores ranged from 0.42 to 0.68 across the five EQ-5D domains and two patient cognition groups. The percentage of exact matches within each domain ranged from 69% to 85% across domains and cognition groups. Overall, 40% of participants in each cognition group correctly anticipated all of their self-reported discharge EQ-5D domain responses. Conclusions Patients admitted for subacute in-hospital rehabilitation were able to anticipate their discharge health-related quality of life on the EQ-5D instrument with a moderate level of accuracy.
This finding adds to the foundational empirical work supporting joint treatment decision making and patient-centered models of care during rehabilitation following acute illness or injury. Accurate patient expectations of the impact of treatment (or disease progression) on future health-related quality of life are likely to allow patients and health professionals to successfully target interventions at priority areas where meaningful gains can be achieved.
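The agreement statistics this abstract reports (kappa scores and exact-match percentages) follow from standard formulas. A minimal sketch of Cohen's kappa is shown below; the EQ-5D domain responses here are invented for illustration and are not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two sets of ratings."""
    n = len(rater_a)
    # Observed agreement: fraction of exact matches.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two ratings were independent.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical anticipated vs. actual EQ-5D domain levels (1-3 scale).
anticipated = [1, 2, 2, 1, 3, 2, 1, 1, 2, 3]
actual      = [1, 2, 1, 1, 3, 2, 2, 1, 2, 3]
print(round(cohens_kappa(anticipated, actual), 2))  # → 0.69
```

A kappa near 0.7, as in this toy example, would sit at the upper end of the 0.42 to 0.68 range the study reports, conventionally described as moderate-to-substantial agreement.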
Abstract:
Neighbourhood liveability is usually measured either by subjective indicators, using surveys of residents’ perceptions, or by objective means, using secondary data or relative weights for objective indicators of the urban environment. Rarely have objective and subjective indicators been related to one another in order to understand what constitutes a liveable urban neighbourhood both spatially and behaviourally. This paper explores the use of qualitative (diaries, in-depth interviews) and quantitative (Global Positioning Systems tracking, Geographical Information Systems mapping) liveability research data to examine the perceptions and behaviour of 12 older residents living in six high-density urban areas of Brisbane. Older urban Australians are one of the two principal groups highly attracted to high-density urban living. The strength of the relationship between the qualitative and quantitative measures was examined. The results indicate a weak relationship between subjective and objective indicators. Linking the two methods (quantitative and qualitative) is important in obtaining a greater understanding of human behaviour and the lived world of older urban Australians and in providing a wider picture of the urban neighbourhood.
Abstract:
Prior studies linking performance management systems (PMS) and organisational justice have examined how PMS influence procedural fairness. Our investigation differs from these studies. First, it examines fairness as an antecedent (instead of as a consequence) of the choice of PMS. Second, instead of conceptualising organisational fairness as procedural fairness, it relies on the impression management interpretation of organisational fairness. Hence, the study investigates how the need of senior managers to cultivate an impression of being fair is related to the choice of PMS and to employee outcomes. Based on a sample of 276 employees, the results indicate that the need of senior management to cultivate an impression of being fair is associated with employee performance. They also indicate that a substantial component of these effects is indirect, through the choice of comprehensive performance measures (CPM) and employee job satisfaction. These findings highlight the importance of organisational concern for workplace fairness as an antecedent of the choice of CPM. From a theoretical perspective, the adoption of the impression management interpretation of organisational fairness contributes by providing new insights into the relationship between fairness and choice of PMS from a perspective that is different from those used in prior management accounting research.
Abstract:
Australian higher education institutions (HEIs) have entered a new phase of regulation and accreditation which includes performance-based funding relating to the participation and retention of students from social and cultural groups previously underrepresented in higher education. However, in addressing these priorities, it is critical that HEIs do not further disadvantage students from certain groups by identifying them for attention because of their social or cultural backgrounds, circumstances which are largely beyond the control of students. In response, many HEIs are focusing effort on university-wide approaches to enhancing the student experience because such approaches will enhance the engagement, success and retention of all students, and in doing so, particularly benefit those students who come from underrepresented groups. Measuring and benchmarking student experiences and engagement that arise from these efforts is well supported by extensive collections of student experience survey data. However, no comparable instrument exists that measures the capability of institutions to influence and/or enhance student experiences, where capability is an indication of how well an organisational process does what it is designed to do (Rosemann & de Bruin, 2005). We have proposed that the concept of a maturity model (Marshall, 2010; Paulk, 1999) may be useful as a way of assessing the capability of HEIs to provide and implement student engagement, success and retention activities, and we are currently articulating a Student Engagement, Success and Retention Maturity Model (SESR-MM) (Clarke, Nelson & Stoodley, 2012; Nelson, Clarke & Stoodley, 2012).
Our research aims to address the current gap by facilitating the development of an SESR-MM instrument that aims (i) to enable institutions to assess the capability of their current student engagement and retention programs and strategies to influence and respond to student experiences within the institution; and (ii) to provide institutions with the opportunity to understand various practices across the sector with a view to further improving programs and practices relevant to their context. Our research extends the generational approach which has been useful in considering the evolutionary nature of the first year experience (FYE) (Wilson, 2009). Three generations have been identified and explored: first generation approaches, which focus on co-curricular strategies (e.g. orientation and peer programs); second generation approaches, which focus on curriculum (e.g. pedagogy, curriculum design, and learning and teaching practice); and third generation approaches—also referred to as transition pedagogy—which focus on the production of an institution-wide integrated holistic intentional blend of curricular and co-curricular activities (Kift, Nelson & Clarke, 2010). Our research also moves beyond assessments of students’ experiences to focus on assessing institutional processes and their capability to influence student engagement. In essence, we propose to develop and use the maturity model concept to produce an instrument that will indicate the capability of HEIs to manage and improve student engagement, success and retention programs and strategies. The issues explored in this workshop are (i) whether the maturity model concept can be usefully applied to provide a measure of institutional capability for SESR; (ii) whether the SESR-MM can be used to assess the maturity of a particular set of institutional practices; and (iii) whether a collective assessment of an institution’s SESR capabilities can provide an indication of the maturity of the institution’s SESR activities.
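The collective-assessment idea described above (practice-level ratings rolled up into process-level and institution-level maturity) can be sketched in a few lines. This is a minimal illustration assuming a simple 1-5 rating scale and mean aggregation; the process names, practices and numbers are invented and are not part of the SESR-MM itself:

```python
from statistics import mean

# Hypothetical practice-level maturity ratings (1 = initial ... 5 = optimised),
# grouped by institutional process. Names and values are invented.
processes = {
    "transition_support": {"orientation": 4, "peer_mentoring": 3},
    "curriculum": {"design_review": 2, "assessment_feedback": 3},
}

# Each process's maturity is the mean of its practice ratings;
# institutional maturity aggregates the process-level scores.
process_maturity = {name: mean(ratings.values()) for name, ratings in processes.items()}
institutional_maturity = mean(process_maturity.values())
print(process_maturity, institutional_maturity)
```

Mean aggregation is only one possible design choice; a capability-style model could equally take the minimum ("weakest link") of the practice ratings.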
The workshop will be approached in three stages. Firstly, participants will be introduced to the key characteristics of maturity models, followed by a discussion of the SESR-MM and the processes involved in its development. Secondly, participants will be provided with resources to facilitate the development of a maturity model and an assessment instrument for a range of institutional processes and related practices. In the final stage of the workshop, participants will “assess” the capability of these practices to provide a collective assessment of the maturity of these processes.
References
Australian Council for Educational Research. (n.d.). Australasian Survey of Student Engagement. Retrieved from http://www.acer.edu.au/research/ausse/background
Clarke, J., Nelson, K., & Stoodley, I. (2012, July). The Maturity Model concept as framework for assessing the capability of higher education institutions to address student engagement, success and retention: New horizon or false dawn? A Nuts & Bolts presentation at the 15th International Conference on the First Year in Higher Education, “New Horizons,” Brisbane, Australia.
Department of Education, Employment and Workplace Relations. (n.d.). The University Experience Survey. Advancing quality in higher education information sheet. Retrieved from http://www.deewr.gov.au/HigherEducation/Policy/Documents/University_Experience_Survey.pdf
Kift, S., Nelson, K., & Clarke, J. (2010). Transition pedagogy: A third generation approach to FYE: A case study of policy and practice for the higher education sector. The International Journal of the First Year in Higher Education, 1(1), 1-20.
Marshall, S. (2010). A quality framework for continuous improvement of e-Learning: The e-Learning Maturity Model. Journal of Distance Education, 24(1), 143-166.
Nelson, K., Clarke, J., & Stoodley, I. (2012). An exploration of the Maturity Model concept as a vehicle for higher education institutions to assess their capability to address student engagement: A work in progress. Submitted for publication.
Paulk, M. (1999). Using the Software CMM with good judgment. ASQ Software Quality Professional, 1(3), 19-29.
Wilson, K. (2009, June–July). The impact of institutional, programmatic and personal interventions on an effective and sustainable first-year student experience. Keynote address presented at the 12th Pacific Rim First Year in Higher Education Conference, “Preparing for Tomorrow Today: The First Year as Foundation,” Townsville, Australia. Retrieved from http://www.fyhe.com.au/past_papers/papers09/ppts/Keithia_Wilson_paper.pdf
Abstract:
Climate change presents risks to health that must be addressed by both decision-makers and public health researchers. Within the application of Environmental Health Impact Assessment (EHIA), there have been few attempts to incorporate climate change-related health risks as an input to the framework. This study used a focus group design to examine the perceptions of government, industry and academic specialists about the suitability of assessing the health consequences of climate change within an EHIA framework. Practitioners expressed concern over a number of factors relating to the current EHIA methodology and the inclusion of climate change-related health risks. These concerns related to the broad scope of issues that would need to be considered, problems with identifying appropriate health indicators, the lack of relevant qualitative information that is currently incorporated in assessment and persistent issues surrounding stakeholder participation. It was suggested that improvements are needed in data collection processes, particularly in terms of adequate communication between environmental and health practitioners. Concerns were raised surrounding data privacy and usage, and how these could impact on the assessment process. These findings may provide guidance for government and industry bodies to improve the assessment of climate change-related health risks.
Abstract:
Exposures to traffic-related air pollution (TRAP) can be particularly high in transport microenvironments (i.e. in and around vehicles) despite the short durations typically spent there. There is a mounting body of evidence that suggests that this is especially true for fine (<2.5 μm) and ultrafine (<100 nm, UF) particles. Professional drivers, who spend extended periods of time in transport microenvironments due to their job, may incur exposures markedly higher than already elevated non-occupational exposures. Numerous epidemiological studies have shown a raised incidence of adverse health outcomes among professional drivers, and exposure to TRAP has been suggested as one of the possible causal factors. Despite this, data describing the range and determinants of occupational exposures to fine and UF particles are largely conspicuous in their absence. Such information could strengthen attempts to define the aetiology of professional drivers' illnesses as it relates to traffic combustion-derived particles. In this article, we suggest that drivers' occupational fine and UF particle exposures are an exemplar case where opportunities exist to better link exposure science and epidemiology in addressing questions of causality. The nature of the hazard is first introduced, followed by an overview of the health effects attributable to exposures typical of transport microenvironments. Basic determinants of exposure and reduction strategies are also described, and finally the state of knowledge is briefly summarised along with an outline of the main unanswered questions in the topic area.
Abstract:
While the justice implications of climate change are well understood by the international climate regime, solutions to meaningfully address climate injustice are still emerging. This article explores how a number of different theories of justice have influenced the development of international climate regime policies and measures. Such analysis is undertaken by examining the theories of remedial justice, environmental justice, energy justice, social justice and international justice. This article demonstrates how each of these theories has influenced the development of international climate policies or measures. No one theory of justice has the ability to respond to the multifaceted justice implications that arise as a result of climate change. It is argued that a variety of lenses of justice are useful when examining issues of injustice in the climate context. It is believed that articulating the justice implications of climate change by reference to theories of justice assists in clarifying the key issues giving rise to injustice. This article finds that while there has been some progress by the regime in recognising the injustices associated with climate change, such recognition is piecemeal and the implementation of many of the policies and measures discussed within this article needs to be either scaled up, or extended into more far-reaching policies and measures to overcome climate justice concerns. Overall it is suggested that climate justice concerns need to be clearly enunciated within key adaptation instruments so as to provide a legal and legitimate basis upon which to leverage action.
Abstract:
There is a notable shortage of empirical research directed at measuring the magnitude and direction of stress effects on performance in a controlled environment. One reason for this is the inherent difficulty in identifying and isolating direct performance measures for individuals. Additionally, most traditional work environments contain a multitude of exogenous factors impacting individual performance, but controlling for all such factors is generally unfeasible (omitted variable bias). Moreover, instead of asking individuals about their self-reported stress levels, we observe workers’ behaviour in situations that can be classified as stressful. For this reason, we have stepped outside the traditional workplace in an attempt to gain greater controllability of these factors, using the sports environment as our experimental space. We empirically investigate the relationship between stress and performance in an extreme-pressure situation (football penalty kicks) in a winner-takes-all sporting environment (FIFA World Cup and UEFA European Cup competitions). Specifically, we examine all the penalty shootouts between 1976 and 2008, covering 16 events in total. The results indicate that extreme stressors can have a positive or negative impact on individuals’ performance. On the other hand, more commonly experienced stressors do not affect professionals’ performances.
Abstract:
Objective: To assess the symptoms of heat illness experienced by surface mine workers. Methods: Ninety-one surface mine workers across three mine sites in northern Australia completed a heat stress questionnaire evaluating their symptoms for heat illness. A cohort of 56 underground mine workers also participated for comparative purposes. Participants were allocated into asymptomatic, minor or moderate heat illness categories depending on the number of symptoms they reported. Participants also reported the frequency of symptom experience, as well as their hydration status (average urine colour). Results: Heat illness symptoms were experienced by 87 and 79 % of surface and underground mine workers, respectively (p = 0.189), with 81–82 % of the symptoms reported being experienced by miners on more than one occasion. The majority (56 %) of surface workers were classified as experiencing minor heat illness symptoms, with a further 31 % classed as moderate; 13 % were asymptomatic. A similar distribution of heat illness classification was observed among underground miners (p = 0.420). Only 29 % of surface miners were considered well hydrated, with 61 % minimally dehydrated and 10 % significantly dehydrated, proportions that were similar among underground miners (p = 0.186). Heat illness category was significantly related to hydration status (p = 0.039) among surface mine workers, but only a trend was observed when data from surface and underground miners were pooled (p = 0.073). Compared to asymptomatic surface mine workers, the relative risk of experiencing minor and moderate symptoms of heat illness was 1.5 and 1.6, respectively, when minimally dehydrated. Conclusions: These findings show that surface mine workers routinely experience symptoms of heat illness and highlight that control measures are required to prevent symptoms progressing to medical cases of heat exhaustion or heat stroke.
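The relative risks of 1.5 and 1.6 quoted above are simple risk ratios: the incidence of the outcome among the exposed divided by the incidence among the unexposed. A sketch of the calculation follows; the 2×2 counts are invented for illustration and are not the study's data:

```python
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Risk ratio: incidence among the exposed divided by incidence among the unexposed."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Hypothetical counts: minimally dehydrated vs. well-hydrated workers
# reporting minor heat-illness symptoms (numbers invented for illustration).
rr = relative_risk(cases_exposed=30, n_exposed=50, cases_unexposed=10, n_unexposed=25)
print(round(rr, 2))  # 0.60 / 0.40 → 1.5
```

A risk ratio above 1 indicates the exposure (here, dehydration status) is associated with a higher incidence of the outcome.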
Abstract:
Introduction: Sleepiness contributes to a substantial proportion of fatal and severe road crashes. Efforts to reduce the incidence of sleep-related crashes have largely focussed on driver education to promote self-regulation of driving behaviour. However, effective self-regulation requires accurate self-perception of sleepiness. The aim of this study was to assess capacity to accurately identify sleepiness, and to self-regulate driving cessation, during a validated driving simulator task. Methods: Participants comprised 26 young adult drivers (20-28 years) who held open licenses. No other exclusion criteria were used. Participants were partially sleep deprived (05:00 wake-up) and completed a laboratory-based hazard perception driving simulation, counterbalanced to occur either mid-morning or mid-afternoon. Established physiological measures (i.e., EEG, EOG) and subjective measures (Karolinska Sleepiness Scale), previously found sensitive to changes in sleepiness levels, were utilised. Participants were instructed to ‘drive’ on the simulator until they believed that sleepiness had impaired their ability to drive safely. They were then offered a nap opportunity. Results: The mean duration of the drive before cessation was 36.1 minutes (±17.7 minutes). Subjective sleepiness increased significantly from the beginning (KSS=6.6±0.7) to the end (KSS=8.2±0.5) of the driving period. No significant differences were found for EEG spectral power measures of sleepiness (i.e., theta or alpha spectral power) from the start of the driving task to the point of cessation of driving. During the nap opportunity, 88% of the participants (23/26) were able to reach sleep onset, with an average latency of 9.9 minutes (±7.5 minutes). The average nap duration was 15.1 minutes (±8.1 minutes). Sleep architecture during the nap was predominantly comprised of Stages I and II (combined 92%).
Discussion: Participants reported high levels of sleepiness during daytime driving after very moderate sleep restriction. They were able to report increasing sleepiness during the test period despite no observed change in standard physiological indices of sleepiness. This increased subjective sleepiness had behavioural validity as the participants had high ‘napability’ at the point of driving cessation, with most achieving some degree of subsequent sleep. This study suggests that the nature of a safety instruction (i.e. how to view sleepiness) can be a determinant of driver behaviour.
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now regarded as the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research undertaken was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation of research brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality.
The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantage of multi-chamber over single-chamber septic tanks is an issue that needs to be resolved in view of the conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number of them do not perform to stipulated standards, and effluent quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices. In recent years the use of biofilters, particularly peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies.
This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent.
It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This is essentially the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. Despite all this, however, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern rests. Greywater, too, requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease. This is an issue of concern, as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subject to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration.
The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions, so their applicability is location specific. The design of systems based solely on evapotranspiration is also questionable; in order to ensure greater reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided that the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics. Secondly, the mechanisms of clogging mat formation have been found to be influenced by various physical, chemical and biological processes. Biological clogging is the most common process, occurring when bacterial growth or its by-products reduce the soil pore diameters; it is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process.
This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This is in fact the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short rest intervals are contradictory. It has been claimed that intermittent rest periods result in aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products formed by aerobic processes in fact lead to even more severe clogging. It has been further recommended that rest periods should be much longer, in the range of about six months, which entails the provision of a second, alternating seepage bed. Other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. Physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface have been shown to be of only short-term benefit.
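The claim that the clogged zone, rather than the native soil, governs long-term infiltration can be illustrated with Darcy's law for layers in series. This is a back-of-envelope sketch only: the head, thicknesses and conductivities are invented values chosen to show how a thin, low-conductivity mat dominates the total hydraulic resistance.

```python
def darcy_flux(head_m, layers):
    """Steady vertical Darcy flux through layers in series.

    layers: list of (thickness_m, hydraulic_conductivity_m_per_day) tuples.
    Each layer's resistance is thickness / conductivity (days); the flux is
    the total driving head divided by the total resistance.
    """
    total_resistance = sum(thickness / conductivity for thickness, conductivity in layers)
    return head_m / total_resistance  # m/day

# Invented values: 0.3 m of ponded head over 1 m of permeable soil,
# with and without a 1 cm clogging mat of much lower conductivity.
unclogged = darcy_flux(0.3, [(1.0, 0.5)])
clogged = darcy_flux(0.3, [(0.01, 0.001), (1.0, 0.5)])
print(round(unclogged, 3), round(clogged, 3))  # the mat cuts the flux roughly 6-fold
```

Under these assumed numbers the 1 cm mat contributes five times the resistance of the full metre of soil beneath it, which is the effect the text describes: the percolation-test conductivity of the native soil ceases to be the controlling parameter.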
Another important issue is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat, as is the nature of the suspended solids. The finer particles from extended aeration systems, when compared to those from septic tanks, penetrate deeper into the soil and hence ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies: it has also been shown that effluent quality may be a factor in the case of highly permeable soils, but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, leading to environmental and public health impacts. Significant microbial contamination of surface water and groundwater has been attributed to septic tank effluent, and there are a number of documented instances of septic tank-related waterborne disease outbreaks affecting large numbers of people. In one recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A because no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities to ensure the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms.
The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate, since conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels; based on subsurface conditions, this essentially entails a maximum allowable density of septic tanks in a given area. Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems, particularly where saturated conditions persist under the soil absorption bed or where effluent runs off at the surface as a result of system failure. Soils also have a finite capacity for the removal of phosphorus; once this capacity is exceeded, phosphorus too will seep into the groundwater, and the relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important to ensure not only that the system design is based on subsurface conditions, but also that the density of these systems in a given area is controlled. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would in turn dictate the type of effluent disposal method to be adopted.
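The point made above, that the long-term acceptance rate of a clogged bed is governed by the clogged zone rather than by the native soil’s hydraulic conductivity, can be sketched with a simple Darcy’s-law calculation. All parameter values below are hypothetical, chosen only to illustrate the order-of-magnitude contrast, not drawn from the study:

```python
# Illustrative sketch (hypothetical values): once a clogging mat forms,
# the acceptance rate is controlled by the mat, not the native soil.

def darcy_flux(k, gradient):
    """Darcy's law: flux q = K * i, with K in m/day and i dimensionless."""
    return k * gradient

# Hypothetical parameters
k_soil = 1.0          # native soil hydraulic conductivity, m/day
k_mat = 0.005         # clogging mat conductivity, m/day
mat_thickness = 0.02  # m
ponded_head = 0.05    # m of effluent ponded on the mat

# Hydraulic gradient across the thin mat (head loss over mat thickness)
gradient_mat = (ponded_head + mat_thickness) / mat_thickness

q_unclogged = darcy_flux(k_soil, 1.0)        # unit gradient, clean surface
q_clogged = darcy_flux(k_mat, gradient_mat)  # flux limited by the mat

print(q_unclogged, q_clogged)  # 1.0 m/day vs 0.0175 m/day
```

Even with a ponded head steepening the gradient across the mat, the flux is roughly two orders of magnitude below the clean-surface value, which is why designs based on the percolation test alone can overestimate long-term capacity.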
Abstract:
Background Hallux valgus (HV) is a very common deformity of the first metatarsophalangeal joint that often requires surgical correction. However, the association between structural HV deformity and related foot pain and disability is unclear. Furthermore, no previous studies have investigated concerns about appearance and difficulty with footwear in a population with HV not seeking surgical correction. The aim of this cross-sectional study was to investigate foot pain, functional limitation, concern about appearance and difficulty with footwear in otherwise healthy adults with HV compared to controls. Methods Thirty volunteers with HV (radiographic HV angle >15 degrees) and 30 matched controls were recruited for this study (50 women, 10 men; mean age 44.4 years, range 20 to 76 years). Differences between groups were examined for self-reported foot pain and disability, satisfaction with appearance, footwear difficulty, and pressure-pain threshold at the first metatarsophalangeal joint. Functional measures included balance tests, walking performance, and hallux muscle strength (abduction and plantarflexion). Mean differences (MD) and 95% confidence intervals (CI) were calculated. Results All self-report measures showed that HV was associated with higher levels of foot pain and disability, and with significant concerns about appearance and footwear (p < 0.001). A lower pressure-pain threshold was measured at the medial first metatarsophalangeal joint in participants with HV (MD = -133.3 kPa, CI: -251.5 to -15.1). Participants with HV also showed reduced hallux plantarflexion strength (MD = -37.1 N, CI: -55.4 to -18.8) and abduction strength (MD = -9.8 N, CI: -15.6 to -4.0), and increased mediolateral sway when standing on both feet with eyes closed (MD = 0.34 cm, CI: 0.04 to 0.63). Conclusions These findings show that HV negatively impacts on self-reported foot pain and function, and on concerns about foot appearance and footwear, in otherwise healthy adults.
There was also evidence of impaired hallux muscle strength and increased postural sway in participants with HV compared to controls, although general physical functioning and participation in physical activity were not adversely affected.
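The mean-difference and confidence-interval statistics reported in abstracts like this one can be reproduced in a few lines. The sketch below uses a normal-approximation (z = 1.96) interval with an unpooled standard error and entirely hypothetical strength data; published analyses typically use a t-distribution critical value instead:

```python
# Hedged sketch: two-group mean difference with a z-based 95% CI.
# The strength values below are hypothetical, not the study's data.
from math import sqrt
from statistics import mean, stdev

def mean_diff_ci(group_a, group_b, z=1.96):
    """MD = mean(a) - mean(b), with a normal-approximation 95% CI."""
    md = mean(group_a) - mean(group_b)
    se = sqrt(stdev(group_a) ** 2 / len(group_a)
              + stdev(group_b) ** 2 / len(group_b))
    return md, md - z * se, md + z * se

# Hypothetical hallux plantarflexion strength values (N)
hv = [60.0, 55.0, 70.0, 58.0, 62.0]
control = [95.0, 100.0, 92.0, 105.0, 98.0]

md, lo, hi = mean_diff_ci(hv, control)
print(round(md, 1), round(lo, 1), round(hi, 1))  # md = -37.0 for these data
```

A CI that excludes zero, as in the strength results above, is what licenses the claim of a group difference at the 5% level.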
Abstract:
Members of the World Trade Organisation (WTO) are obliged to implement the Agreement on Trade-related Intellectual Property Rights 1994 (TRIPS), which establishes minimum standards for the protection and enforcement of intellectual property rights. Almost two decades after TRIPS was adopted at the conclusion of the Uruguay Round of trade negotiations, it is widely accepted that intellectual property systems in developing and least-developed countries must be consistent with, and serve, their development needs and objectives. In adopting the Development Agenda in 2007, the World Intellectual Property Organisation (WIPO) emphasised the importance to developing and least-developed countries of being able to obtain access to knowledge and technology and to participate in collaborations and exchanges with research and scientific institutions in other countries. Access to knowledge, information and technology is crucial if creativity and innovation are to be fostered in developing and least-developed countries. It is particularly important that developing and least-developed countries give effect to their TRIPS obligations by implementing intellectual property systems and adopting intellectual property management practices that enable them to benefit from knowledge flows and support their engagement in international research and science collaborations. However, developing and least-developed countries did not participate in the deliberations leading to the adoption in 2004 by Organisation for Economic Co-operation and Development (OECD) member countries of the Ministerial Declaration on Access to Research Data from Public Funding, nor have they formulated policies on access to publicly funded research outputs such as those developed by the National Institutes of Health in the United States, the United Kingdom Research Councils or the Australian National Health and Medical Research Council.
These issues are considered from the viewpoint of Malaysia, a developing country whose economy has grown strongly in recent years. In the absence of an established policy covering access to the outputs of publicly funded research, data sharing and licensing practices remain fragmented, and obtaining access to research data requires arrangements to be negotiated with individual data owners and custodians. Given the potential for restrictions on access to impact negatively on scientific progress and development in Malaysia, measures are required to ensure that access to knowledge and research results is facilitated. This paper proposes a policy framework for Malaysia’s public research universities that recognises intellectual property rights while enabling the open access to research data that is essential for innovation and development. It also considers how intellectual property rights in research data can be managed in order to give effect to the policy’s open access objectives.
Abstract:
Noncompliance with speed limits is one of the major safety concerns in roadwork zones. Although numerous studies have attempted to evaluate the effectiveness of safety measures on speed limit compliance, many report inconsistent findings. This paper reviews the effectiveness of four categories of roadwork zone speed control measures: informational, physical, enforcement, and educational. While informational measures (static signage, variable message signage) have small to moderate effects on speed reduction, physical measures (rumble strips, optical speed bars) have been found ineffective for transient and moving work zones. Enforcement measures (speed cameras, police presence) have the greatest effects, while educational measures also have significant potential to improve public awareness of roadworker safety and to encourage slower speeds in work zones. Inadequate public understanding of roadwork risks and hazards, failure to notice signs, and poor appreciation of safety measures are the major causes of noncompliance with speed limits.
Abstract:
Purpose: Clinical studies suggest that foot pain may be problematic in one-third of patients in early disease. The Foot Health Status Questionnaire (FHSQ) was developed and validated to evaluate the effectiveness of conservative (orthoses, taping, stretching) and surgical interventions. However, few validated instruments measure foot health status in Spanish. Thus, the primary aim of the current study was to translate the FHSQ and evaluate the psychometric properties of the Spanish version. Methods: A cross-sectional study was conducted in a university community-based podiatric clinic in the south of Spain. All participants (n = 107), recruited consecutively, completed the Spanish versions of the FHSQ and the EuroQoL 5-dimension (EQ-5D) health questionnaire, and 29 participants repeated these measures 48 h later. Data analysis included test–retest reliability, construct and criterion-related validity, and factor analyses. Results: Construct validity was appropriate, with moderate-to-high corrected item–subscale correlations (α ≥ 0.739) for all subscales. Test–retest reliability was satisfactory (ICC > 0.932). Factor analysis revealed four dimensions explaining 86.6% of the common variance. The confirmatory factor analysis findings demonstrated that the proposed structure was well supported (comparative fit index = 0.92, standardized root mean square residual = 0.09). The Spanish EQ-5D score correlated negatively with FHSQ pain (r = −0.445) and positively with general foot health and function (r = 0.261–0.579), confirming criterion-related validity. Conclusion: The clinimetric properties of the Spanish version of the FHSQ were satisfactory.
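As an illustration of the kind of internal-consistency coefficient (α) reported above, the sketch below computes Cronbach's alpha with the standard formula. This is not the authors' analysis; the item scores are hypothetical and serve only to show the calculation:

```python
# Hedged sketch: Cronbach's alpha from per-item score lists.
# Item scores are hypothetical, not data from the FHSQ study.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item score lists, same respondents in each.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
    """
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Four hypothetical items scored by five respondents
items = [
    [3, 4, 5, 2, 4],
    [3, 5, 5, 2, 3],
    [4, 4, 5, 1, 4],
    [3, 5, 4, 2, 4],
]
print(round(cronbach_alpha(items), 3))  # prints 0.942
```

Values above roughly 0.7, like the α ≥ 0.739 reported for the FHSQ subscales, are conventionally taken to indicate acceptable internal consistency.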