Abstract:
Ratchetting failure of the railhead material adjacent to the endpost, which is placed in the air gap between the two rail ends at insulated rail joints, causes significant economic problems for railway operators, who rely on the proper functioning of these joints for train control using the signalling track circuitry. Ratchetting failure is a localised problem and is very difficult to predict even when complex analytical methods are employed. This paper presents a novel experimental technique that enables measurement of the progressive ratchetting. A special-purpose test rig was developed for this purpose and commissioned by the Centre for Railway Engineering at Central Queensland University. The rig also provides the capability of testing wheel/rail rolling contact conditions. The results provide confidence that accurate measurement of the localised failure of railhead material can be achieved using the test rig.
Abstract:
In the course of history, a large number of politicians have been assassinated. To investigate this phenomenon, rational choice hypotheses are developed and tested using a large data set covering close to 100 countries over a period of 20 years. Several strategies, in addition to security measures, are shown to significantly reduce the probability of politicians being attacked or killed: extended institutional and governance quality, democracy, voice and accountability, a well-functioning system of law and order, decentralization via the division of power and federalism, larger cabinet size and a stronger civil society. There is also support for a contagion effect.
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though regarded only as a temporary measure in the past, these systems are now considered the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series of reports to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research undertaken was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation of research brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality.
The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantage of multi-chamber over single-chamber septic tanks is an issue that needs to be resolved in view of the conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number of these systems do not perform to stipulated standards and quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. The other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems. These include intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices. In recent years the use of biofilters has attracted research interest, particularly the use of peat. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies.
This is an issue that needs further investigation and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that the soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area. This is due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent.
It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This essentially is the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. However, despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern rests. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering the fact that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease. This is an issue of concern, as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subjected to stringent guidelines. Under these circumstances the surface application of any wastewater requires careful consideration.
The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions. As such their applicability is location specific. Also, the design of systems based solely on evapotranspiration is questionable. In order to ensure more reliability, the systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics. Secondly, the mechanisms of clogging mat formation have been found to be influenced by various physical, chemical and biological processes. Biological clogging is the most common process and occurs when bacterial growth or its by-products reduce the soil pore diameters. Biological clogging is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process.
This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This in fact is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration rest intervals are contradictory. It has been claimed that intermittent rest periods result in the aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products that form as a result of aerobic processes in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer, in the range of about six months. This entails the provision of a second, alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface has been shown to be of only short-term benefit.
Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat. The nature of the suspended solids has also been found to be an important factor. The finer particles from extended aeration systems, when compared to those from septic tanks, will penetrate deeper into the soil and hence will ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies. These have shown that effluent quality may be a factor in the case of highly permeable soils, but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, as failure can lead to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent. There are a number of documented instances of septic tank related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A, as no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms.
The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area. Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed or due to surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus. Once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important to ensure not only that the system design is based on subsurface conditions, but also that the density of these systems in a given area is controlled, which is a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would also dictate the type of effluent disposal method to be adopted.
Abstract:
Background The onsite treatment of sewage and effluent disposal within the premises is widely prevalent in rural and urban fringe areas due to the general unavailability of reticulated wastewater collection systems. Despite the seemingly low technology of the systems, failure is common and in many cases leads to adverse public health and environmental consequences. Therefore it is important that careful consideration is given to the design and location of onsite sewage treatment systems. This requires an understanding of the factors that influence treatment performance. The use of subsurface effluent absorption systems is the most common form of effluent disposal for onsite sewage treatment, particularly for septic tanks. Additionally, in the case of septic tanks, a subsurface disposal system is generally an integral component of the sewage treatment process. Therefore location-specific factors will play a key role in this context. The project The primary aims of the research project are: • to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site; • to identify important areas where there is currently a lack of relevant research knowledge and a need for further investigation. These tasks were undertaken with the objective of facilitating the development of performance-based planning and management strategies for onsite sewage treatment. The primary focus of the research project has been on septic tanks. Therefore, by implication, the investigation has been confined to subsurface soil absorption systems. The design and treatment processes taking place within the septic tank chamber itself did not form a part of the investigation. In the evaluation to be undertaken, the treatment performance of soil absorption systems will be related to the physico-chemical characteristics of the soil. Five broad categories of soil types have been considered for this purpose.
The number of systems investigated was based on the proportionate area of urban development within the Brisbane region located on each soil type. In the initial phase of the investigation, though the majority of the systems evaluated were septic tanks, a small number of aerobic wastewater treatment systems (AWTS) were also included. This was primarily to compare the effluent quality of systems employing different generic treatment processes. It is important to note that the number of different types of systems investigated was relatively small. As such this does not permit a statistical analysis of the results obtained. This is an important issue considering the large number of parameters that can influence treatment performance and their wide variability. The report This report is the second in a series of three reports focussing on the performance evaluation of onsite treatment of sewage. The research project was initiated at the request of the Brisbane City Council. The work undertaken included site investigation and testing of sewage effluent and soil samples taken at distances of 1 and 3 m from the effluent disposal area. The project component discussed in the current report formed the basis for the more detailed investigation undertaken subsequently. The outcomes from the initial studies are discussed, which enabled the identification of factors to be investigated further. Primarily, this report contains the results of the field monitoring program, the initial analysis undertaken and preliminary conclusions. Field study and outcomes Initially commencing with a list of 252 locations in 17 different suburbs, a total of 22 sites in 21 different locations were monitored. These sites were selected based on predetermined criteria. Obtaining house owner agreement to participate in the monitoring study was not an easy task. Six of these sites had to be abandoned subsequently for various reasons.
The remaining sites included eight septic systems with subsurface effluent disposal treating blackwater or combined black and greywater, two sites treating greywater only and six sites with AWTS. In addition to collecting effluent and soil samples from each site, a detailed field investigation including a series of house owner interviews was also undertaken. Significant observations were made during the field investigations. In addition to site-specific observations, the general observations include the following: • Most house owners are unaware of the need for regular maintenance. Sludge removal had not been undertaken in any of the septic tanks monitored. Even in the case of aerated wastewater treatment systems, the regular inspections by the supplier are confined to the treatment system and do not include the effluent disposal system. This is not a satisfactory situation, as the investigations revealed. • In the case of separate greywater systems, only one site had a suitably functioning disposal arrangement. The general practice is to employ a garden hose to siphon the greywater for use in surface irrigation of the garden. • In most sites, the soil profile showed significant lateral percolation of effluent. As such, the flow of effluent to surface water bodies is a distinct possibility. • The need to investigate the subsurface conditions to a depth greater than what is required for the standard percolation test was clearly evident. On occasion, seemingly permeable soil was found to have an underlying impermeable soil layer, or vice versa. The important outcomes from the testing program include the following: • Though effluent treatment is influenced by the physico-chemical characteristics of the soil, it was not possible to distinguish between the treatment performance of different soil types.
This leads to the hypothesis that effluent renovation is significantly influenced by the combination of various physico-chemical parameters rather than by single parameters. This would make the processes involved strongly site specific. • Generally the improvement in effluent quality appears to take place only within the initial 1 m of travel, without any appreciable improvement thereafter. This relates only to the degree of improvement obtained and does not imply that this quality is satisfactory. This calls into question the value of adopting setback distances from sensitive water bodies. • Use of AWTS for sewage treatment may provide effluent of higher quality suitable for surface disposal. However, on the whole, after 1-3 m of travel through the subsurface, it was not possible to distinguish any significant differences in quality between effluent originating from septic tanks and from AWTS. • In comparison with effluent quality from a conventional wastewater treatment plant, most systems were found to perform satisfactorily with regard to Total Nitrogen. The success rate was much lower in the case of faecal coliforms. However, it is important to note that five of the systems exhibited problems with effluent disposal, resulting in surface flow. This could lead to possible contamination of surface water courses. • The ratio of TDS to EC is about 0.42, whilst the optimum recommended value for use of treated effluent for irrigation is about 0.64. This means a higher salt content in the effluent than is advisable for use in irrigation. A consequence of this would be the accumulation of salts to a concentration harmful to crops or the landscape unless adequate leaching is present. These relatively high EC values are present even in the case of AWTS where surface irrigation of effluent is being undertaken.
However, it is important to note that this is not an artefact of the treatment process but rather an indication of the quality of the wastewater generated in the household. This clearly indicates the need for further research to evaluate the suitability of various soil types for the surface irrigation of effluent where the TDS/EC ratio is less than 0.64. • Effluent percolating through the subsurface absorption field may travel in the form of dilute pulses. As such the effluent will move through the soil profile forming fronts of elevated parameter levels. • The downward flow of effluent and leaching of the soil profile is evident in the case of podsolic, lithosol and krasnozem soils. Lateral flow of effluent is evident in the case of prairie soils. Gleyed podsolic soils indicate poor drainage and ponding of effluent. In the current phase of the research project, a number of chemical indicators such as EC, pH and chloride concentration were employed to investigate the extent of effluent flow and to understand how soil renovates effluent. The soil profile, especially its texture, structure and moisture regime, was examined more in an engineering sense to determine the effect of movement of water into and through the soil. However, not only the physical characteristics but also the chemical characteristics of the soil play a key role in the effluent renovation process. Therefore, in order to understand the complex processes taking place in a subsurface effluent disposal area, it is important that the identified influential parameters are evaluated using soil chemical concepts. Consequently the primary focus of the next phase of the research project will be to identify linkages between the various important parameters. The research thus envisaged will help to develop robust criteria for evaluating the performance of subsurface disposal systems.
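The TDS/EC comparison above is straightforward arithmetic; the following minimal sketch (hypothetical readings, with TDS assumed in mg/L and EC in µS/cm, and all names invented for illustration) shows how such a ratio check might be applied:

```python
def tds_ec_ratio(tds_mg_per_l, ec_us_per_cm):
    """Ratio of total dissolved solids (mg/L) to electrical conductivity (uS/cm)."""
    return tds_mg_per_l / ec_us_per_cm

# Hypothetical effluent reading giving a ratio near the 0.42 reported in the study
measured = tds_ec_ratio(672.0, 1600.0)

# The report cites about 0.64 as the optimum recommended value for irrigation reuse;
# a measured ratio below that threshold flags effluent saltier than advisable.
RECOMMENDED = 0.64
saltier_than_advisable = measured < RECOMMENDED
```

The 0.42 and 0.64 figures come from the abstract itself; the individual TDS and EC readings are invented to reproduce that ratio.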
Abstract:
Background Hallux valgus (HV) is a very common deformity of the first metatarsophalangeal joint that often requires surgical correction. However, the association between structural HV deformity and related foot pain and disability is unclear. Furthermore, no previous studies have investigated concerns about appearance and difficulty with footwear in a population with HV not seeking surgical correction. The aim of this cross-sectional study was to investigate foot pain, functional limitation, concern about appearance and difficulty with footwear in otherwise healthy adults with HV compared to controls. Methods Thirty volunteers with HV (radiographic HV angle >15 degrees) and 30 matched controls were recruited for this study (50 women, 10 men; mean age 44.4 years, range 20 to 76 years). Differences between groups were examined for self-reported foot pain and disability, satisfaction with appearance, footwear difficulty, and pressure-pain threshold at the first metatarsophalangeal joint. Functional measures included balance tests, walking performance, and hallux muscle strength (abduction and plantarflexion). Mean differences (MD) and 95% confidence intervals (CI) were calculated. Results All self-report measures showed that HV was associated with higher levels of foot pain and disability and significant concerns about appearance and footwear (p < 0.001). A lower pressure-pain threshold was measured at the medial first metatarsophalangeal joint in participants with HV (MD = -133.3 kPa, CI: -251.5 to -15.1). Participants with HV also showed reduced hallux plantarflexion strength (MD = -37.1 N, CI: -55.4 to -18.8) and abduction strength (MD = -9.8 N, CI: -15.6 to -4.0), and increased mediolateral sway when standing on both feet with eyes closed (MD = 0.34 cm, CI: 0.04 to 0.63). Conclusions These findings show that HV negatively impacts self-reported foot pain and function, and raises concerns about foot appearance and footwear in otherwise healthy adults.
There was also evidence of impaired hallux muscle strength and increased postural sway in HV subjects compared to controls, although general physical functioning and participation in physical activity were not adversely affected.
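The group comparisons in this abstract are reported as mean differences with 95% confidence intervals. As a rough illustration of that style of analysis (hypothetical strength readings, and a simple normal-approximation CI rather than whatever method the authors actually used):

```python
import math

def mean_diff_ci(group_a, group_b, z=1.96):
    """Mean difference (a - b) with an approximate 95% CI,
    using a normal approximation and unpooled standard error."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    md = ma - mb
    return md, (md - z * se, md + z * se)

# Hypothetical hallux strength readings (N) for an HV group vs controls
hv = [60.0, 55.0, 70.0, 65.0]
control = [95.0, 100.0, 105.0, 92.0]
md, (lo, hi) = mean_diff_ci(hv, control)  # md is negative: HV group weaker
```

A negative MD whose CI excludes zero, as with the plantarflexion result above (MD = -37.1 N, CI: -55.4 to -18.8), indicates a statistically significant deficit in the HV group.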
Abstract:
Background and Aims: Falls and fall-related injuries result in reduced functioning, loss of independence, premature nursing home admissions and mortality. Malnutrition is associated with falls in the acute setting, but little is known about malnutrition and falls risk in the community. The aim of this study was to assess the association between malnutrition risk, falls risk and falls over a one-year period in community-dwelling older adults. Methods: Two hundred and fifty-four subjects aged >65 years were recruited to participate in a study to identify risk factors for falls. Malnutrition risk was determined using the Mini Nutritional Assessment–Short Form. Results: 28.6% had experienced a fall and, according to the Mini Nutritional Assessment–Short Form, 3.9% (n=10) of subjects were at risk of malnutrition. There were no associations between malnutrition risk, the risk of falls, or actual falls in healthy older adults in the community setting. Conclusions: There was a low prevalence of malnutrition risk in this sample of community-dwelling older adults and no association between nutritional risk and falls. Screening as part of a falls prevention program should focus on the risk of developing malnutrition, as this is associated with falls.
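The "no associations" finding rests on a test of association in a 2x2 table (at risk of malnutrition vs not, fell vs not). A minimal sketch with invented counts consistent with n=254, roughly 28.6% fallers and 10 at-risk subjects; the abstract does not state which test the authors actually used:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table:
         a = at risk & fell,      b = at risk & no fall
         c = not at risk & fell,  d = not at risk & no fall
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 10 at risk, 73 fallers, 254 subjects in total
stat = chi_square_2x2(3, 7, 70, 174)
# A statistic below 3.84 (df=1 critical value at alpha=0.05) would indicate
# no detectable association between malnutrition risk and falling.
no_association = stat < 3.84
```

With only 10 at-risk subjects, such a test has very little power, which is consistent with the cautious wording of the conclusion.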
Abstract:
ILLITERACY is now increasingly recognised as a serious social problem. UNESCO defines literacy in the following way: "A person is literate when he has acquired the essential knowledge and skills that enable him to engage in all those activities in which literacy is required for effective functioning in his group and community." This is in fact seeing the problem in terms of functional literacy. As the demands of an increasingly industrial society grow, more and more people who are functionally illiterate are appearing. Many do not have the functional skills required to enable them to apply for a job. This inability to obtain work is common among clients of the probation service. Literacy has become so important in our society that to be unable to read and write causes great feelings of isolation, of being different and inferior, which often lead the illiterate to join a group where this deficiency is unknown and where he can gain some status. This is often a delinquent group.
Abstract:
Background: To derive preference-based measures from various condition-specific descriptive health-related quality of life (HRQOL) measures, a general 2-stage method has evolved: 1) an item from each domain of the HRQOL measure is selected to form a health state classification system (HSCS); 2) a sample of health states is valued and an algorithm derived for estimating the utility of all possible health states. The aim of this analysis was to determine whether confirmatory or exploratory factor analysis (CFA, EFA) should be used to derive a cancer-specific utility measure from the EORTC QLQ-C30. Methods: Data were collected with the QLQ-C30v3 from 356 patients receiving palliative radiotherapy for recurrent or metastatic cancer (various primary sites). The dimensional structure of the QLQ-C30 was tested with EFA and CFA, the latter based on a conceptual model (the established domain structure of the QLQ-C30: physical, role, emotional, social and cognitive functioning, plus several symptoms) and clinical considerations (views of both patients and clinicians about issues relevant to HRQOL in cancer). The dimensions determined by each method were then subjected to item response theory analysis, including Rasch analysis. Results: CFA results generally supported the proposed conceptual model, with residual correlations requiring only minor adjustments (namely, introduction of two cross-loadings) to improve model fit (increment χ2(2) = 77.78, p < .001). Although EFA revealed a structure similar to the CFA, some items had loadings that were difficult to interpret. Further assessment of dimensionality with Rasch analysis aligned the EFA dimensions more closely with the CFA dimensions. Three items exhibited floor effects (>75% of observations at the lowest score), 6 exhibited misfit to the Rasch model (fit residual > 2.5), none exhibited disordered item response thresholds, and 4 exhibited differential item functioning (DIF) by gender or cancer site.
Upon inspection of the remaining items, three were considered less clinically important than the other nine. Conclusions: CFA appears more appropriate than EFA, given the well-established structure of the QLQ-C30 and its clinical relevance. Further, the confirmatory approach produced more interpretable results than the exploratory approach. Other aspects of the general method remain largely the same. The revised method will be applied to a large number of data sets as part of the international and interdisciplinary project to develop a multi-attribute utility instrument for cancer (MAUCa).
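The exploratory step of a dimensionality analysis like the one above can be illustrated with a minimal numpy sketch. This is not the authors' analysis: the items, loadings and factor count below are invented for illustration. It simulates responses to six items driven by two latent factors, then applies the common Kaiser eigenvalue-greater-than-one rule to the item correlation matrix to decide how many factors to retain.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 356  # sample size matching the study; the item content is hypothetical

# Two hypothetical latent factors (e.g. a "physical" and an "emotional" dimension)
factors = rng.normal(size=(n, 2))

# Six synthetic items: three load on each factor, plus independent noise
loadings = np.array([
    [0.8, 0.0], [0.7, 0.0], [0.6, 0.0],   # items 1-3 load on factor 1
    [0.0, 0.8], [0.0, 0.7], [0.0, 0.6],   # items 4-6 load on factor 2
])
items = factors @ loadings.T + 0.5 * rng.normal(size=(n, 6))

# Kaiser criterion: retain as many factors as there are eigenvalues of the
# item correlation matrix greater than 1
eigenvalues = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
n_factors = int((eigenvalues > 1).sum())
print(n_factors)  # the two planted factors should be recovered
```

A confirmatory analysis would instead fix this loading pattern in advance and test how well it fits the observed covariances, which is the distinction the abstract turns on.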
Resumo:
The aim of this paper was to investigate the association between appetite and kidney disease-specific quality of life in maintenance hemodialysis patients. Quality of life (QoL) was measured using the Kidney Disease Quality of Life survey. Appetite was measured using self-reported categories and a visual analog scale. Other nutritional parameters included the Patient-Generated Subjective Global Assessment (PGSGA), dietary intake, body mass index and the biochemical markers C-reactive protein and albumin. Even in this well-nourished sample (n=62) of hemodialysis patients, PGSGA score (r=-0.629), subjective hunger sensations (r=0.420) and body mass index (r=-0.409) were all significantly associated with the Physical Health domain of QoL. As self-reported appetite declined, QoL was significantly lower in nine domains, mostly in the SF-36 component, covering social functioning and physical domains. Appetite and other nutritional parameters were not as strongly associated with the Mental Health domain and Kidney Disease Component Summary domains. Nutritional parameters, especially PGSGA score and appetite, appear to be important components of the physical health domain of QoL. As even small reductions in nutritional status were associated with significantly lower QoL scores, monitoring appetite and nutritional status is an important component of care for hemodialysis patients.
Resumo:
In March 2010, Brisbane Festival commissioned a Research Team, led by Dr Bree Hadley and Dr Sandra Gattenhof, Creative Industries Faculty, Queensland University of Technology, to conduct an evaluation of the Creating Queensland program, a new Creative Communities partnership between Brisbane Festival and the Australia Council for the Arts. This Final Report reviews and reports on the effectiveness of the program gathered during three phases throughout 2010: Phase 1, in which the research team analysed Brisbane Festival’s pre-existing data on the Creating Queensland events in 2009; Phase 2, in which the research team designed a new suite of instruments to gather data from producers, producing partners, artists and attendees involved in the Creating Queensland events in 2010; and Phase 3, in which the research team used content analysis of the narratives emerging in the data to establish how Brisbane Festival has adopted processes, activities or engagement protocols to operate as catalysts that produce experiences with specific impacts on individuals and communities. The Final Report finds that the Creating Queensland events concentrate on developing specific experiences for those involved – usually associated with storytelling, showcasing, and the valorisation or re-valorisation of neglected or forgotten cultural forms – in order to give communities a voice. It finds that the events prioritise accessibility – usually associated with allowing specific local communities or local artists to present material that is meaningful to them – and inclusivity – usually associated with using connections with producing partners (such as the Multicultural Development Association) to bring more and more people into the program. 
It finds that the events have a capacity-building effect, which allows local communities to increase their capacity to launch their own ideas, initiatives or events, allows individuals to increase their employability, or allows communities and individuals to increase their visibility within mainstream cultural practices and infrastructure. The Final Report further finds that Brisbane Festival has, throughout its years of commitment to community programming, developed specific techniques to enable events in the Creating Queensland program to have these effects, that these can be tracked, and, as a result, deployed or redeployed both by Brisbane Festival and other community arts organisations in the development of effective community arts programs. The data demonstrates that Creating Queensland is, by and large, having the desired effect on communities – people are actually participating, presenting work, and increasing their personal, professional and social skills in various ways, and this is valued by all stakeholders. The data also demonstrates that, as would be expected with any community arts program – particularly programs of this size and complexity – there are areas in which Creating Queensland is functioning exceptionally well and areas in which continuous improvement processes should be continued. Areas of excellence relate to Brisbane Festival’s longstanding commitment to community arts, and active community participation in the arts, as well as its ability to create well-known and loved programs that use effective techniques to have a positive impact on communities. 
Areas for improvement relate to Brisbane Festival’s potential to benefit from the following: clarifying relationships between community participants and professionals; increasing mentoring relationships between these groups; consolidating the discourses it uses to describe event aims across strategic, production, and publicity documents across the years; and re-considering the number of small events inside the larger Creating Queensland program.
Resumo:
This study assessed the health-related quality of life (HRQoL), fatigue and physical activity levels of 28 persons with chronic kidney disease (CKD) on initial administration of an erythropoietin-stimulating agent, and at 3 months, 6 months and 12 months. The sample comprised 15 females and 13 males whose ages ranged from 31 to 84 years. Physical activity was measured using the Human Activity Profile (HAP): self-care, personal/household work, entertainment/social, and independent exercise. Quality of life was measured using the SF-36, which gives scores on physical health (physical functioning, role-physical, bodily pain and general health) and mental health (vitality, social functioning, role-emotional and emotional well-being). Fatigue was measured by the Fatigue Severity Scale (FSS). Across all time points the renal sample engaged in considerably fewer HAP personal/household work activities and entertainment/social activities compared to healthy adults. The normative sample engaged in three times more independent exercise activities compared to renal patients. One-way repeated-measures ANOVAs indicated a significant change over time for the SF-36 scales of role-physical, vitality, emotional well-being and overall mental health. There was a significant difference in fatigue levels over time [F(3,11) = 3.78, p<.05]. Fatigue was highest at baseline and lowest at 6 months. The more breathlessness the CKD patient reported, the fewer activities they undertook and the greater their reported level of fatigue. There were no significant age differences over time for fatigue or physical activity. Age differences were only found for SF-36 mental health at 3 months (t=-2.41, df=14, p<.05): those younger than 65 years had lower emotional well-being compared to those aged over 65. Males had poorer physical health compared to females at 12 months. There were no significant gender differences in mental health at any time point. 
In the management of chronic kidney disease, early detection of a person’s inability to engage in routine activities due to fatigue is necessary. Early detection would enable timely interventions to optimise HRQoL and independent exercise.
Resumo:
The publication of the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV; American Psychiatric Association, 1994) introduced the notion that a life-threatening illness can be a stressor and catalyst for Posttraumatic Stress Disorder (PTSD). Since then a solid body of research has been established investigating the post-diagnosis experience of cancer. These studies have identified a number of short and long-term life changes resulting from a diagnosis of cancer and associated treatments. In this chapter, we discuss the psychosocial response to the cancer experience and the potential for cancer-related distress. Cancer can represent a life-threatening diagnosis that may be associated with aggressive treatments and result in physical and psychological changes. The potential for future trauma through the lasting effects of the disease and treatment, and the possibility of recurrence, can be a source of continued psychological distress. In addition to the documented adverse repercussions of cancer, we also outline the recent shift that has occurred in the psycho-oncology literature regarding positive life change or posttraumatic growth that is commonly reported after a diagnosis of cancer. Adopting a salutogenic framework acknowledges that the cancer experience is a dynamic psychosocial process with both negative and positive repercussions. Next, we describe the situational and individual factors that are associated with posttraumatic growth and the types of positive life change that are prevalent in this context. Finally, we discuss the implications of this research in a therapeutic context and the directions of future posttraumatic growth research with cancer survivors. This chapter will present both quantitative and qualitative research that indicates the potential for personal growth from adversity rather than just mere survival and return to pre-diagnosis functioning. 
It is important to emphasise, however, that the presence of growth and the prevalence of resilience do not negate the extremely distressing nature of a cancer diagnosis for patients and their families, or the suffering that can accompany treatment regimes. Indeed, it will be explained that for growth to occur, the experience must be one that quite literally shatters previously held schemas in order to act as a catalyst for change.
Resumo:
Objective: Substance use is common in first-episode psychosis, and complicates the accurate diagnosis and treatment of the disorder. The differentiation of substance-induced psychotic disorders (SIPD) from primary psychotic disorders (PPD) is particularly challenging. This cross-sectional study compares the clinical, substance use and functional characteristics of substance-using first-episode psychosis patients diagnosed with a SIPD and a PPD. Method: Participants were 61 young people (15-24 years) admitted to a psychiatric inpatient service with first-episode psychosis, reporting substance use in the past month. Diagnosis was determined using the Psychiatric Research Interview for DSM-IV Substance and Mental Disorders (PRISM-IV). Measures of clinical characteristics (severity of psychotic symptoms, level of insight, history of trauma), substance use (frequency/quantity, severity) and social and occupational functioning were also administered. Results: The PRISM-IV differentially diagnosed 56% of first-episode patients with a SIPD and 44% with a PPD. Those with a SIPD had higher rates of substance use and substance use disorders, higher levels of insight, were more likely to have a forensic and trauma history, and had more severe hostility and anxious symptoms than those with a PPD. Logistic regression analysis indicated a family history of psychosis, trauma history and current cannabis dependence were the strongest predictors of a SIPD. Almost 80% of diagnostic predictions of a SIPD were accurate using this model. Conclusions: This clinical profile of SIPD could help to facilitate the accurate diagnosis and treatment of SIPD versus PPD in young people with first-episode psychosis admitted to an inpatient psychiatric service.
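A diagnostic prediction model of the kind reported above can be sketched with plain numpy. This is not the study's model or data: the three binary predictors, their weights, and the simulated diagnoses below are all invented, and the model is fitted with simple batch gradient descent rather than the software the authors used.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 61  # sample size matching the study; all predictor data here is invented

# Three hypothetical binary predictors: family history of psychosis,
# trauma history, current cannabis dependence
X = rng.integers(0, 2, size=(n, 3)).astype(float)

# Simulate SIPD diagnoses from an assumed logistic model with positive
# weights on all three predictors
true_w = np.array([1.0, 1.5, 2.0])
p = 1 / (1 + np.exp(-(X @ true_w - 2.0)))
y = (rng.random(n) < p).astype(float)

# Fit a logistic regression by batch gradient descent
Xb = np.hstack([np.ones((n, 1)), X])   # prepend an intercept column
w = np.zeros(4)
for _ in range(5000):
    preds = 1 / (1 + np.exp(-(Xb @ w)))
    w -= 0.1 * Xb.T @ (preds - y) / n  # gradient of the log-loss

# Classification accuracy at the 0.5 threshold, analogous to the
# "almost 80% of diagnostic predictions were accurate" figure
accuracy = ((1 / (1 + np.exp(-(Xb @ w))) >= 0.5) == (y == 1)).mean()
print(round(float(accuracy), 2))
```

Note that this is in-sample accuracy on simulated data; a real diagnostic model would be evaluated on held-out patients.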
Resumo:
Objective: Excessive alcohol consumption is common among people with psychotic disorders. While there is an extensive literature on the efficacy of psychological treatments for excessive drinking, few studies have examined interventions addressing this issue among people with psychotic disorders. Method: Systematic searches in PubMed and PsycINFO were conducted to identify randomized controlled trials comparing manual-guided psychological interventions for excessive alcohol consumption among individuals with psychotic disorders. Of the 429 articles identified, 7 met inclusion criteria. Data were extracted from each study regarding study sample characteristics, design, results, clinical significance of alcohol consumption results, and methodological limitations. Results: Assessment interviews, brief motivational interventions and lengthier cognitive behavior therapy have been associated with reductions in alcohol consumption among people with psychosis. While brief interventions (i.e., 1-2 sessions) were generally as effective as longer-duration psychological interventions (i.e., 10 sessions) for reducing alcohol consumption, longer interventions provided additional benefits for depression, functioning and other alcohol outcomes. Conclusion: Excessive alcohol consumption among people with psychotic disorders is responsive to psychological interventions. It is imperative that such approaches are integrated within standard care for people with psychosis.
Resumo:
Background: The high rates of comorbid depression and substance use in young people have been associated with a range of adverse outcomes. Yet few treatment studies have been conducted with this population. Objective: To determine whether the addition of Motivational Interviewing and Cognitive Behaviour Therapy (MI/CBT) to standard alcohol and other drug (AOD) care improves the outcomes of young people with comorbid depression and substance use. Participants and Setting: Participants comprised 88 young people with comorbid depression (Kessler 10 score of > 17) and substance use (mainly alcohol/cannabis) seeking treatment at two youth AOD services in Melbourne, Australia. Sixty young people received MI/CBT in addition to standard care (SC) and 28 received SC alone. Outcome Measures: Primary outcome measures were depressive symptoms and drug and alcohol use in the past month. Assessments were conducted at baseline and at 3- and 6-month follow-ups. Results and Conclusions: The addition of MI/CBT to SC was associated with a significantly greater rate of change in depression, cannabis use, motivation to change substance use and social contact in the first 3 months. However, those who received SC had achieved similar improvements on these variables by the 6-month follow-up. All young people achieved significant improvements in functioning and quality-of-life variables over time, regardless of treatment group. No changes in alcohol or other drug use were found in either group. The delivery of MI/CBT in addition to standard AOD care may offer accelerated treatment gains in the short term.