174 results for Failure to Thrive
Abstract:
Advanced glycation endproducts (AGEs) have been implicated in the pathogenesis of cancer, inflammatory conditions and diabetic complications. An interaction of AGEs with their receptor (RAGE) results in increased release of pro-inflammatory cytokines and reactive oxygen species (ROS), causing damage to susceptible tissues. Laminitis, a debilitating foot condition of horses, occurs in association with endocrine dysfunction, and the potential involvement of AGE and RAGE in the pathogenesis of the disease has not been previously investigated. Glucose transport in lamellar tissue is thought to be largely insulin-independent (GLUT-1), which may make the lamellae susceptible to protein glycosylation and oxidative stress during periods of increased glucose metabolism. Archived lamellar tissue from horses with insulin-induced laminitis (n=4), normal control horses (n=4) and horses in the developmental stages (6 h, 12 h and 24 h) of the disease (n=12) was assessed for AGE accumulation and the presence of oxidative protein damage and cellular lipid peroxidation. The equine-specific RAGE gene was identified in lamellar tissue, sequenced and is now available on GenBank. Lamellar glucose transporter (GLUT-1 and GLUT-4) gene expression was assessed quantitatively with qRT-PCR in laminitic and control horses and in horses at the mid-developmental time-point (24 h) of the disease. Significant AGE accumulation had occurred by the onset of insulin-induced laminitis (48 h) but not at earlier time-points, or in control horses. Evidence of oxidative stress was not found in any group. The equine-specific RAGE gene was not expressed differently between treated and control animals, nor was the insulin-dependent glucose transporter GLUT-4. However, the glucose transporter GLUT-1 was increased in lamellar tissue during the developmental stages of insulin-induced laminitis compared to control horses, and the insulin-independent nature of the lamellae may facilitate AGE formation. Nevertheless, given the lack of AGE accumulation during disease development and the failure to detect an increase in ROS or upregulation of RAGE, it appears unlikely that oxidative stress and protein glycosylation play a central role in the pathogenesis of acute, insulin-induced laminitis.
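The abstract reports that GLUT-1 and GLUT-4 expression was compared between laminitic and control horses by qRT-PCR, but does not state the quantification model. Purely as a hedged illustration, relative expression in comparisons of this kind is commonly summarised with the 2^-ΔΔCt method against a stable reference gene (the use of this model, and of a reference gene, is an assumption here, not a detail from the abstract):

\[
\Delta C_t = C_t^{\mathrm{target}} - C_t^{\mathrm{reference}}, \qquad
\Delta\Delta C_t = \Delta C_t^{\mathrm{laminitic}} - \Delta C_t^{\mathrm{control}}, \qquad
\text{fold change} = 2^{-\Delta\Delta C_t}.
\]

Under this model, a fold change above 1 would correspond to the reported increase in GLUT-1 in developmental-stage tissue relative to controls, while values near 1 would be consistent with the unchanged RAGE and GLUT-4 results.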
Abstract:
This thesis is an ethical and empirical exploration of the late discovery of genetic origins in two contexts: adoption and sperm donor-assisted conception. This exploration has two interlinked strands of concern. The first is the identification of ‘late discovery’ as a significant issue of concern, deserving of recognition and acknowledgement. The second concerns the ethical implications of late discovery experiences for the welfare of the child. The apparently simple act of recognition of a phenomenon is a precondition to any analysis and critique of it. This is especially important when the phenomenon arises out of social practices that arouse significant debate in ethical and legal contexts. As the new reproductive technologies and some adoption practices remain highly contested, an ethical exploration of this long-neglected experience has the potential to offer new insights and perspectives in a range of contexts. It provides an opportunity to revisit developmental debate on the relative merit or otherwise of biological versus social influences, from the perspective of those who have lived this dichotomy in practice. Their experiences are the human face of the effects arising from decisions taken by others to intentionally separate their biological and social worlds, an action which has then been compounded by family and institutional secrecy from birth. This has been accompanied by a failure to ensure that normative standards and values are upheld for them. Following discovery, these factors can be exacerbated by a lack of recognition and acknowledgement of their concerns by family, friends, community and institutions. Late discovery experiences offer valuable insights to inform discussions on the ethical meanings of child welfare, best interests, parental responsibility, duty of care and child identity rights in this and other contexts. They can strengthen understandings of what factors are necessary for a child to be able to live a reasonably happy or worthwhile life.
Abstract:
A recent District Court case is believed to be the first in Queensland in which UCPR r 5 has been used to support the setting aside of a regularly entered default judgment without a costs order.
Abstract:
Nonprofits constitute a large part of collective behaviour in society. Presently there is little formal research addressing the role of audits in nonprofit organisations. Before models can be developed for the production of nonprofit auditing information, it is necessary to examine the present conduct of nonprofit audits. The Australian Accounting Research Foundation - Legislation Review Board has released a position paper on the Associations Incorporation Acts in Australia - the most frequently used legal form for nonprofit organisations. The Board is addressing the issue of financial statement reporting, including audit. This coincides with the investigations resulting from the collapse of the National Safety Council (Victorian Division) (NSC). The NSC, a nonprofit organisation formed as a company limited by guarantee, is in liquidation and the auditors are being sued for damages resulting from their alleged failure to perform their duties adequately.
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only as a temporary measure in the past, these systems are now considered the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series of reports to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research undertaken was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation of research brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks are an issue that needs to be resolved in view of the conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number of these systems do not perform to stipulated standards and quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. The other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems. These include intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices. 
In recent years the use of biofilters, particularly peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. This is an issue that needs further investigation and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that the soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area. This is due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective for effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent. It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This essentially is the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. However, despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder and this is where most concern lies. Greywater, too, requires similar considerations. Surface irrigation of greywater is currently being permitted in a number of local authority jurisdictions in Queensland. Considering the fact that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment that is required to be undertaken prior to reuse is the removal of oil and grease. 
This is an issue of concern as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subjected to stringent guidelines. Under these circumstances the surface application of any wastewater requires careful consideration. The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors and in particular to climatic conditions. As such their applicability is location-specific. Also, the design of systems based solely on evapotranspiration is questionable. In order to ensure more reliability, the systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics. Secondly, the mechanisms of clogging mat formation have been found to be influenced by various physical, chemical and biological processes. Biological clogging is the most common process taking place and occurs when bacterial growth or its by-products reduce the soil pore diameters. Biological clogging is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process. This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as it can lead to a significant reduction in the infiltration rate. This, in fact, is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration time intervals are contradictory. 
It has been claimed that the intermittent rest periods would result in the aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short-duration rest periods are insufficient to completely decompose the clogging mat, and the intermediate by-products that form as a result of aerobic processes would in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer and should be in the range of about six months. This entails the provision of a second and alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface has been shown to be only of short-term benefit. Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat. The nature of the suspended solids has also been found to be an important factor. The finer particles from extended aeration systems, when compared to those from septic tanks, will penetrate deeper into the soil and hence will ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies. It has also been shown that effluent quality may be a factor in the case of highly permeable soils but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious as they can lead to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent. There are a number of documented instances of septic tank-related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority was found liable for an outbreak of viral hepatitis A, and not the individual septic tank owners, as no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms. The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area. 
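To make the dilution argument concrete, the following is a minimal sketch of the kind of steady-state mass balance that links a nitrate concentration target to an allowable septic tank density. The function name, the per-household nitrogen load, the recharge figure and the target concentration are all illustrative assumptions, not values taken from the report.

```python
# Hypothetical nitrogen mass-balance sketch: estimate the maximum number of
# septic tanks per hectare such that nitrate, diluted by annual groundwater
# recharge, stays below a target concentration. All figures are assumptions
# for illustration only.

def max_tanks_per_hectare(n_load_g_per_day: float,
                          recharge_mm_per_year: float,
                          target_no3_n_mg_per_l: float,
                          background_no3_n_mg_per_l: float = 0.0) -> float:
    """Simple steady-state dilution: all nitrogen not taken up by plants is
    assumed to reach groundwater as nitrate and mix with annual recharge."""
    area_m2 = 10_000.0                                     # one hectare
    recharge_l_per_year = recharge_mm_per_year * area_m2   # 1 mm over 1 m2 = 1 L
    allowable_mg_per_year = (target_no3_n_mg_per_l
                             - background_no3_n_mg_per_l) * recharge_l_per_year
    load_mg_per_year_per_tank = n_load_g_per_day * 1000.0 * 365.0
    return allowable_mg_per_year / load_mg_per_year_per_tank

# Example: 30 g N/day per household after plant uptake, 200 mm/yr recharge and a
# 10 mg/L NO3-N target give roughly 1.8 tanks per hectare under these assumptions.
print(max_tanks_per_hectare(30.0, 200.0, 10.0))
```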
Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed or due to surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus. Once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important not only to ensure that the system design is based on subsurface conditions, but also to recognise that the density of these systems in a given area is a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would also dictate the type of effluent disposal method to be adopted.
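As a hedged illustration of the "most limiting factor" rule described above (the attribute names and rating scale are assumptions made for the example, not the report's classification scheme), the logic reduces to taking the poorest rating across the assessed site attributes:

```python
# Illustrative sketch only: each site attribute is rated on an ordinal scale,
# and the worst rating determines the overall capability class for onsite
# effluent disposal at that site.

SITE_RATINGS = {"very_good": 1, "good": 2, "fair": 3, "poor": 4, "very_poor": 5}

def capability_class(site_factors: dict[str, str]) -> str:
    """site_factors maps an attribute name (e.g. 'soil permeability', 'slope',
    'depth to water table') to a rating key from SITE_RATINGS; the most
    limiting (highest-numbered) rating is returned as the overall class."""
    return max(site_factors.values(), key=lambda rating: SITE_RATINGS[rating])

# Example: a site with good soils but a shallow water table is still classed 'poor'.
print(capability_class({"soil permeability": "good",
                        "slope": "fair",
                        "depth to water table": "poor"}))
```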
Abstract:
Noncompliance with speed limits is one of the major safety concerns in roadwork zones. Although numerous studies have attempted to evaluate the effectiveness of safety measures on speed limit compliance, many report inconsistent findings. This paper aims to review the effectiveness of four categories of roadwork zone speed control measures: Informational, Physical, Enforcement, and Educational measures. While informational measures (static signage, variable message signage) evidently have small to moderate effects on speed reduction, physical measures (rumble strips, optical speed bars) are found ineffective for transient and moving work zones. Enforcement measures (speed camera, police presence) have the greatest effects, while educational measures also have significant potential to improve public awareness of roadworker safety and to encourage slower speeds in work zones. Inadequate public understanding of roadwork risks and hazards, failure to notice signs, and poor appreciation of safety measures are the major causes of noncompliance with speed limits.
Abstract:
Maize streak virus (MSV; Genus Mastrevirus, Family Geminiviridae) occurs throughout Africa, where it causes what is probably the most serious viral crop disease on the continent. It is obligately transmitted by as many as six leafhopper species in the Genus Cicadulina, but mainly by C. mbila Naudé and C. storeyi. In addition to maize, it can infect over 80 other species in the Family Poaceae. Whereas 11 strains of MSV are currently known, only the MSV-A strain is known to cause economically significant streak disease in maize. Severe maize streak disease (MSD) manifests as pronounced, continuous parallel chlorotic streaks on leaves, with severe stunting of the affected plant and, usually, a failure to produce complete cobs or seed. Natural resistance to MSV in maize, and/or maize infections caused by non-maize-adapted MSV strains, can result in narrow, interrupted streaks and no obvious yield losses. MSV epidemiology is primarily governed by environmental influences on its vector species, resulting in erratic epidemics every 3-10 years. Even in epidemic years, disease incidences can vary from a few infected plants per field, with little associated yield loss, to 100% infection rates and complete yield loss. Taxonomy: The only virus species known to cause MSD is MSV, the type member of the Genus Mastrevirus in the Family Geminiviridae. In addition to the MSV-A strain, which causes the most severe form of streak disease in maize, 10 other MSV strains (MSV-B to MSV-K) are known to infect barley, wheat, oats, rye, sugarcane, millet and many wild, mostly annual, grass species. Seven other mastrevirus species, many with host and geographical ranges partially overlapping those of MSV, appear to infect primarily perennial grasses. Physical properties: MSV and all related grass mastreviruses have single-component, circular, single-stranded DNA genomes of approximately 2700 bases, encapsidated in 22 × 38-nm geminate particles comprising two incomplete T = 1 icosahedra, with 22 pentameric capsomers composed of a single 32-kDa capsid protein. Particles are generally stable in buffers of pH 4-8. Disease symptoms: In infected maize plants, streak disease initially manifests as minute, pale, circular spots on the lowest exposed portion of the youngest leaves. The only leaves that develop symptoms are those formed after infection, with older leaves remaining healthy. As the disease progresses, newer leaves emerge containing streaks up to several millimetres in length along the leaf veins, with primary veins being less affected than secondary or tertiary veins. The streaks are often fused laterally, appearing as narrow, broken, chlorotic stripes, which may extend over the entire length of severely affected leaves. Lesion colour generally varies from white to yellow, with some virus strains causing red pigmentation on maize leaves and abnormal shoot and flower bunching in grasses. Reduced photosynthesis and increased respiration usually lead to a reduction in leaf length and plant height; thus, maize plants infected at an early stage become severely stunted, producing undersized, misshapen cobs or giving no yield at all. Yield loss in susceptible maize is directly related to the time of infection: Infected seedlings produce no yield or are killed, whereas plants infected at later times are proportionately less affected. Disease control: Disease avoidance can be practised by only planting maize during the early season when viral inoculum loads are lowest. 
Leafhopper vectors can also be controlled with insecticides such as carbofuran. However, the development and use of streak-resistant cultivars is probably the most effective and economically viable means of preventing streak epidemics. Naturally occurring tolerance to MSV (meaning that, although plants become systemically infected, they do not suffer serious yield losses) has been found, which has primarily been attributed to a single gene, msv-1. However, other MSV resistance genes also exist and improved resistance has been achieved by concentrating these within individual maize genotypes. Whereas true MSV immunity (meaning that plants cannot be symptomatically infected by the virus) has been achieved in lines that include multiple small-effect resistance genes together with msv-1, it has proven difficult to transfer this immunity into commercial maize genotypes. An alternative resistance strategy using genetic engineering is currently being investigated in South Africa. Useful websites: 〈http://www.mcb.uct.ac.za/MSV/mastrevirus.htm〉; 〈http://www.danforthcenter.org/iltab/geminiviridae/geminiaccess/mastrevirus/Mastrevirus.htm〉. © 2009 Blackwell Publishing Ltd.
Abstract:
Executive Summary
Background and Aims
Child abuse and neglect is a tragedy within our community, with over 10,000 substantiated reports of abuse and neglect in Queensland in the past year. The considerable consequences of child abuse and neglect are far-reaching, substantial and can be fatal. The reporting of suspicions of child abuse or neglect is often the first step in preventing further abuse or neglect. In the State of Queensland, medical practitioners are mandated by law to report their suspicions of child abuse and neglect. However, despite this mandate many still do not report their suspicions. A 1998 study indicated that 43% of medical practitioners had, at some time, made a conscious decision to not report suspected abuse or neglect (Van Haeringen, Dadds & Armstrong, 1998). The aim of this study was to gain a better understanding of beliefs about reporting suspected child abuse and neglect, and the barriers to reporting, among medical practitioners, parents and students. The findings have the potential to inform the training and education of members of the community who have a shared responsibility to protect the wellbeing of its most vulnerable members.
Method
In one of the largest studies of reporting behaviour in relation to suspected child abuse and neglect in Australia, we examined and compared medical practitioners’ responses with those of members of the community, namely parents and students. We surveyed 91 medical practitioners and 214 members of the community (102 parents and 112 students) regarding their beliefs and reporting behaviour related to suspected child abuse and neglect. We also examined reasons for not reporting suspected abuse or neglect, as well as awareness of responsibilities and the appropriate reporting procedures. To obtain such information, participants anonymously completed a comprehensive questionnaire using items from previous studies of reporting attitudes and behaviour.
Findings
Key findings include:
• The majority of medical practitioners (97%) were aware of their duty to report suspected abuse and neglect and believed they had a professional and ethical duty to do so.
• A majority of parents (82%) and students (68%) also believed that they had a professional and ethical duty to report suspected abuse and neglect.
• In accord with their statutory duty to report suspected abuse and neglect, 69% of medical practitioners had made a report at some point.
• Sixteen percent of parents and 9% of students surveyed indicated that they had reported their suspicions of neglect and abuse.
• The most endorsed belief associated with not reporting suspected child abuse and neglect was that ‘unpleasant events would follow reporting’.
• Over a quarter of medical practitioners (26%) admitted to making a decision not to report their suspicions of child abuse or neglect on at least one occasion.
• Compared with previous research, there has been a decline in the proportion of medical practitioners who decided not to report suspected abuse or neglect, from 43% (Van Haeringen et al., 1998) to 26% in the current study.
• Fourteen percent of parents and 15% of students surveyed had also chosen not to report a case of suspected abuse or neglect.
• Attitudes that most strongly influenced the decision to report or not report suspected abuse or neglect differed between groups (medical practitioners, parents, or students). A belief that ‘the abuse was a single incident’ was the best predictor of non-reporting by medical practitioners, while having ‘no time to follow-up the report’ or failing to be ‘convinced of evidence of abuse’ best predicted failure to report abuse by students. A range of beliefs predicted non-reporting by parents, including the beliefs that reporting suspected abuse was ‘not their responsibility’ and ‘knowing the child had retracted their statement’.
Conclusions
Of major concern is that approximately 25% of medical practitioners with a mandated responsibility to report, as well as some members of the general public, revealed that they have suspected child neglect or abuse but have made the decision not to report their suspicions. Parents and students perceived the general community as having responsibility for reporting suspicions of abuse or neglect. Despite this perception, they felt that lodging a report may be overly demanding in terms of time and they had confidence in their ability to identify child abuse and neglect. An explanation for medical practitioners deciding not to report may be based upon their optimistic belief that suspected abuse or neglect was a single incident. Our findings may best be understood from the ‘inflation of optimism’ hypothesis put forward by the Nobel Laureate Daniel Kahneman. He suggests that in spite of rational evidence, human beings tend to make judgements based on an optimistic view rather than engaging in a rational decision-making process. In this case, despite past behaviour of abuse or neglect being the best predictor of future behaviour, medical practitioners have taken an optimistic view, choosing to believe that their suspicion of child abuse or neglect represents a single incident. The clear implication of the findings in the current research is the need for members of the general community and medical practitioners to be better apprised of the consequences of their decision-making in relation to suspicions of child abuse and neglect. Finally, findings from parents and students relating to their reporting behaviour suggest that members of the larger community represent an untapped resource who might, with appropriate awareness, play a more significant role in the identification and reporting of suspected child abuse and neglect.
Abstract:
The ‘Fashion Tales’ Conference identifies three fashion discourses: that of making, that of media, and that of scholarship. We propose a fourth, which provides a foundational base for the others: the discourse of fashion pedagogy. We begin with the argument that to thrive in any of these discourses, all fashion graduates require the ability to navigate the complexities of the 21st century fashion industry. Fashion graduates emerge into a professional world which demands a range of high level capabilities above and beyond those traditionally acknowledged by the discipline. Professional education in fashion must transform itself to accommodate these imperatives. In this paper, we document a tale of fashion learning, teaching and scholarship – the tale of a highly successful future-orientated boutique university-based undergraduate fashion course in Queensland, Australia. The Discipline consistently maintains the highest student satisfaction and lowest attrition of any course in the university, achieves extremely competitive student satisfaction scores when compared with other courses nationally and internationally, and reports outstanding graduate employment outcomes. The core of the article addresses how the course effectively balances five key pedagogical tensions identified from the findings of in-depth focus groups with graduating students, and interviews with teaching staff. The pedagogical tensions are: high concept/ authenticity; high disciplinarity/ interdisciplinarity; high rigour/ play; high autonomy/ scaffolding; and high individuality/ community, where community can be further divided into high challenge and high support. We discuss each of these tensions and how they are characterised within the course, using rich descriptions given by the students. We also draw upon the wider andragogical and learning futures literatures to link the tensions with what is already known about excellence in 21st century higher and further education curriculum and pedagogic practice. We ask: as the fashion industry becomes truly globalised, virtualised, and diversified, and as initial professional training for the industry becomes increasingly massified and performatised, what are the best teaching approaches to produce autonomous, professionally capable, enterprising and responsible graduates into the future? Can the pedagogical balances described in this case study be maintained in the light of these powerful external forces, and if so, how?
Abstract:
Background: Failure to convey time-critical information to team members during surgery diminishes members’ perception of the dynamic information relevant to their task, and compromises shared situational awareness. This research reports the dialogue around clinical decisions made by team members in the time-pressured and high-risk context of surgery, and the impact of these communications on shared situational awareness. Methods: Fieldwork methods were used to capture the dynamic integration of individual and situational elements in surgery that provided the backdrop for clinical decisions. Nineteen semi-structured interviews were performed with 24 participants from anaesthesia, surgery, and nursing in the operating rooms of a large metropolitan hospital in Queensland, Australia. Thematic analysis was used. Results: The domain “coordinating decisions in surgery” was generated from textual data. Within this domain, three themes illustrated the dialogue of clinical decisions, i.e., synchronizing and strategizing actions, sharing local knowledge, and planning contingency decisions based on priority. Conclusion: Strategies used to convey decisions that enhanced shared situational awareness included the use of “self-talk”, closed-loop communications, and “overhearing” conversations that occurred at the operating table. Behaviours that compromised a team’s shared situational awareness included tunnelling and fixating on one aspect of the situation.
Abstract:
Nigam v Harm (No 2) [2011] WASCA 221, Western Australia Court of Appeal, 18 October 2011
Abstract:
The high volume and widespread use of industrial chemicals, the backlog of internationally untested chemicals, the uptake of synthetic chemicals found in babies in utero, in cord blood, and in breast milk, and the lack of a unified and comprehensive regulatory framework all underscore the importance of developing policies that protect the most vulnerable in our society – our children. Australia’s failure to do so raises profound intergenerational ethical issues. This paper tells a story of international policy, and where Australia is falling down. It highlights the need for significant policy reforms in the area of chemical regulation in Australia. We argue that we can learn much from countries already taking critical steps to reduce toxic chemical exposure, and that the development of a comprehensive, child-centered chemical regulation framework is central to turning this around.
Abstract:
Background: The growing proportion of older adults in Australia is predicted to comprise 23% of the population by 2030. Accordingly, an increasing number of older drivers and fatal crashes of these drivers could also be expected. While the cognitive and physiological limitations of ageing and their road safety implications have been widely documented, research has generally considered older drivers as a homogeneous group. Knowledge of age-related crash trends within the older driver group itself is currently limited. Objective: The aim of this research was to identify age-related differences in serious road crashes of older drivers. This was achieved by comparing crash characteristics between older and younger drivers and between sub-groups of older drivers. Particular attention was paid to serious crashes (crashes resulting in hospitalisation and fatalities) as they place the greatest burden on the Australian health system. Method: Using Queensland Crash data, a total of 191,709 crashes of all-aged drivers (17–80+) over a 9-year period were analysed. Crash patterns of drivers aged 17–24, 25–39, 40–49, 50–59, 60–69, 70–79 and 80+ were compared in terms of crash severity (e.g., fatal), at-fault levels, traffic control measures (e.g., stop signs) and road features (e.g., intersections). Crashes of older driver sub-groups (60–69, 70–79, 80+) were also compared to those of middle-aged drivers (40–49 and 50–59 combined, who were identified as the safest driving cohort) with respect to crash-related traffic control features and other factors (e.g., speed). Confounding factors including speed and crash nature (e.g., sideswipe) were controlled for. Results and discussion: Results indicated that patterns of serious crashes, as a function of crash severity, at-fault levels, road conditions and traffic control measures, differed significantly between age groups. As a group, older drivers (60+) represented the greatest proportion of crashes resulting in fatalities and hospitalisation, as well as those involving uncontrolled intersections and failure to give way. The opposite was found for middle-aged drivers, although they had the highest proportion of alcohol- and speed-related crashes when compared to older drivers. Among all older drivers, those aged 60–69 were least likely to be involved in or the cause of crashes, but most likely to crash at interchanges and as a result of driving while fatigued or after consuming alcohol. Drivers aged 70–79 represented a mid-range level of crash involvement and culpability, and were most likely to crash at stop and give way signs. Drivers aged 80 years and beyond were most likely to be seriously injured or killed in, and at fault for, crashes, and had the greatest number of crashes at both conventional and circular intersections. Overall, our findings highlight the heterogeneity of older drivers’ crash patterns and suggest that age-related differences must be considered in measures designed to improve older driver safety.
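The abstract does not name the statistical procedure used to compare crash patterns across age groups. Purely as an illustrative sketch, one conventional way to test whether a crash characteristic (here, severity) is distributed differently across age groups is a chi-square test of independence on a contingency table; the counts below are invented for the example and are not Queensland crash figures.

```python
# Illustrative only: compare the distribution of crash severity across driver
# age groups with a chi-square test of independence. Counts are fabricated.
from scipy.stats import chi2_contingency

# rows: age groups, columns: crash severity (fatal, hospitalisation, other)
observed = [
    [120, 1500, 8000],   # 40-59 (comparison group)
    [90,  1100, 4200],   # 60-69
    [110, 1300, 3800],   # 70-79
    [140, 1500, 3000],   # 80+
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
```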
Abstract:
Historically a significant gap between male and female wages has existed in the Australian labour market. Indeed, this wage differential was institutionalised in the 1912 arbitration decision which determined that the basic female wage would be set at between 54 and 66 per cent of the male wage. More recently, however, the 1969 and 1972 Equal Pay Cases determined that male/female wage relativities should be based upon the premise of equal pay for work of equal value. It is important to note that the mere observation that average wages differ between males and females is not sine qua non evidence of sex discrimination. Economists restrict the definition of wage discrimination to cases where two distinct groups receive different average remuneration for reasons unrelated to differences in productivity characteristics. This paper extends previous studies of wage discrimination in Australia (Chapman and Mulvey, 1986; Haig, 1982) by correcting the estimated male/female wage differential for the existence of non-random sampling. Previous Australian estimates of male/female human capital based wage specifications, together with estimates of the corresponding wage differential, all suffer from a failure to address this issue. If the sample of females observed to be working does not represent a random sample then the estimates of the male/female wage differential will be both biased and inconsistent.
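As a sketch of the selectivity issue the abstract describes (this is the generic two-step correction of Heckman (1979), not necessarily the exact specification estimated in the paper), the female wage equation and participation decision can be written as:

\[
\ln w_i = x_i'\beta + \varepsilon_i, \qquad
s_i^* = z_i'\gamma + u_i, \qquad
s_i = \mathbf{1}[s_i^* > 0],
\]
\[
E[\ln w_i \mid s_i = 1] = x_i'\beta + \rho\,\sigma_\varepsilon\,\lambda(z_i'\gamma), \qquad
\lambda(c) = \frac{\phi(c)}{\Phi(c)}.
\]

Wages are observed only for women with s_i = 1; if the selected sample is non-random and the inverse Mills ratio term λ(·) is omitted, the estimate of β, and hence the male/female differential derived from it, will be biased and inconsistent, which is the point the paper addresses.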
Abstract:
Unless sustained, coordinated action is generated in road safety, road traffic deaths are poised to rise from approximately 1.3 to 1.9 million a year by 2020 (Krug, 2012). To generate this harmonised response, road safety management agencies are being urged to adopt multisectoral collaboration (WHO, 2009b), which is achievable through the principle of policy integration. Yet policy integration, in its current hierarchical format, is marred by a lack of universality in its interpretation, a failure to anticipate the complexities of coordinated effort, a dearth of information about its design and the absence of a normative perspective to share responsibility. This paper addresses this flawed conception of policy integration by reconceptualising it through a qualitative examination of 16 road safety stakeholders’ written submissions, lodged with the Australian Transport Council in 2011. The resulting new principle of policy integration, Participatory Deliberative Integration, provides a conceptual framework for the alignment of effort across stakeholders in transport, health, traffic law enforcement, relevant trades and the community. With the adoption of Participatory Deliberative Integration, road safety management agencies should secure the commitment of key stakeholders in the development and implementation of, amongst other policy measures, National Road Safety Strategies and Mix Mode Integrated Timetabling.