Abstract:
Increasing resistance of rabbits to myxomatosis in Australia has led to the exploration of Rabbit Haemorrhagic Disease, also called Rabbit Calicivirus Disease (RCD), as a possible control agent. While the initial spread of RCD in Australia resulted in widespread rabbit mortality in affected areas, the possible population-dynamic effects of RCD and myxomatosis operating within the same system have not been properly explored. Here we present early mathematical modelling examining the interaction between the two diseases. In this study we use a deterministic compartment model based on the classical SIR model of infectious disease modelling. We consider only a single strain each of myxomatosis and RCD and neglect latent periods. We also include logistic population growth with seasonal birth rates, and we assume that neither disease confers cross-immunity. The mathematical model allows for the possibility of both diseases being simultaneously present in an individual, although results are also presented for the case where co-infection is not possible, since co-infection is thought to be rare and it remains uncertain whether it can occur at all. Our simulation results show that co-infection is a crucial issue and should be examined in future field studies. A single simultaneous outbreak of RCD and myxomatosis was simulated while ignoring natural births and deaths, which is appropriate for a short timescale of 20 days. Simultaneous outbreaks may be more common in Queensland. For the case where co-infection is not possible, we find that the simultaneous presence of myxomatosis in the population suppresses the prevalence of RCD, compared to an outbreak of RCD with no outbreak of myxomatosis, and thus leads to less effective control of the population. The reason is that infection with myxomatosis removes potentially susceptible rabbits from the possibility of infection with RCD (an effect akin to vaccination). 
We found that, for an initial myxomatosis prevalence of 20%, the maximum prevalence of RCD was approximately 30% when there was no simultaneous outbreak of myxomatosis, but only 15% when there was a simultaneous outbreak. This maximum reduction will, however, depend on the other parameter values chosen. When co-infection is allowed, this suppression effect still occurs, but to a lesser degree, because rabbits infected with both diseases reduce the prevalence of myxomatosis. We also simulated multiple outbreaks over a longer timescale of 10 years, including natural population growth with seasonal birth rates and density-dependent (logistic) death rates. This shows how the two diseases interact with each other and with population growth. Here we obtain sustained outbreaks occurring approximately every two years for the case of a simultaneous outbreak of both diseases without co-infection, with the prevalence varying from 0.1 to 0.5. Without myxomatosis present, the simulation predicts that RCD dies out quickly unless reintroduced from elsewhere. With the possibility of simultaneous co-infection of rabbits, sustained outbreaks are still possible, but they are less severe and more frequent (approximately yearly). While further model development is needed, our work to date suggests that: 1) the diseases are likely to interact via their impacts on rabbit abundance, and 2) the introduction of RCD can suppress myxomatosis prevalence. We recommend that further modelling in conjunction with field studies be carried out to investigate how these two diseases interact in the population.
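The short-timescale suppression effect described above can be illustrated with a minimal two-disease, SIR-style sketch. This is not the study's actual model: co-infection is excluded, births and deaths are ignored (matching the 20-day timescale), and all parameter values below are illustrative assumptions rather than the paper's fitted estimates.

```python
# Minimal two-disease SIR sketch (no co-infection). Illustrative only:
# parameter values are assumptions, not fitted to Australian rabbit data.

def simulate(i_m0, i_r0=0.01, beta_m=0.6, beta_r=0.9,
             gamma_m=0.25, gamma_r=0.35, days=20, dt=0.01):
    """Forward-Euler integration of a simplified two-pathogen SIR model.

    Compartments (proportions of the population):
      s   susceptible to both diseases
      i_m infectious with myxomatosis
      i_r infectious with RCD
    Infection with either disease removes a rabbit from the shared
    susceptible pool, so myxomatosis acts like a 'vaccination' against RCD.
    Births and deaths are ignored, as on the short 20-day timescale.
    Returns the peak RCD prevalence over the simulated period.
    """
    s = 1.0 - i_m0 - i_r0
    i_m, i_r = i_m0, i_r0
    peak_rcd = i_r
    for _ in range(int(days / dt)):
        new_m = beta_m * s * i_m * dt   # new myxomatosis infections
        new_r = beta_r * s * i_r * dt   # new RCD infections
        s -= new_m + new_r
        i_m += new_m - gamma_m * i_m * dt
        i_r += new_r - gamma_r * i_r * dt
        peak_rcd = max(peak_rcd, i_r)
    return peak_rcd

peak_alone = simulate(i_m0=0.0)   # RCD outbreak with no myxomatosis
peak_joint = simulate(i_m0=0.2)   # simultaneous 20% myxomatosis outbreak
```

Running the two scenarios shows the peak RCD prevalence is lower when a myxomatosis outbreak is underway, because myxomatosis depletes the shared susceptible pool before RCD can reach it.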
Abstract:
There are emerging movements in several countries to improve policy and practice in order to protect children from exposure to domestic violence (EDV). These movements have resulted in the collection of new data on EDV and the design and implementation of new child welfare policies and practices. To assist with the development of child welfare practice, this article summarizes current knowledge on the prevalence of EDV, and on child welfare services policies and practices that may hold promise for reducing the frequency and impact of EDV on children. We focus on Australia, Canada, and the United States, as these countries share a similar socio-legal context, a long history of enacting and expanding legislation about reporting of maltreatment, debates regarding the application of reporting laws to EDV, and new child welfare practices that show promise for responding more effectively to EDV.
Abstract:
- International recognition of the need for a public health response to child maltreatment
- Need for early intervention at the health system level
- Important role of health professionals in identifying, reporting, and documenting suspicion of maltreatment
- Up to 10% of all children presenting at EDs are victims; without identification, 35% are reinjured and 5% die
- In Qld, mandatory reporting requirement for doctors and nurses for suspected abuse or neglect
Abstract:
The public apology to the Forgotten Australians in late 2009 was, for many, the culmination of a long campaign for recognition and justice. The groundswell for this apology was built through a series of submissions which documented the systemic institutionalised abuse and neglect experienced by the Forgotten Australians, which has resulted, for some, in life-long disadvantage and marginalisation. Interestingly, it seems that rather than the official documents being the catalyst for change and prompting this public apology, it was more often the personal stories of the Forgotten Australians that resonated and over time drew out a torrent of support from the public leading up to, during and after the public apology, just as had been the case with the ‘Stolen Generation’. Research suggests (cite) that the ethics of such national apologies only make sense if these personal stories are seen as a collective responsibility of society, and only carry weight if we understand and seek to nationally address the trauma experienced by such victims. In the case of the Forgotten Australians, the National Library of Australia’s Forgotten Australians and Former Child Migrants Oral History Project and the National Museum’s Inside project demonstrate commitment to the digitisation of the Forgotten Australians’ stories in order to promote a better public understanding of their experiences and to institutionally (and therefore formally) value them with renewed social importance. Our project builds on this work not by making or collecting more stories, but by examining the role of the internet and digital technologies used in the production and dissemination of individuals’ stories that had already been created in the period between the tabling of the senate inquiry, Children in Institutional Care (1999 or 2003?), and a formal national apology being delivered in Federal Parliament by PM Kevin Rudd (9 Nov, 2009?). 
This timeframe also represents the emergent first decade of Internet use by Australians, including the rapidly developing, easily accessible digital technologies and social media tools that were at our disposal, along with the promise the technology claimed to offer: that more people would benefit from the social connections these technologies allegedly gave us.
Abstract:
After decades of neglect, a growing number of scholars have turned their attention to issues of crime and criminal justice in the rural context. Despite this improvement, rural crime research remains theoretically underdeveloped and is little informed by critical criminological perspectives. In this article, we introduce the broad tenets of a multi-level theory that links social and economic change to the reinforcement of rural patriarchy and male peer support, and in turn, how these are linked to separation/divorce sexual assault. We begin by addressing a series of misconceptions about what is rural, rural homogeneity, and commonly held presumptions about the relationship between rurality, collective efficacy (and related concepts) and crime. We conclude by recommending more focused research, both qualitative and quantitative, to uncover specific links between the rural transformation and violence against women.
Abstract:
This chapter attends to the legal and political geographies of one of Earth's most important, valuable, and pressured spaces: the geostationary orbit. Since the first satellite, launched by NASA, entered it in 1964, this small, defined band of Outer Space, 35,786 km from the Earth's surface and only 30 km wide, has become a highly charged legal and geopolitical environment, yet it remains a space which is curiously unheard of outside specialist circles. For the thousands of satellites which now underpin the Earth's communication, media, and data industries and flows, the geostationary orbit is the prime position in Space. The geostationary orbit only has the physical capacity to hold approximately 1500 satellites; in 1997 there were approximately 1000. It is no overstatement to assert that the media, communication, and data industries would not be what they are today were it not for the geostationary orbit. This chapter provides a critical legal geography of the geostationary orbit, charting the topography of the debates and struggles to define and manage this highly important space. Drawing on key legal documents such as the Outer Space Treaty and the Moon Treaty, the chapter addresses fundamental questions about the legal geography of the orbit, questions which are of growing importance as the orbit’s available satellite spaces diminish and the orbit comes under increasing pressure. Who owns the geostationary orbit? Who, and whose rules, govern what may or may not (literally) take place within it? Who decides which satellites can occupy the orbit? Is the geostationary orbit the sovereign property of the equatorial states that lie beneath it, as these states argued in the 1970s? Or is it part of the res communis, the common property of humanity, which currently legally characterises Outer Space? 
As challenges to the existing legal spatiality of the orbit mount from launch states, companies, and potential launch states, it is particularly critical that the current spatiality of the orbit is understood and considered. One of the busiest arenas in which Outer Space's spatiality is contested is international territorial law. Mentions of Space law tend to evoke incredulity and ‘little green men’ jokes, but as Space becomes busier and busier, international Space law is growing in complexity and importance. The chapter draws on two key fields of research: cultural geography, and critical legal geography. It is framed by the cultural geographical concept of ‘spatiality’, a term which signals the multiple and dynamic nature of geographical space. As spatial theorists such as Henri Lefebvre assert, a space is never simply physical; rather, any space is always a jostling composite of material, imagined, and practiced geographies (Lefebvre 1991). The ways in which a culture perceives, represents, and legislates a space are as constitutive of its identity (its spatiality) as the physical topography of the ground itself. The second field in which this chapter is situated, critical legal geography, derives from cultural geography's focus on the cultural construction of spatiality. In his Law, Space and the Geographies of Power (1994), Nicholas Blomley asserts that analyses of territorial law largely neglect the spatial dimension of their investigations; rather than seeing the law as a force that produces specific kinds of spaces, they tend to position space as a neutral, universally legible entity which is neatly governed by the equally neutral 'external variable' of territorial law (28). 'In the hegemonic conception of the law,' Pue similarly argues, 'the entire world is transmuted into one vast isotropic surface' (1990: 568) on which law simply acts. 
But as the emerging field of critical legal geography demonstrates, law is not a neutral organiser of space; it is instead a powerful cultural technology of spatial production. Or as Delaney states, legal debates are “episodes in the social production of space” (2001, p. 494). International territorial law, in other words, makes space; it does not simply govern it. Drawing on these tenets of critical legal geography, as well as on the Lefebvrian concept of multipartite spatiality, this chapter does two things. First, it extends the field of critical legal geography into Space, a domain with which the field has yet to substantially engage. Second, it demonstrates that the legal spatiality of the geostationary orbit is both complex and contested, and argues that it is crucial that we understand this dynamic legal space on which the Earth’s communications systems rely.
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation, resulting in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now considered the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation brings together significant work undertaken locally and overseas, focusing mainly on septic tanks in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. 
The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks are an issue that needs to be resolved in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for the treatment of wastewater, and the disinfection of effluent prior to disposal, is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number do not perform to stipulated standards, and effluent quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages, and as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices. In recent years the use of biofilters, and particularly of peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. 
This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present: the processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance, so it is important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferred soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent. 
It has to be of exceptionally good quality to ensure that there are no resulting public health impacts due to aerosol drift. This is the main issue of concern, given the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and requires appropriate householder education and stringent compliance requirements. Despite all this, however, the efficiency with which the process is undertaken ultimately rests with the individual householder, and this is where most concern lies. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease. This is an issue of concern, as greywater can be considered a weak to medium sewage: it contains primary pollutants such as BOD material and nutrients, and may also include microbial contamination. Its use for surface irrigation can therefore pose a potential health risk, which is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subject to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration. 
The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems is covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions, so their applicability is location specific. The design of systems based solely on evapotranspiration is also questionable; to ensure greater reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest option under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Once the clogging mat forms, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics, and the mechanisms of clogging mat formation by various physical, chemical and biological processes. Biological clogging is the most common process and occurs when bacterial growth or its by-products reduce the soil pore diameters; it is generally associated with anaerobic conditions. The formation of the clogging mat also provides significant benefits. It acts as an efficient filter for the removal of microorganisms, and as it increases the hydraulic impedance to flow, unsaturated flow conditions occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process. 
This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This in fact is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration rest periods are contradictory. It has been claimed that intermittent rest periods result in the aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short-duration rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products formed by aerobic processes in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer, in the range of about six months, which entails the provision of a second, alternating seepage bed. Other concepts that have been investigated include designing the bed to meet the equilibrium infiltration rate that eventuates after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. Physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface have been shown to be only of short-term benefit. 
Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat, as is the nature of the suspended solids. The finer particles from extended aeration systems, compared to those from septic tanks, penetrate deeper into the soil and hence ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies, which have shown that effluent quality may be a factor in highly permeable soils but not necessarily in fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, leading to environmental and public health impacts. Significant microbial contamination of surface water and groundwater has been attributed to septic tank effluent, and there are a number of documented instances of septic-tank-related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A because no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities to ensure the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms. 
The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed, so dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area. Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems; this is likely to happen if saturated conditions persist under the soil absorption bed or if effluent runs off the surface as a result of system failure. Soils also have a finite capacity for the removal of phosphorus; once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important not only to ensure that the system design is based on subsurface conditions, but also to recognise that the density of these systems in a given area is a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site determines the overall capability classification for that site, which in turn dictates the type of effluent disposal method to be adopted.
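The link between groundwater dilution and a maximum allowable septic tank density can be made concrete with a simple steady-state mass balance. The function and every number below are illustrative assumptions (a hypothetical per-dwelling flow, effluent nitrogen concentration, recharge rate and drinking-water limit), not values taken from this report.

```python
# Illustrative nitrate mass-balance sketch for estimating allowable septic
# tank density. All figures are assumed example values, not standards.

def max_lots_per_hectare(effluent_flow_l_day=700.0,   # per dwelling (assumed)
                         effluent_n_mg_l=40.0,        # nitrate-N in effluent
                         recharge_mm_yr=150.0,        # rainfall recharge
                         background_n_mg_l=1.0,       # ambient groundwater N
                         limit_n_mg_l=10.0):          # drinking-water limit
    """Steady-state dilution model: the mixed groundwater nitrate-N
    concentration must not exceed the limit. Returns the maximum number
    of dwellings per hectare under the assumed inputs."""
    # Rainfall recharge volume over one hectare (10,000 m^2), in litres/day
    recharge_l_day = recharge_mm_yr / 1000.0 * 10_000 * 1000.0 / 365.0
    # Mass balance per hectare with n dwellings:
    #   (n*Qe*Ce + Qr*Cb) / (n*Qe + Qr) <= C_limit   -> solve for n
    qe, ce, qr, cb, cl = (effluent_flow_l_day, effluent_n_mg_l,
                          recharge_l_day, background_n_mg_l, limit_n_mg_l)
    return qr * (cl - cb) / (qe * (ce - cl))
```

Under these assumed values the balance works out to fewer than two dwellings per hectare; the recharge rate and effluent nitrogen load dominate the result, which is why subsurface conditions must drive the allowable density in practice.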
Abstract:
In 1962, Dr C. Henry Kempe and his colleagues published the single most important article written to date about child maltreatment: The Battered-Child Syndrome. This chapter analyses the threefold nature of what these authors achieved: clearly identifying the medical evidence of severe child physical abuse and naming it as a syndrome; identifying the medical profession's resistance to its recognition; and translating their scholarship into advocacy for social and legal change. The chapter also traces some of the effects of Kempe's work, including the nature and effect of the subsequent introduction of mandatory reporting laws in the USA and internationally.
Abstract:
Road traffic crashes have emerged as a major health problem around the world. Road crash fatalities and injuries have been reduced significantly in developed countries, but they remain a serious issue in low- and middle-income countries. The World Health Organization (WHO, 2009) estimates that the death toll from road crashes in low- and middle-income nations is more than 1 million people per year, or about 90% of the global road toll, even though these countries account for only 48% of the world's vehicles. Furthermore, it is estimated that approximately 265,000 people die every year in road crashes in South Asian countries, with Pakistan standing out at approximately 41,494 deaths per year. Pakistan has the highest rate of fatalities per 100,000 population in the region, and its road crash fatality rate of 25.3 per 100,000 population is more than three times Australia's. High numbers of road crashes not only cause pain and suffering to the population at large, but are also a serious drain on the country's economy, which Pakistan can ill afford. Most studies identify human factors as the main set of contributing factors to road crashes, well ahead of road environment and vehicle factors. In developing countries especially, attention and resources are required to address issues such as vehicle roadworthiness and poor road infrastructure. However, attention to human factors is also critical. Human factors which contribute to crashes include high-risk behaviours such as speeding and drink driving, and neglect of protective behaviours such as helmet wearing and seat belt wearing. Much research has been devoted to the attitudes, beliefs and perceptions which contribute to these behaviours and omissions, in order to develop interventions aimed at increasing safer road use behaviours and thereby reducing crashes. 
However, less progress has been made in addressing human factors contributing to crashes in developing countries as compared to the many improvements in road environments and vehicle standards, and this is especially true of fatalistic beliefs and behaviours. This is a significant omission, since in different cultures in developing countries there are strong worldviews in which predestination persists as a central idea, i.e. that one's life (and death) and other events have been mapped out and are predetermined. Fatalism refers to a particular way in which people regard the events that occur in their lives, usually expressed as a belief that an individual does not have personal control over circumstances and that their lives are determined through a divine or powerful external agency (Hazen & Ehiri, 2006). These views are at odds with the dominant themes of modern health promotion movements, and present significant challenges for health advocates who aim to avert road crashes and diminish their consequences. The limited literature on fatalism reveals that it is not a simple concept, with religion, culture, superstition, experience, education and degree of perceived control of one's life all being implicated in accounts of fatalism. One distinction in the literature that seems promising is the distinction between empirical and theological fatalism, although there are areas of uncertainty about how well-defined the distinction between these types of fatalism is. Research into road safety in Pakistan is scarce, as is the case for other South Asian countries. From the review of the literature conducted, it is clear that the descriptions given of the different belief systems in developing countries including Pakistan are not entirely helpful for health promotion purposes and that further research is warranted on the influence of fatalism, superstition and other related beliefs in road safety. 
Based on the information available, a conceptual framework is developed as a means of structuring and focusing the research and analysis. The framework is focused on the influence of fatalism, superstition, religion and culture on beliefs about crashes and road user behaviour. Accordingly, this research aims to provide an understanding of the operation of fatalism and related beliefs in Pakistan to assist in the development and implementation of effective and culturally appropriate interventions. The research examines the influence of fatalism, superstition, religious and cultural beliefs on risky road use in Pakistan and is guided by three research questions: 1. What are the perceptions of road crash causation in Pakistan, in particular the role of fatalism, superstition, religious and cultural beliefs? 2. How do fatalism, superstition, and religious and cultural beliefs influence road user behaviour in Pakistan? 3. Do fatalism, superstition, and religious and cultural beliefs work as obstacles to road safety interventions in Pakistan? To address these questions, a qualitative research methodology was developed. The research focused on gathering data through individual in-depth interviews using a semi-structured format. A sample of 30 participants was interviewed in Pakistan in the cities of Lahore, Rawalpindi and Islamabad. The participants included policy makers (with responsibility for traffic law), experienced police officers, religious orators, professional drivers (truck, bus and taxi) and general drivers selected through a combination of purposive, criterion and snowball sampling. The transcripts were translated from Urdu and analysed using a thematic analysis approach guided by the conceptual framework. 
The findings were divided into four areas: attribution of crash causation to fatalism; attribution of road crashes to beliefs about superstition and malicious acts; beliefs about road crash causation linked to popular concepts of religion; and implications for behaviour, safety and enforcement. Fatalism was almost universally evident, and expressed in a number of ways. Fate was used to rationalise fatal crashes using the argument that the people killed were destined to die that day, one way or another. Related to this was the sense of either not being fully in control of the vehicle, or not needing to take safety precautions, because crashes were predestined anyway. A variety of superstition-based crash attributions and coping methods to deal with road crashes were also found, such as belief in the role of the evil eye in contributing to road crashes and the use of black magic by rivals or enemies as a crash cause. There were also beliefs related to popular conceptions of religion, such as the role of crashes as a test of life or a source of martyrdom. However, superstitions did not appear to be an alternative to religious beliefs. Fate appeared as the 'default attribution' for a crash when all other explanations failed to account for the incident. This pervasive belief was utilised to justify risky road use behaviour and to resist messages about preventive measures. There was a strong religious underpinning to the statement of fatalistic beliefs (this reflects popular conceptions of Islam rather than scholarly interpretations), but also an overlap with superstitious and other culturally and religiously based beliefs which have longer-standing roots in Pakistani culture. A particular issue which is explored in more detail is the way in which these beliefs and their interpretation within Pakistani society contributed to poor police reporting of crashes. 
The pervasive nature of fatalistic beliefs in Pakistan affects road user behaviour by supporting continued risk-taking behaviour on the road, and by interfering with public health messages about behaviours which would reduce the risk of traffic crashes. The widespread influence of these beliefs on the ways that people respond to traffic crashes and the death of family members contributes to low crash reporting rates and to a system which appears difficult to change. Fate also appeared to be a major contributing factor to non-reporting of road crashes. There also appeared to be a relationship between police enforcement and (lack of) awareness of road rules. It also appears likely that beliefs can influence police work, especially in the case of road crash investigation and the development of strategies. It is anticipated that the findings could be used as a blueprint for the design of interventions aimed at influencing broad-spectrum health attitudes and practices among the communities where fatalism is prevalent. The findings have also identified aspects of beliefs that have complex social implications when designing and piloting driver intervention strategies. By understanding attitudes and behaviours related to fatalism, superstition and other related concepts, it should be possible to improve the education of general road users, such that they are less likely to attribute road crashes to chance, fate, or superstition. This study also underscores the importance of understanding this issue among the higher echelons of society (e.g., policy makers, senior police officers), as their role is vital in dispelling road users' misconceptions about the risks of road crashes. The promotion of an evidence-based or scientifically based approach to road user behaviour and road safety is recommended, along with improved professional education for police and policy makers.
Abstract:
The ability to forecast machinery health is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models which attempt to forecast machinery health based on condition data such as vibration measurements. This paper demonstrates how the population characteristics and condition monitoring data (both complete and suspended) of historical items can be integrated for training an intelligent agent to predict asset health multiple steps ahead. The model consists of a feed-forward neural network whose training targets are asset survival probabilities estimated using a variation of the Kaplan–Meier estimator and a degradation-based failure probability density function estimator. The trained network is capable of estimating the future survival probabilities when a series of asset condition readings are input. The output survival probabilities collectively form an estimated survival curve. Pump data from a pulp and paper mill were used for model validation and comparison. The results indicate that the proposed model can predict more accurately as well as further ahead than similar models which neglect population characteristics and suspended data. This work presents a compelling concept for longer-range fault prognosis utilising available information more fully and accurately.
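The training targets above are survival probabilities from a variation of the Kaplan–Meier estimator. As a point of reference only, a minimal sketch of the standard Kaplan–Meier estimate, which uses both failures and suspensions (censored observations), might look like the following; the function name and data layout are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

def kaplan_meier(times, events):
    """Standard Kaplan-Meier survival estimate (illustrative sketch).

    times  : observed lifetimes of historical items
    events : 1 = failure, 0 = suspension (right-censored)
    Returns [(t, S(t))] at each distinct failure time.
    """
    failures = Counter(t for t, e in zip(times, events) if e == 1)
    n_at_risk = len(times)  # items still operating just before time t
    curve, s = [], 1.0
    for t in sorted(set(times)):
        d = failures.get(t, 0)
        if d:
            # survival drops by the conditional failure fraction at t
            s *= (n_at_risk - d) / n_at_risk
            curve.append((t, s))
        # both failed and suspended items leave the risk set after t
        n_at_risk -= sum(1 for x in times if x == t)
    return curve
```

Note that the suspended item at time 3 in a sample such as `times=[2, 3, 3, 5]`, `events=[1, 1, 0, 1]` still counts toward the risk set, which is precisely the information discarded by models that ignore suspended data.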
Abstract:
The Teacher Reporting Attitude Scale (TRAS) is a newly developed tool to assess teachers’ attitudes toward reporting child abuse and neglect. This article reports on an investigation of the factor structure and psychometric properties of the short form Malay version of the TRAS. A self-report cross-sectional survey was conducted with 667 teachers in 14 randomly selected schools in Selangor state, Malaysia. Analyses were conducted in a 3-stage process using both confirmatory (stages 1 and 3) and exploratory factor analyses (stage 2) to test, modify, and confirm the underlying factor structure of the TRAS in a non-Western teacher sample. Confirmatory factor analysis did not support a 3-factor model previously reported in the original TRAS study. Exploratory factor analysis revealed an 8-item, 4-factor structure. Further confirmatory factor analysis demonstrated appropriateness of the 4-factor structure. Reliability estimates for the four factors—commitment, value, concern, and confidence—were moderate. The modified short form TRAS (Malay version) has potential to be used as a simple tool for relatively quick assessment of teachers’ attitudes toward reporting child abuse and neglect. Cross-cultural differences in attitudes toward reporting may exist and the transferability of newly developed instruments to other populations should be evaluated.
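The abstract does not specify which reliability statistic was used, but internal-consistency estimates for scale factors such as these are commonly Cronbach's alpha. As a hedged illustration of how such an estimate is computed from item responses (the data layout is an assumption):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale (illustrative sketch).

    items : one list of scores per scale item; index i across lists
            refers to the same respondent.
    """
    k = len(items)            # number of items in the factor
    n = len(items[0])         # number of respondents

    def var(xs):              # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # each respondent's total score across the k items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(c) for c in items) / var(totals))
```

With perfectly consistent items the statistic reaches 1.0; "moderate" reliability, as reported for the four TRAS factors, typically corresponds to values in the 0.6-0.8 range.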
Abstract:
Our contemporary public sphere has seen the 'emergence of new political rituals, which are concerned with the stains of the past, with self disclosure, and with ways of remembering once taboo and traumatic events' (Misztal, 2005). A recent case of this phenomenon occurred in Australia in 2009 with the apology to the 'Forgotten Australians': a group who suffered abuse and neglect after being removed from their parents – either in Australia or in the UK – and placed in Church- and State-run institutions in Australia between 1930 and 1970. This campaign for recognition by a profoundly marginalized group coincides with the decade in which the opportunities of Web 2.0 were seen to be diffusing throughout different social groups, and were considered a tool for social inclusion. This paper examines the case of the Forgotten Australians as an opportunity to investigate the role of the internet in cultural trauma and public apology. As such, it adds to recent scholarship on the role of digital web-based technologies in commemoration and memorials (Arthur, 2009; Haskins, 2007; Cohen and Willis, 2004), and on digital storytelling in the context of trauma (Klaebe, 2011) by locating their role in a broader and emerging domain of social responsibility and political action (Alexander, 2004).
Abstract:
The ability to estimate the asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to the traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the principle theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternative models to PHM have been suggested. The existing covariate-based hazard models neglect to fully utilise three types of asset health information (including failure event data (i.e. 
observed and/or suspended), condition data, and operating environment data) in a single model to produce more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables) whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into the covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. 
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because these indicators are observed and measured for as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that this model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. 
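The structure described above can be made concrete with a toy calculation. In the sketch below, a Weibull baseline hazard is updated by a condition indicator, while operating environment indicators enter through a multiplicative exponential covariate function; every functional form and parameter value here is an illustrative assumption, not the actual EHM specification:

```python
import math

def ehm_style_hazard(t, cond, env, beta=2.0, eta=1000.0, alpha=0.01, gamma=(0.3,)):
    """Toy hazard in the spirit of EHM (all forms and parameters assumed).

    t    : operating time
    cond : condition indicator (e.g. vibration level) updating the baseline
    env  : tuple of operating environment indicators (e.g. load)
    """
    # Baseline hazard: Weibull in time, reformed by the condition indicator
    # (in PHM the baseline would depend on time alone)
    baseline = (beta / eta) * (t / eta) ** (beta - 1) * math.exp(alpha * cond)
    # Covariate function: environment indicators accelerate or decelerate failure
    covariate = math.exp(sum(g * z for g, z in zip(gamma, env)))
    return baseline * covariate
```

With `cond = 0` and `env = (0,)` this reduces to the plain Weibull baseline, mirroring the statement that operating environment effects may be nil; unlike PHM, the baseline itself shifts with the condition indicator rather than remaining a fixed function of time.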
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models was appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison results demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.