380 results for Significant events


Relevance:

20.00%

Publisher:

Abstract:

The global business environment is witnessing tough times, and this situation has significant implications for how organizations manage their processes and resources. The accounting information system (AIS) plays a critical role in this situation, ensuring appropriate processing of financial transactions and availability of relevant information for decision-making. We suggest the need for a dynamic AIS environment for today's turbulent business environment. This environment is possible with a dynamic AIS, complementary business intelligence systems, and technical human capability. Data collected through a field survey suggests that the dynamic AIS environment contributes to an organization's accounting functions of processing transactions, providing information for decision-making, and ensuring an appropriate control environment. These accounting processes contribute to the firm-level performance of the organization. From these outcomes, one can infer that a dynamic AIS environment contributes to organizational performance in today's challenging business environment.

Relevance:

20.00%

Publisher:

Abstract:

The current approach for protecting the receiving water environment from urban stormwater pollution is the adoption of structural measures commonly referred to as Water Sensitive Urban Design (WSUD). The treatment efficiency of WSUD measures closely depends on the design of the specific treatment units. As stormwater quality can be influenced by rainfall characteristics, the selection of appropriate rainfall events for treatment design is essential to ensure the effectiveness of WSUD systems. Based on extensive field investigation of four urban residential catchments and computer modelling, this paper details a technically robust approach for selecting rainfall events for stormwater treatment design using a three-component model. The modelling outcomes indicate that selecting smaller average recurrence interval (ARI) events with high-intensity, short-duration rainfall as the threshold for treatment system design is the most feasible, since these events cumulatively generate a major portion of the annual pollutant load compared to other types of rainfall events, despite producing a relatively smaller runoff volume. This implies that designs based on small, frequent rainfall events rather than larger rainfall events would be appropriate in terms of treatment performance, cost-effectiveness and possible savings in the land area needed.
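The event-selection logic above can be sketched as a ranking of rainfall-event classes by pollutant load delivered per unit of runoff a treatment system must handle. The class labels and their load/volume shares below are hypothetical illustrations, not the paper's monitoring results:

```python
# Hypothetical event classes: (label, share of annual runoff volume,
# share of annual pollutant load). Values are illustrative only.
events = [
    ("<1-yr ARI, high-intensity short-duration", 0.35, 0.60),
    ("1-2 yr ARI", 0.40, 0.30),
    (">2 yr ARI (large, infrequent)", 0.25, 0.10),
]

def load_per_unit_runoff(event):
    """Pollutant load captured per unit of runoff volume treated."""
    _, runoff_share, load_share = event
    return load_share / runoff_share

# Rank classes by treatment value: under these assumed shares, the small,
# frequent high-intensity events deliver the most load per unit of runoff.
ranked = sorted(events, key=load_per_unit_runoff, reverse=True)
best = ranked[0][0]
print(best)
```

Under these assumed shares, sizing a system for the small, frequent events captures most of the annual load while treating the least runoff volume, mirroring the paper's conclusion.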

Relevance:

20.00%

Publisher:

Abstract:

Bioacoustic data can provide an important base for environmental monitoring. To explore the large volumes of field recordings collected, this paper presents an automated similarity search algorithm. A region of an audio recording, defined by frequency and time bounds, is provided by a user; the content of the region is used to construct a query. In the retrieval process, our algorithm automatically scans through recordings to search for similar regions. In detail, we present a feature extraction approach based on the visual content of vocalisations – in this case ridges – and develop a generic regional representation of vocalisations for indexing. Our feature extraction method works best for bird vocalisations showing ridge characteristics. The regional representation method allows the content of an arbitrary region of a continuous recording to be described in a compressed format.
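The region-query idea can be sketched as follows. The paper's actual ridge descriptor and indexing scheme are not reproduced here; a plain Jaccard overlap between binary ridge masks stands in for them, and the masks are toy data:

```python
# Sketch: a query region is reduced to a binary ridge mask (freq x time),
# then slid along a recording's mask to find the most similar region.

def jaccard(a, b):
    """Similarity of two equally-sized binary masks (lists of rows)."""
    inter = union = 0
    for row_a, row_b in zip(a, b):
        for x, y in zip(row_a, row_b):
            inter += x and y
            union += x or y
    return inter / union if union else 0.0

def scan(recording, query):
    """Slide the query mask over the recording along the time axis and
    return (best start time, best score)."""
    qw = len(query[0])
    width = len(recording[0])
    scores = []
    for t in range(width - qw + 1):
        window = [row[t:t + qw] for row in recording]
        scores.append((t, jaccard(query, window)))
    return max(scores, key=lambda s: s[1])

query = [[1, 1], [0, 1]]                        # a tiny 2x2 ridge pattern
recording = [[0, 0, 1, 1, 0], [0, 0, 0, 1, 0]]  # same pattern starts at t=2
print(scan(recording, query))  # (2, 1.0)
```

A real system would compute masks from spectrograms via ridge detection and use an index rather than a linear scan, but the query-by-region flow is the same.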

Relevance:

20.00%

Publisher:

Abstract:

This dissertation seeks to define and classify potential forms of Nonlinear structure and explore the possibilities they afford for the creation of new musical works. It provides the first comprehensive framework for the discussion of Nonlinear structure in musical works and provides a detailed overview of the rise of nonlinearity in music during the 20th century. Nonlinear events are shown to emerge through significant parametrical discontinuity at the boundaries between regions of relatively strong internal cohesion. The dissertation situates Nonlinear structures in relation to linear structures and unstructured sonic phenomena and provides a means of evaluating Nonlinearity in a musical structure through the consideration of the degree to which the structure is integrated, contingent, compressible and determinate as a whole. It is proposed that Nonlinearity can be classified as a three-dimensional space described by three continua: the temporal continuum, encompassing sequential and multilinear forms of organization; the narrative continuum, encompassing processual, game structure and developmental narrative forms; and the referential continuum, encompassing stylistic allusion, adaptation and quotation. The use of spectrograms of recorded musical works is proposed as a means of evaluating Nonlinearity in a musical work through the visual representation of parametrical divergence in pitch, duration, timbre and dynamic over time. Spectral and structural analysis of repertoire works is undertaken as part of an exploration of musical nonlinearity and the compositional and performative features that characterize it. The contribution of cultural, ideological, scientific and technological shifts to the emergence of Nonlinearity in music is discussed and a range of compositional factors that contributed to the emergence of musical Nonlinearity is examined.
The evolution of notational innovations from the mobile score to the screen score is plotted and a novel framework for the discussion of these forms of musical transmission is proposed. A computer-coordinated performative model is discussed, in which a computer synchronises the screening of notational information, provides temporal coordination of the performers through click-tracks or similar methods, and synchronises the audio processing and synthesized elements of the work. It is proposed that such a model constitutes a highly effective means of realizing complex Nonlinear structures. A creative folio comprising 29 original works that explore nonlinearity is presented, discussed and categorised utilising the proposed classifications. Spectrograms of these works are employed where appropriate to illustrate the instantiation of parametrically divergent substructures and examples of structural openness through multiple versioning.

Relevance:

20.00%

Publisher:

Abstract:

At our regional university's low socioeconomic status (SES) campus, enrolled nurses can enter directly into the second year of a Bachelor of Nursing. These students therefore have their first-year experience while entering directly into the second year of the degree. A third of these students withdrew from our Bioscience units and left the university. In an attempt to improve student retention and success, we introduced a strategy involving (i) review lectures in each of the Bioscience disciplines and, subsequently, (ii) "Getting started", a formative website activity covering basic Bioscience concepts, (iii) an 'O'-week workshop addressing study skills and online resources, and (iv) online tutor support. In addition to being well received, the introduction of the review lectures and the full intervention was associated with a significant reduction in student attrition. This successful approach could be used in other low SES areas with accelerated programs for Nursing and may have application beyond this discipline.

Relevance:

20.00%

Publisher:

Abstract:

Recent road safety statistics show that the decades-long decreasing trend in fatalities is stopping and stagnating. Statistics further show that crashes are mostly driven by human error, compared to other factors such as environmental conditions and mechanical defects. Within human error, the dominant error source is perceptive errors, which represent about 50% of the total. The next two sources, interpretation and evaluation, together with perception account for more than 75% of human error related crashes. These statistics show that allowing drivers to perceive and understand their environment better, or supplementing them when they are clearly at fault, is a path to sound assessment of road risk and, as a consequence, to further decreasing fatalities. To address this problem, currently deployed driving assistance systems combine more and more information from diverse sources (sensors) to enhance the driver's perception of their environment. However, because of inherent limitations in range and field of view, these systems' perception of their environment remains largely limited to a small interest zone around a single vehicle. Such limitations can be overcome by increasing the interest zone through a cooperative process. Cooperative Systems (CS), a specific subset of Intelligent Transportation Systems (ITS), aim at compensating for local systems' limitations by associating embedded information technology and intervehicular communication technology (IVC). With CS, information sources are no longer limited to a single vehicle. From this distribution arises the concept of extended, or augmented, perception. Augmented perception extends an actor's perceptive horizon beyond its "natural" limits by fusing information not only from multiple in-vehicle sensors but also from remote sensors. The end result of an augmented perception and data fusion chain is known as an augmented map.
It is a repository where any relevant information about objects in the environment, and the environment itself, can be stored in a layered architecture. This thesis aims at demonstrating that augmented perception performs better than non-cooperative approaches, and that it can be used to successfully identify road risk. We found it necessary to evaluate the performance of augmented perception in order to better understand its limitations. Indeed, while many promising results have already been obtained, the feasibility of building an augmented map from exchanged local perception information, and then using this information beneficially for road users, has not yet been thoroughly assessed, nor have the limitations of augmented perception and its underlying technologies. Most notably, many questions remain unanswered as to IVC performance and its ability to deliver appropriate quality of service to support life-saving critical systems. This is especially true as the road environment is a complex, highly variable setting where many sources of imperfections and errors exist, not limited to IVC. We first provide a discussion of these limitations and a performance model built to incorporate them, created from empirical data collected on test tracks. Our results are more pessimistic than the existing literature, suggesting IVC limitations have been underestimated. Then, we develop a new CS-applications simulation architecture. This architecture is used to obtain new results on the safety benefits of a cooperative safety application (EEBL), and then to support further study of augmented perception. At first, we confirm earlier results in terms of reduced crash numbers, but raise doubts about benefits in terms of crash severity. In the next step, we implement an augmented perception architecture tasked with creating an augmented map.
Our approach aims at providing a generalist architecture that can use many different types of sensors to create the map, and that is not limited to any specific application. The data association problem is tackled with an MHT approach based on Belief Theory. Then, augmented and single-vehicle perception are compared in a reference driving scenario for risk assessment, taking into account the IVC limitations obtained earlier; we show their impact on the augmented map's performance. Our results show that augmented perception performs better than non-cooperative approaches, almost tripling the advance warning time before a crash. IVC limitations appear to have no significant effect on this performance, although this might hold only for our specific scenario. Eventually, we propose a new approach using augmented perception to identify road risk through a surrogate: near-miss events. A CS-based approach is designed and validated to detect near-miss events, and then compared to a non-cooperative approach based on vehicles equipped with local sensors only. The cooperative approach shows a significant improvement in the number of events that can be detected, especially at higher rates of system deployment.
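A minimal sketch of the augmented-map idea described above, assuming simple dictionary observations and a nearest-neighbour distance gate in place of the thesis's MHT/Belief Theory data association; all positions and the gate distance are illustrative:

```python
# Sketch: merge a vehicle's local perception with observations received
# over IVC into one "augmented map", de-duplicating objects seen by both.
import math

GATE = 2.0  # metres: observations closer than this count as the same object

def fuse(local_obs, remote_obs):
    """Merge remote observations into the local map; matched objects have
    their positions averaged, unmatched ones extend the map."""
    amap = [dict(o, sources=1) for o in local_obs]
    for r in remote_obs:
        match = min(amap, key=lambda o: math.dist((o["x"], o["y"]),
                                                  (r["x"], r["y"])),
                    default=None)
        if match and math.dist((match["x"], match["y"]),
                               (r["x"], r["y"])) < GATE:
            n = match["sources"]
            match["x"] = (match["x"] * n + r["x"]) / (n + 1)
            match["y"] = (match["y"] * n + r["y"]) / (n + 1)
            match["sources"] = n + 1
        else:
            amap.append(dict(r, sources=1))
    return amap

local = [{"x": 10.0, "y": 0.0}]      # seen by the vehicle's own sensors
remote = [{"x": 10.4, "y": 0.3},     # same object, reported via IVC
          {"x": 80.0, "y": 1.0}]     # beyond local sensor range
augmented = fuse(local, remote)
print(len(augmented))  # 2: perception extended beyond the local horizon
```

The second remote object illustrates the extended perceptive horizon: it enters the map even though no local sensor could see it.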

Relevance:

20.00%

Publisher:

Abstract:

The current state of knowledge in relation to first flush does not provide a clear understanding of the role of rainfall and catchment characteristics in influencing this phenomenon. This is attributed to inconsistent findings across research studies, due to the unsatisfactory selection of first flush indicators and how first flush is defined. The research study discussed in this thesis provides the outcomes of a comprehensive analysis of the influence of rainfall and catchment characteristics on first flush behaviour in residential catchments. Two sets of first flush indicators are introduced in this study. These indicators were selected to explain, in a systematic manner, the characteristics associated with first flush. Stormwater samples and rainfall-runoff data were collected and recorded from stormwater monitoring stations established at three urban catchments at Coomera Waters, Gold Coast, Australia. In addition, historical data were also used to support the data analysis. Three water quality parameters were analysed, namely, total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The data analyses were primarily undertaken using multi-criteria decision making methods, PROMETHEE and GAIA. Based on the data obtained, the pollutant load distribution curve (LV) was determined for the individual rainfall events and pollutant types. Accordingly, two sets of first flush indicators were derived from the curve, namely, the cumulative load wash-off for every 10% of runoff volume interval (interval first flush indicators or LV) from the beginning of the event, and the actual pollutant load wash-off during a 10% increment in runoff volume (section first flush indicators or P). First flush behaviour showed significant variation with pollutant types. TSS and TP showed consistent first flush behaviour.
However, the dissolved fraction of TN showed significant differences from TSS and TP first flush, while particulate TN showed similarities. Wash-off of TSS, TP and particulate TN during the first 10% of the runoff volume showed no influence from the corresponding rainfall intensity. This was attributed to the wash-off of weakly adhered solids on the catchment surface, referred to as the "short term pollutants" or "weakly adhered solids" load. However, wash-off after 10% of the runoff volume showed dependency on rainfall intensity. This is attributed to the wash-off of strongly adhered solids being exposed once the weakly adhered solids diminish. The wash-off process was also found to depend on rainfall depth in the latter part of the event, as the strongly adhered solids are loosened by the impact of rainfall in the earlier part of the event. Events with high-intensity rainfall bursts after 70% of the runoff volume did not demonstrate first flush behaviour. This suggests that rainfall pattern plays a critical role in the occurrence of first flush. Rainfall intensity (relative to the rest of the event) that produces 10% to 20% of the runoff volume plays an important role in defining the magnitude of the first flush. Events can demonstrate high magnitude first flush when the rainfall intensity occurring between 10% and 20% of the runoff volume is comparatively high, while low rainfall intensities during this period produce low magnitude first flush. For events with first flush, the phenomenon is clearly visible up to 40% of the runoff volume. This contradicts the common definition that first flush only exists if, for example, 80% of the pollutant mass is transported in the first 30% of runoff volume. First flush behaviour for TN is different compared to TSS and TP. Apart from rainfall characteristics, the composition and availability of TN on the catchment also play an important role in first flush.
The analysis confirmed that events with low rainfall intensity can produce high magnitude first flush for the dissolved fraction of TN, while high rainfall intensity produces low dissolved TN first flush. This is attributed to the source-limiting behaviour of dissolved TN wash-off, where there is high wash-off during the initial part of a rainfall event irrespective of the intensity. However, for particulate TN, the influence of rainfall intensity on first flush characteristics is similar to TSS and TP. The data analysis also confirmed that first flush can occur as high magnitude first flush, low magnitude first flush or non-existence of first flush. Investigation of the influence of catchment characteristics on first flush found that the key factors influencing the phenomenon are the location of the pollutant source, the spatial distribution of pervious and impervious surfaces in the catchment, the drainage network layout and the slope of the catchment. This confirms that the first flush phenomenon cannot be evaluated based on a single or limited set of parameters, as a number of catchment characteristics should be taken into account. Catchments where the pollutant source is located close to the outlet, with a high fraction of road surfaces, short travel times to the outlet and steep slopes, can produce a high wash-off load during the first 50% of the runoff volume. Rainfall characteristics have a comparatively dominant impact on the wash-off process compared to catchment characteristics. In addition, pollutant characteristics should also be taken into account in designing stormwater treatment systems due to different wash-off behaviours. Analysis outcomes confirmed that there is a high TSS load during the first 20% of the runoff volume, followed by TN, which can extend up to 30% of the runoff volume. In contrast, a high TP load can exist during the initial and end parts of a rainfall event. This is related to the composition of TP available for wash-off.
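The two indicator sets can be sketched directly from a pollutant load distribution curve: the cumulative load at each 10% runoff-volume interval (the interval indicators, LV) and the incremental load in each 10% slice (the section indicators, P). The cumulative load series below is hypothetical, shaped like a TSS event with a pronounced first flush:

```python
# Sketch: derive interval (LV) and section (P) first flush indicators
# from a cumulative pollutant load distribution curve.

def indicators(cum_load_at_decile):
    """cum_load_at_decile: cumulative pollutant load fraction at
    10%, 20%, ..., 100% of runoff volume (last value is 1.0)."""
    LV = list(cum_load_at_decile)
    P = [LV[0]] + [LV[i] - LV[i - 1] for i in range(1, len(LV))]
    return LV, P

# A hypothetical event where load wash-off runs ahead of runoff volume.
cum_load = [0.30, 0.48, 0.60, 0.70, 0.78, 0.85, 0.90, 0.94, 0.97, 1.00]
LV, P = indicators(cum_load)

# First flush is present when early LV exceeds the runoff fraction delivered.
print(LV[3] > 0.40)    # True: 70% of the load in the first 40% of runoff
print(max(P) == P[0])  # True: the largest 10% slice is the first one
```

With this illustrative curve, the flush is visible up to about 40% of the runoff volume, matching the thesis's observation that the common "80% of load in 30% of runoff" definition is too restrictive.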

Relevance:

20.00%

Publisher:

Abstract:

Distraction resulting from mobile phone use whilst driving has been shown to increase the reaction times of drivers, thereby increasing the likelihood of a crash. This study compares the effects of mobile phone conversations on the reaction times of drivers responding to traffic events that occur at different points in a driver's field of view. The CARRS-Q Advanced Driving Simulator was used to test a group of young drivers on various simulated driving tasks, including a traffic event that occurred within the driver's central vision—a lead vehicle braking suddenly—and an event that occurred within the driver's peripheral vision—a pedestrian entering a zebra crossing from a footpath. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), and while engaged in hands-free and handheld phone conversations. The drivers were aged between 21 and 26 years and split evenly by gender. Differences in reaction times for an event in a driver's central vision were not statistically significant across phone conditions, probably due to a lower speed selection by the distracted drivers. In contrast, the reaction times to detect an event that originated in a distracted driver's peripheral vision were more than 50% longer compared to the baseline condition. A further statistical analysis revealed that the deterioration of reaction times to an event in the peripheral vision was greatest for distracted drivers holding a provisional licence. Many critical events originate in a driver's periphery, including vehicles, bicyclists, and pedestrians emerging from side streets. A reduction in the ability to detect these events while distracted presents a significant safety concern that must be addressed.

Relevance:

20.00%

Publisher:

Abstract:

The use of mobile phones while driving is more prevalent among young drivers—a less experienced cohort with elevated crash risk. The objective of this study was to examine and better understand the reaction times of young drivers to a traffic event originating in their peripheral vision whilst engaged in a mobile phone conversation. The CARRS-Q Advanced Driving Simulator was used to test a sample of young drivers on various simulated driving tasks, including an event that originated within the driver's peripheral vision, whereby a pedestrian enters a zebra crossing from a sidewalk. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free and handheld. In addition to driving the simulator, each participant completed questionnaires related to driver demographics, driving history, usage of mobile phones while driving, and general mobile phone usage history. The participants were 21 to 26 years old and split evenly by gender. Drivers' reaction times to a pedestrian in the zebra crossing were modelled using a parametric accelerated failure time (AFT) duration model with a Weibull distribution. Also tested were two different model specifications to account for the structured heterogeneity arising from the repeated measures experimental design. The Weibull AFT model with gamma heterogeneity was found to be the best fitting model and identified four significant variables influencing reaction times: phone condition, driver's age, licence type (provisional licence holder or not), and self-reported frequency of usage of handheld phones while driving. The reaction times of drivers were more than 40% longer in the distracted condition compared to baseline (not distracted). Moreover, the impairment of reaction times due to mobile phone conversations was almost double for provisional compared to open licence holders.
A reduction in the ability to detect traffic events in the periphery whilst distracted presents a significant and measurable safety concern that will undoubtedly persist unless mitigated.
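The way AFT coefficients act on reaction times can be illustrated with a small sketch: in an accelerated failure time model, ln(T) = xβ + ε, so each covariate coefficient multiplies the time scale by exp(β). The baseline time and coefficient values below are illustrative stand-ins chosen only to mirror the reported ~40% slowing and the roughly doubled impairment for provisional licence holders; they are not the study's fitted estimates:

```python
# Sketch: how AFT coefficients scale a median reaction time.
import math

baseline_rt = 1.00              # s: hypothetical baseline median RT
b_distracted = math.log(1.40)   # distracted RT ~40% longer (reported)
b_provisional = math.log(1.40)  # illustrative extra factor so provisional
                                # impairment is roughly double

def median_rt(distracted, provisional):
    """Median reaction time under the chosen (hypothetical) covariates."""
    x_beta = (distracted * b_distracted
              + (distracted and provisional) * b_provisional)
    return baseline_rt * math.exp(x_beta)

print(round(median_rt(False, False), 2))  # 1.0  baseline
print(round(median_rt(True, False), 2))   # 1.4  open licence, distracted
print(round(median_rt(True, True), 2))    # 1.96 provisional, distracted
```

The multiplicative structure is why AFT effects are naturally reported as percentage increases in reaction time.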

Relevance:

20.00%

Publisher:

Abstract:

Purpose Commencing selected workouts with low muscle glycogen availability augments several markers of training adaptation compared with undertaking the same sessions with normal glycogen content. However, low glycogen availability reduces the capacity to perform high-intensity (>85% of peak aerobic power (VO2peak)) endurance exercise. We determined whether a low dose of caffeine could partially rescue the reduction in maximal self-selected power output observed when individuals commenced high-intensity interval training with low (LOW) compared with normal (NORM) glycogen availability. Methods Twelve endurance-trained cyclists/triathletes performed four experimental trials using a double-blind Latin square design. Muscle glycogen content was manipulated via exercise–diet interventions so that two experimental trials were commenced with LOW and two with NORM muscle glycogen availability. Sixty minutes before an experimental trial, subjects ingested a capsule containing anhydrous caffeine (CAFF; 3 mg·kg-1 body mass) or placebo (PLBO). Instantaneous power output was measured throughout high-intensity interval training (8 × 5-min bouts at maximum self-selected intensity with 1-min recovery). Results There were significant main effects for both preexercise glycogen content and caffeine ingestion on power output. LOW reduced power output by approximately 8% compared with NORM (P < 0.01), whereas caffeine increased power output by 2.8% and 3.5% for NORM and LOW, respectively (P < 0.01). Conclusion We conclude that caffeine enhanced power output independently of muscle glycogen concentration but could not fully restore power output to levels commensurate with those when subjects commenced exercise with normal glycogen availability.
However, the reported increase in power output does provide a likely performance benefit and may provide a means to further enhance the already augmented training response observed when selected sessions are commenced with reduced muscle glycogen availability. It has long been known that endurance training induces a multitude of metabolic and morphological adaptations that improve the resistance of the trained musculature to fatigue and enhance endurance capacity and/or exercise performance (13). Accumulating evidence now suggests that many of these adaptations can be modified by nutrient availability (9–11,21). Growing evidence suggests that training with reduced muscle glycogen using a “train twice every second day” compared with a more traditional “train once daily” approach can enhance the acute training response (29) and markers representative of endurance training adaptation after short-term (3–10 wk) training interventions (8,16,30). Of note is that the superior training adaptation in these previous studies was attained despite a reduction in maximal self-selected power output (16,30). The most obvious factor underlying the reduced intensity during a second training bout is the reduction in muscle glycogen availability. However, there is also the possibility that other metabolic and/or neural factors may be responsible for the power drop-off observed when two exercise bouts are performed in close proximity. Regardless of the precise mechanism(s), there remains the intriguing possibility that the magnitude of training adaptation previously reported in the face of a reduced training intensity (Hulston et al. (16) and Yeo et al.) might be further augmented, and/or other aspects of the training stimulus better preserved, if power output was not compromised. 
Caffeine ingestion is a possible strategy that might “rescue” the aforementioned reduction in power output that occurs when individuals commence high-intensity interval training (HIT) with low compared with normal glycogen availability. Recent evidence suggests that, at least in endurance-based events, the maximal benefits of caffeine are seen at small to moderate (2–3 mg·kg-1 body mass (BM)) doses (for reviews, see Refs. (3,24)). Accordingly, in this study, we aimed to determine the effect of a low dose of caffeine (3 mg·kg-1 BM) on maximal self-selected power output during HIT commenced with either normal (NORM) or low (LOW) muscle glycogen availability. We hypothesized that even under conditions of low glycogen availability, caffeine would increase maximal self-selected power output and thereby partially rescue the reduction in training intensity observed when individuals commence HIT with low glycogen availability.
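The reported main effects can be combined arithmetically to see why caffeine only partially rescues power output. Taking NORM + placebo as 100% (an arbitrary normalisation; the percentages are those stated in the abstract):

```python
# Sketch: combine the reported main effects of glycogen status and caffeine.
norm_plbo = 100.0                   # NORM + placebo, normalised baseline
low_plbo = norm_plbo * (1 - 0.08)   # LOW reduced power by ~8%
norm_caff = norm_plbo * 1.028       # caffeine: +2.8% under NORM
low_caff = low_plbo * 1.035         # caffeine: +3.5% under LOW

print(round(low_caff, 1))   # 95.2: still below the NORM + placebo baseline
print(low_caff < norm_plbo) # True: caffeine closes only part of the gap
```

Under this simple multiplicative reading, LOW + caffeine sits around 95% of the NORM + placebo level, consistent with the conclusion that power output was not fully restored.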

Relevance:

20.00%

Publisher:

Abstract:

This research investigates the decision making processes of individuals from revealed preferences in extreme environments or life-and-death situations, from a behavioral economics perspective. The empirical analysis of revealed behavioral preferences shows how the individual decision making process can deviate from the standard self-interested or "homo economicus" model in non-standard situations. The environments examined include: elite athletes in FIFA World and Euro Cups; climbing on Everest and the Himalaya; communication during 9/11; and risk seeking after the 2011 Brisbane floods. The results reveal that the interaction of culture and environment has a significant impact on the decision process, as social behaviors and institutions, which govern the processes of human behavior and interaction, are intimately intertwined. Additionally, risk attitudes are not fixed, and immediate environmental factors can induce a significant shift in an individual's risk-seeking behaviors.

Relevance:

20.00%

Publisher:

Abstract:

Climate change is expected to be one of the biggest global health threats in the 21st century. In response to changes in climate and associated extreme events, public health adaptation has become imperative. This thesis examined several key issues in this emerging research field. The thesis aimed to identify the climate-health (particularly temperature-health) relationships, then develop quantitative models that can be used to project future health impacts of climate change, and therefore help formulate adaptation strategies for dealing with climate-related health risks and reducing vulnerability. The research questions addressed by this thesis were: (1) What are the barriers to public health adaptation to climate change? What are the research priorities in this emerging field? (2) What models and frameworks can be used to project future temperature-related mortality under different climate change scenarios? (3) What is the actual burden of temperature-related mortality? What are the impacts of climate change on future burden of disease? and (4) Can we develop public health adaptation strategies to manage the health effects of temperature in response to climate change? Using a literature review, I discussed how public health organisations should implement and manage the process of planned adaptation. This review showed that public health adaptation can operate at two levels: building adaptive capacity and implementing adaptation actions. However, there are constraints and barriers to adaptation arising from uncertainty, cost, technologic limits, institutional arrangements, deficits of social capital, and individual perception of risks. The opportunities for planning and implementing public health adaptation are reliant on effective strategies to overcome likely barriers. 
I proposed that high priority should be given to multidisciplinary research on the assessment of potential health effects of climate change, projections of future health impacts under different climate and socio-economic scenarios, identification of health co-benefits of climate change policies, and evaluation of cost-effective public health adaptation options. Heat-related mortality is the most direct and highly significant potential climate change impact on human health. I thus conducted a systematic review of research and methods for projecting future heat-related mortality under different climate change scenarios. The review showed that climate change is likely to result in a substantial increase in heat-related mortality. Projecting heat-related mortality requires understanding of historical temperature-mortality relationships, and consideration of future changes in climate, population and acclimatisation. Further research is needed to provide a stronger theoretical framework for mortality projections, including a better understanding of socioeconomic development, adaptation strategies, land-use patterns, air pollution and mortality displacement. Most previous studies were designed to examine temperature-related excess deaths or mortality risks. However, if most temperature-related deaths occur in the very elderly, who have only a short life expectancy, then the burden of temperature on mortality would have less public health importance. To guide policy decisions and resource allocation, it is desirable to know the actual burden of temperature-related mortality. To achieve this, I used years of life lost to provide a new measure of the health effects of temperature. I conducted a time-series analysis to estimate years of life lost associated with changes in season and temperature in Brisbane, Australia. I also projected the future temperature-related years of life lost attributable to climate change.
This study showed that the association between temperature and years of life lost was U-shaped, with increased years of life lost on cold and hot days. The temperature-related years of life lost will worsen greatly if future climate change goes beyond a 2 °C increase and there is no adaptation to higher temperatures. The excess mortality during prolonged extreme temperatures is often greater than that predicted using a smoothed temperature-mortality association. This is because sustained periods of extreme temperatures produce an extra effect beyond that predicted by daily temperatures. To better estimate the burden of extreme temperatures, I estimated their effects on years of life lost due to cardiovascular disease using data from Brisbane, Australia. The results showed that the association between daily mean temperature and years of life lost due to cardiovascular disease was U-shaped, with the lowest years of life lost at 24 °C (the 75th percentile of daily mean temperature in Brisbane), rising progressively as temperatures become hotter or colder. There were significant added effects of heat waves, but no added effects of cold spells. Finally, public health adaptation to hot weather is necessary and pressing. I discussed how to manage the health effects of temperature, especially in the context of climate change. Strategies to minimise the health effects of high temperatures and climate change fall into two categories: reducing heat exposure and managing the health effects of high temperatures. However, policy decisions need information on specific adaptations, together with their expected costs and benefits. Therefore, more research is needed to evaluate cost-effective adaptation options. In summary, this thesis adds to the large body of literature on the impacts of temperature and climate change on human health. It improves our understanding of the temperature-health relationship, and how this relationship will change as temperatures increase.
Although the research is limited to one city, which restricts the generalisability of the findings, the methods and approaches developed in this thesis will be useful to other researchers studying temperature-health relationships and climate change impacts. The results may be helpful for decision-makers who develop public health adaptation strategies to minimise the health effects of extreme temperatures and climate change.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

GO423 was initiated in 2012 as part of a community effort to ensure the vitality of the Queensland games sector. In common with other industrialised nations, the game industry in Australia is a reasonably significant contributor to Gross National Product (GNP). Games are played in 92% of Australian homes, and the average adult player has been playing them for at least twelve years, with 26% playing for more than thirty years (Brand, 2011). Like the games and interactive entertainment industries in other countries, the Australian industry has its roots in the small-team model of the 1980s. For example, Beam Software, established in Melbourne in 1980, was started by two people, and Krome Studios was started in 1999 by three. Both companies grew to employ over 100 people in their heydays (considered large by Antipodean standards), not by producing their own intellectual property (IP) but by generating content for offshore parent companies. Thus our bigger companies grew on a model of service provision and tended not to generate their own IP (Darchen, 2012). There are some notable exceptions where IP has originated locally and been acquired by international companies, but in the case of some of the works of which we are most proud, the Australian company took on the role of “Night Elf” – a convenience due to the affordances of the time zone, which allowed our companies to work while the parent companies slept in a different time zone. In the post-GFC climate, the strong Australian dollar and the vulnerability of such service provision mean that job security is virtually non-existent, with employees invariably on short-term contracts. These issues are exacerbated by the decline of middle-ground games (those which fall between triple-A titles and the smaller games often produced for a casual audience). 
The response to this state of affairs has been a change in the Australian games industry: a new recognition of its identity as part of a wider cultural sector, and the rise (or return) of an increasing number of small independent game development companies. ‘Indies’ consist of small teams, often making games for mobile and casual platforms, that depend on producing at least one if not two games a year and that often explore more radical definitions of games as designed cultural objects. The need for innovation and creativity in the Australian context is seen as a vital aspect of the current changing scene, where the emphasis on the large-studio production model is giving way to an emerging cultural-sector model in which small independent teams work to shorter design and production schedules driven by digital distribution. In terms of Quality of Life (QoL), this new digital distribution brings with it the danger of ‘digital isolation’: a studio can work from home and deliver from home. Community events thus become increasingly important. The GO423 Symposium is a response to these perceived needs, and the event is based on the understanding that our new small creative teams depend on the local community of practice in no small way. GO423 thus offers local industry participants the opportunity to talk to each other about their work, to talk to potential new members, and to show off their work in a small, intimate setting, encouraging both feedback and support.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

This paper examines the rise in the politicisation of Islam in Malaysia and links it to the othering of the Malaysian Malay. It is my argument that both were “conquering” tools of Malaysia’s “Father of Modernisation”, Mahathir Mohamad, devised to win the support of the Malay Muslim majority in Malaysia. The many awards bestowed on Mahathir obscure the fact that he was instrumental in the systematic erosion of the power and roles of state institutions, especially at the federal government level. This includes the significant loss of the independence of the Malaysian judiciary. While per capita income in Malaysia may well have increased eightfold under his 22-year leadership, this paper asks why the Malays remain the largest group among the poor and the more disenfranchised ethnicities in the country, and why Malay and Muslim women have suffered such a rapidly decreasing ability to access justice. The paper examines existing research on the social and political changes Malaysia has experienced under Islamisation and Mahathir’s rule, as well as studies on Malayness, Malay nationalism and Muslim Malay identity formation. It elaborates the othering of a majority people, the Malays in Malaysia, and how this othering has brought forth a fast-growing political power in the name of a supremacist Islam: a puritanical, Sunni and Malay Islam. Specific events in the rise and rule of Mahathir as Malaysia’s then Prime Minister are reviewed, such as the banning of The Malay Dilemma and the split in the United Malays National Organisation (UMNO) in 1987. Also examined is the varying emphasis between Muslim and race, and how, during Mahathir’s rule, strong misogynist and patriarchal attitudes took hold in Malay Muslim consciousness, a colonising consciousness that is othering the perceived cultural and genetic “impurities” within the Malay.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Objective: To evaluate the effectiveness of the 7-valent pneumococcal conjugate vaccine (PCV7) in preventing pneumonia, diagnosed radiologically according to World Health Organization (WHO) criteria, among Indigenous infants in the Northern Territory of Australia. Methods: We conducted a historical cohort study of consecutive Indigenous birth cohorts between 1 April 1998 and 28 February 2005. Children were followed up to 18 months of age. The PCV7 programme commenced on 1 June 2001. All chest X-rays taken within 3 days of any hospitalization were assessed. The primary endpoint was a first episode of WHO-defined pneumonia requiring hospitalization. Cox proportional hazards models were used to compare disease incidence. Findings: There were 526 pneumonia events among 10 600 children, an incidence of 3.3 per 1000 child-months; 183 episodes (34.8%) occurred before 5 months of age and 247 (47.0%) by 7 months. Of the children studied, 27% had received 3 doses of vaccine by 7 months of age. Hazard ratios for endpoint pneumonia were 1.01 for 1 versus 0 doses; 1.03 for 2 versus 0 doses; and 0.84 for 3 versus 0 doses. Conclusion: There was limited evidence that PCV7 reduced the incidence of radiologically confirmed pneumonia among Northern Territory Indigenous infants, although there was a non-significant trend towards an effect after receipt of the third dose. These findings might be explained by a lack of timely vaccination and/or the occurrence of disease at an early age. Additionally, the relative contribution of vaccine-type pneumococcus to severe pneumonia in a setting where multiple other pathogens are prevalent may differ from that in settings where vaccine efficacy has been clearly established.
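As a back-of-envelope check, the proportions and incidence reported in this abstract can be reproduced from the stated counts. This is a sketch only: the total child-months of observation are not given in the abstract and are inferred here from the reported incidence, which is an assumption.

```python
# Back-of-envelope check of the cohort figures reported in the abstract.
total_events = 526   # first episodes of WHO-defined pneumonia
children = 10_600    # cohort size
incidence = 3.3      # reported events per 1000 child-months

# Proportions of episodes by age, as reported
before_5_months = 183
by_7_months = 247
print(round(before_5_months / total_events * 100, 1))  # 34.8
print(round(by_7_months / total_events * 100, 1))      # 47.0

# Implied person-time (assumption: derived from reported incidence):
# events / incidence * 1000 gives total child-months of observation.
child_months = total_events / incidence * 1000
print(round(child_months / children, 1))  # about 15 months of follow-up per child
```

The implied average follow-up of roughly 15 months per child is consistent with the stated follow-up to 18 months of age.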