343 results for requirement


Relevance: 10.00%

Abstract:

This paper presents the outcomes of a study which focused on evaluating roof surfaces as stormwater harvesting catchments. Build-up and wash-off samples were collected from model roof surfaces. The collected build-up samples were separated into five different particle size ranges prior to the analysis of physico-chemical parameters. Study outcomes showed that roof surfaces are efficient catchment surfaces for the deposition of fine particles which travel over long distances. Roof surfaces contribute relatively high pollutant loads to the runoff and hence significantly influence the quality of the harvested rainwater. Pollutants associated with solids build-up on roof surfaces can vary with time, even with minimal changes to total solids load and particle size distribution. It is postulated that this variability is due to changes in distant atmospheric pollutant sources and wind patterns. The study highlighted the requirement for first flush devices to divert the highly polluted initial portion of roof runoff. Furthermore, it is highly recommended not to harvest runoff from low-intensity rainfall events, since there is a high possibility that the runoff would contain a significant amount of pollutants even after the initial runoff fraction.
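
First flush diversion is commonly sized as an initial depth of runoff over the roof plan area. A minimal sketch of that arithmetic, assuming an illustrative diversion depth (the abstract recommends diversion but does not fix a value):

```python
# Minimal first-flush sizing sketch. The diversion depth (mm) is an
# illustrative assumption; the abstract recommends diversion but does
# not specify a depth.

def first_flush_volume_litres(roof_area_m2: float, diversion_depth_mm: float) -> float:
    """Volume of initial roof runoff to divert before harvesting.

    1 mm of runoff over 1 m^2 equals 1 litre.
    """
    return roof_area_m2 * diversion_depth_mm

# Example: a 200 m^2 roof with a hypothetical 0.5 mm diversion depth
print(first_flush_volume_litres(200, 0.5))  # 100.0 litres diverted
```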

Relevance: 10.00%

Abstract:

Minimizing complexity of group key exchange (GKE) protocols is an important milestone towards their practical deployment. An interesting approach to achieve this goal is to simplify the design of GKE protocols by using generic building blocks. In this paper we investigate the possibility of founding GKE protocols based on a primitive called multi key encapsulation mechanism (mKEM) and describe advantages and limitations of this approach. In particular, we show how to design a one-round GKE protocol which satisfies the classical requirement of authenticated key exchange (AKE) security, yet without forward secrecy. As a result, we obtain the first one-round GKE protocol secure in the standard model. We also conduct our analysis using recent formal models that take into account both outsider and insider attacks as well as the notion of key compromise impersonation resilience (KCIR). In contrast to previous models we show how to model both outsider and insider KCIR within the definition of mutual authentication. Our analysis additionally implies that the insider security compiler by Katz and Shin from ACM CCS 2005 can be used to achieve more than what is shown in the original work, namely both outsider and insider KCIR.
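
For intuition, here is a hedged conceptual sketch, not the paper's actual construction, of why an mKEM naturally yields a one-round GKE: each party broadcasts a single mKEM encapsulation of a random contribution to all other parties, and the session key is derived from everyone's contributions. The ToyMKEM class below is a deliberately insecure stand-in so the message flow stays visible:

```python
# Conceptual sketch only: one-round GKE from an mKEM. Each party sends
# exactly one broadcast, so the protocol completes in a single round.
import os
import hashlib

class ToyMKEM:
    """Insecure stand-in for a real mKEM. A real mKEM produces one
    ciphertext that every listed recipient can decapsulate with their
    long-term secret key; here the 'ciphertext' is the plaintext."""
    def encaps(self, recipient_ids, contribution):
        return {"recipients": recipient_ids, "ct": contribution}
    def decaps(self, secret_key, encapsulation):
        return encapsulation["ct"]

def derive_session_key(party_ids, contributions):
    """KDF over all parties' contributions, in a canonical order."""
    h = hashlib.sha256()
    for pid in sorted(party_ids):
        h.update(pid.encode())
        h.update(contributions[pid])
    return h.hexdigest()

mkem = ToyMKEM()
parties = ["A", "B", "C"]
contributions = {p: os.urandom(16) for p in parties}

# The single round: every party broadcasts one encapsulation.
broadcasts = {p: mkem.encaps([q for q in parties if q != p], contributions[p])
              for p in parties}

# Party A decapsulates everyone else's broadcast and derives the key.
recovered = {p: mkem.decaps(None, broadcasts[p]) for p in parties if p != "A"}
recovered["A"] = contributions["A"]
key_at_A = derive_session_key(parties, recovered)

# Because contributions are encapsulated under long-term keys, a later
# long-term key compromise reveals past session keys: AKE security
# without forward secrecy, as the abstract notes.
print(key_at_A)
```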

Relevance: 10.00%

Abstract:

Knowledge of the regulation of food intake is crucial to an understanding of body weight and obesity. Strictly speaking, we should refer to the control of food intake whose expression is modulated in the interests of the regulation of body weight. Food intake is controlled, body weight is regulated. However, this semantic distinction only serves to emphasize the importance of food intake. Traditionally food intake has been researched within the homeostatic approach to physiological systems pioneered by Claude Bernard, Walter Cannon and others; and because feeding is a form of behaviour, it forms part of what Curt Richter referred to as the behavioural regulation of body weight (or behavioural homeostasis). This approach views food intake as the vehicle for energy supply whose expression is modulated by a metabolic drive generated in response to a requirement for energy. The idea was that eating behaviour is stimulated and inhibited by internal signalling systems (for the drive and suppression of eating respectively) in order to regulate the internal environment (energy stores, tissue needs).

Relevance: 10.00%

Abstract:

Home Automation (HA) has emerged as a prominent field for researchers and investors confronting the challenge of penetrating the average home user market with products and services emerging from technology-based vision. In spite of many technology contributions, there is a latent demand for affordable and pragmatic assistive technologies for pro-active handling of complex lifestyle related problems faced by home users. This study has pioneered the development of an Initial Technology Roadmap for HA (ITRHA) that formulates a need-based vision of 10-15 years, identifying market, product and technology investment opportunities, focusing on those aspects of HA contributing to efficient management of home and personal life. The concept of Family Life Cycle is developed to understand the temporal needs of family. In order to formally describe a coherent set of family processes, their relationships, and interaction with external elements, a reference model named Family System is established that identifies External Entities, 7 major Family Processes, and 7 subsystems: Finance, Meals, Health, Education, Career, Housing, and Socialisation. Analysis of these subsystems reveals Soft, Hard and Hybrid processes. Rectifying the lack of formal methods for eliciting future user requirements and reassessing evolving market needs, this study has developed a novel method called Requirement Elicitation of Future Users by Systems Scenario (REFUSS), integrating process modelling and scenario technique within the framework of roadmapping. REFUSS is used to systematically derive process automation needs, relating the process knowledge to future user characteristics identified from scenarios created to visualise different futures with richly detailed information on lifestyle trends, thus enabling learning about the future requirements.

Revealing an addressable market size estimate of billions of dollars per annum, this research has developed innovative ideas on software-based products, including a Document Management System facilitating automated collection and easy retrieval of all documents, an Information Management System automating information services, and a Ubiquitous Intelligent System empowering the highly mobile home users with ambient intelligence. Other product ideas include the robotic devices of a versatile Kitchen Hand and Cleaner Arm that can be time saving. Materialisation of these products requires technology investment, initiating further research in areas of data extraction and information integration, as well as manipulation and perception, sensor actuator systems, tactile sensing, odour detection, and robotic controllers. This study recommends new policies on electronic data delivery from service providers as well as new standards on XML-based document structure and format.

Relevance: 10.00%

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy compression and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, that of maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
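
The coupling of a lossy VQ stage with a lossless stage over the emitted codebook indices can be illustrated in a few lines. A minimal sketch using plain full-search VQ (the thesis uses product-code VQ and real spectral parameters; the data, codebook size and dimensions below are placeholders):

```python
# Minimal sketch: quantize vectors against a codebook, then exploit the
# statistics of the emitted indices with a lossless stage. Plain VQ here;
# the thesis uses product-code VQ.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))        # 16 codewords, 4-dim vectors
frames = rng.normal(size=(1000, 4)) * 0.5  # stand-in spectral vectors

# Lossy stage: nearest-codeword search (full search; the fast-search
# methods evaluated in the thesis reduce this cost).
dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
indices = dists.argmin(axis=1)

# Lossless stage: a zero-order statistical model of the indices gives an
# entropy bound below the fixed log2(16) = 4 bits per index.
counts = np.bincount(indices, minlength=len(codebook))
p = counts[counts > 0] / len(indices)
entropy = -(p * np.log2(p)).sum()
print(f"fixed rate: 4.00 bits/index, entropy bound: {entropy:.2f} bits/index")
```

The gap between the fixed rate and the entropy bound is the kind of gain the index-modelling stage targets.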

Relevance: 10.00%

Abstract:

Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore represent a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. This process divides the infrastructure management process over time into self-contained modules that are based on a particular set of activities, the information flows between which are defined by the interfaces and relationships between them. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, through using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and are fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, utility functions being proposed where there is risk, or a trade-off situation applies. Variability is considered important in the infrastructure life cycle, the approach used being based on analytical principles but incorporating randomness in variables where required.
The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met. Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the need to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
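
The paired comparison weighting step can be made concrete with a small sketch. One common way to turn a paired-comparison matrix into weights (normalised row sums of a preference matrix) is shown below with illustrative objectives; the thesis's exact rationalisation and weighting procedure may differ:

```python
# Hedged sketch: derive objective weights from paired comparisons via
# normalised row sums. Objectives and preferences are illustrative.
import numpy as np

objectives = ["cost", "safety", "service life", "environment"]
# comparison[i, j] = 1 if objective i was preferred over j, 0.5 for a tie
# (so comparison[i, j] + comparison[j, i] = 1, diagonal = 0.5).
comparison = np.array([
    [0.5, 0.0, 1.0, 1.0],
    [1.0, 0.5, 1.0, 1.0],
    [0.0, 0.0, 0.5, 1.0],
    [0.0, 0.0, 0.0, 0.5],
])
weights = comparison.sum(axis=1) / comparison.sum()  # weights sum to 1
for name, w in zip(objectives, weights):
    print(f"{name}: {w:.2f}")
```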

Relevance: 10.00%

Abstract:

Since the 1960s, the value relevance of accounting information has been an important topic in accounting research. The value relevance research provides evidence as to whether accounting numbers relate to corporate value in a predicted manner (Beaver, 2002). Such research is not only important for investors but also provides useful insights into accounting reporting effectiveness for standard setters and other users. Both the quality of accounting standards used and the effectiveness associated with implementing these standards are fundamental prerequisites for high value relevance (Hellstrom, 2006). However, while the literature comprehensively documents the value relevance of accounting information in developed markets, little attention has been given to emerging markets where the quality of accounting standards and their enforcement are questionable. Moreover, there is currently no known research that explores the association between level of compliance with International Financial Reporting Standards (IFRS) and the value relevance of accounting information. Motivated by the lack of research on the value relevance of accounting information in emerging markets and the unique institutional setting in Kuwait, this study has three objectives. First, it investigates the extent of compliance with IFRS with respect to firms listed on the Kuwait Stock Exchange (KSE). Second, it examines the value relevance of accounting information produced by KSE-listed firms over the 1995 to 2006 period. The third objective links the first two and explores the association between the level of compliance with IFRS and the value relevance of accounting information to market participants. Since it is among the first countries to adopt IFRS, Kuwait provides an ideal setting in which to explore these objectives. In addition, the Kuwaiti accounting environment provides an interesting regulatory context in which each KSE-listed firm is required to appoint at least two external auditors from separate auditing firms. Based on the research objectives, five research questions (RQs) are addressed. RQ1 and RQ2 aim to determine the extent to which KSE-listed firms comply with IFRS and factors contributing to variations in compliance levels. These factors include firm attributes (firm age, leverage, size, profitability, liquidity), the number of brand name (Big-4) auditing firms auditing a firm’s financial statements, and industry categorization. RQ3 and RQ4 address the value relevance of IFRS-based financial statements to investors. RQ5 addresses whether the level of compliance with IFRS contributes to the value relevance of accounting information provided to investors. Based on the potential improvement in value relevance from adopting and complying with IFRS, it is predicted that the higher the level of compliance with IFRS, the greater the value relevance of book values and earnings. The research design of the study consists of two parts. First, in accordance with prior disclosure research, the level of compliance with mandatory IFRS is examined using a disclosure index. Second, the value relevance of financial statement information, specifically, earnings and book value, is examined empirically using two valuation models: price and returns models. The combined empirical evidence that results from the application of both models provides comprehensive insights into value relevance of accounting information in an emerging market setting. 
Consistent with expectations, the results show the average level of compliance with IFRS mandatory disclosures for all KSE-listed firms in 2006 was 72.6 percent, indicating that KSE-listed firms generally did not fully comply with all requirements. Significant variations in the extent of compliance are observed among firms and across accounting standards. As predicted, older, highly leveraged, larger, and profitable KSE-listed firms are more likely to comply with IFRS required disclosures. Interestingly, significant differences in the level of compliance are observed across the three possible auditor combinations of two Big-4, two non-Big-4, and mixed audit firm types. The results for the price and returns models provide evidence that earnings and book values are significant factors in the valuation of KSE-listed firms during the 1995 to 2006 period. However, the results show that the value relevance of earnings and book values decreased significantly during that period, suggesting that investors rely less on financial statements, possibly due to the increased availability of non-financial-statement information sources. Notwithstanding this decline, a significant association is observed between the level of compliance with IFRS and the value relevance of earnings and book value to KSE investors. The findings make several important contributions. First, they raise concerns about the effectiveness of the regulatory body that oversees compliance with IFRS in Kuwait. Second, they challenge the effectiveness of the two-auditor requirement in promoting compliance with regulations, as well as the associated cost-benefit of this requirement for firms. Third, they provide the first known empirical evidence linking the level of IFRS compliance with the value relevance of financial statement information. Finally, the findings are relevant for standard setters and for their current review of KSE regulations. In particular, they highlight the importance of establishing and maintaining adequate monitoring and enforcement mechanisms to ensure compliance with accounting standards. In addition, the finding that stricter compliance with IFRS improves the value relevance of accounting information highlights the importance of full compliance with IFRS and not mere adoption.
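
The price model referred to above has a standard form in the value relevance literature: share price is regressed on earnings per share and book value per share, and the regression's explanatory power is read as value relevance. A minimal sketch on synthetic data (not the study's KSE sample):

```python
# Standard price-model specification used in value-relevance studies:
#   P_it = a0 + a1*EPS_it + a2*BVPS_it + e_it, fitted by OLS.
# The data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 200
eps = rng.normal(0.3, 0.1, n)            # earnings per share
bvps = rng.normal(2.0, 0.5, n)           # book value per share
price = 1.0 + 4.0 * eps + 0.8 * bvps + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), eps, bvps])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
resid = price - X @ coef
r2 = 1 - resid.var() / price.var()       # explanatory power = value relevance
print(f"a0={coef[0]:.2f} a1={coef[1]:.2f} a2={coef[2]:.2f} R^2={r2:.2f}")
```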

Relevance: 10.00%

Abstract:

This work investigates the computer modelling of the photochemical formation of smog products such as ozone and aerosol, in a system containing toluene, NOx and water vapour. In particular, the problem of modelling this process in the Commonwealth Scientific and Industrial Research Organization (CSIRO) smog chambers, which utilize outdoor exposure, is addressed. The primary requirement for such modelling is a knowledge of the photolytic rate coefficients. Photolytic rate coefficients of species other than NO2 are often related to JNO2 (the rate coefficient for the photolysis of NO2) by a simple factor, but for outdoor chambers this method is prone to error, as the diurnal profiles may not be similar in shape. Three methods for the calculation of diurnal JNO2 are investigated. The most suitable method for incorporation into a general model is found to be one which determines the photolytic rate coefficients for NO2, as well as several other species, from actinic flux, absorption cross section and quantum yields. A computer model was developed, based on this method, to calculate in-chamber photolysis rate coefficients for the CSIRO smog chambers, in which ex-chamber rate coefficients are adjusted by accounting for variation in light intensity by transmittance through the Teflon walls, albedo from the chamber floor and radiation attenuation due to clouds. The photochemical formation of secondary aerosol is investigated in a series of toluene-NOx experiments, which were performed in the CSIRO smog chambers. Three stages of aerosol formation, in plots of total particulate volume versus time, are identified: a delay period in which no significant mass of aerosol is formed, a regime of rapid aerosol formation (regime 1) and a second regime of slowed aerosol formation (regime 2). Two models are presented which were developed from the experimental data. One model is empirically based on observations of discrete stages of aerosol formation and readily allows aerosol growth profiles to be calculated. The second model is based on an adaptation of published toluene photooxidation mechanisms and provides some chemical information about the oxidation products. Both models compare favorably against the experimental data. The gross effects of precursor concentrations (toluene, NOx and H2O) and ambient conditions (temperature, photolysis rate) on the formation of secondary aerosol are also investigated, primarily using the mechanism model. An increase in [NOx]o results in increased delay time, rate of aerosol formation in regime 1 and volume of aerosol formed in regime 1. This is due to increased formation of dinitrocresol and furanone products. An increase in toluene results in a decrease in the delay time and an increase in the rate of aerosol formation in regime 1, due to enhanced reactivity from the toluene products, such as the radicals from the photolysis of benzaldehyde. Water vapour has very little effect on the formation of aerosol volume, except that rates are slightly increased due to more OH radicals from reaction with O(1D) from ozone photolysis. Increased temperature results in increased volume of aerosol formed in regime 1 (increased dinitrocresol formation), while increased photolysis rate results in increased rate of aerosol formation in regime 1. Both the rate and volume of aerosol formed in regime 2 are increased by increased temperature or photolysis rate.

Both models indicate that the yield of secondary particulates from hydrocarbons (mass concentration of aerosol formed/mass concentration of hydrocarbon precursor) is proportional to the ratio [NOx]o/[hydrocarbon]o.
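
The photolysis-rate method adopted in the study computes J as the wavelength integral of actinic flux F(λ), absorption cross section σ(λ) and quantum yield φ(λ): J = ∫ F(λ)·σ(λ)·φ(λ) dλ. A minimal numerical sketch with placeholder spectra (not NO2 data):

```python
# J-value from actinic flux, cross section and quantum yield by numerical
# integration over wavelength. All spectra below are flat placeholders,
# not measured NO2 data.
import numpy as np

wavelength_nm = np.linspace(300, 420, 121)
actinic_flux = np.full_like(wavelength_nm, 1e14)    # photons cm^-2 s^-1 nm^-1
cross_section = np.full_like(wavelength_nm, 2e-19)  # cm^2 molecule^-1
quantum_yield = np.where(wavelength_nm < 398, 1.0, 0.0)  # illustrative cutoff

j_value = np.trapz(actinic_flux * cross_section * quantum_yield,
                   wavelength_nm)                   # s^-1
print(f"J = {j_value:.2e} s^-1")
```

In-chamber values would then scale this ex-chamber J for wall transmittance, floor albedo and cloud attenuation, as the abstract describes.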

Relevance: 10.00%

Abstract:

The topic of the present work is to study the relationship between the power of the learning algorithms on the one hand, and the expressive power of the logical language which is used to represent the problems to be learned on the other hand. The central question is whether enriching the language results in more learning power. In order to make the question relevant and nontrivial, it is required that both texts (sequences of data) and hypotheses (guesses) be translatable from the “rich” language into the “poor” one. The issue is considered for several logical languages suitable to describe structures whose domain is the set of natural numbers. It is shown that enriching the language does not give any advantage for those languages which define a monadic second-order language being decidable in the following sense: there is a fixed interpretation in the structure of natural numbers such that the set of sentences of this extended language true in that structure is decidable. But enriching the original language even by only one constant gives an advantage if this language contains a binary function symbol (which will be interpreted as addition). Furthermore, it is shown that behaviourally correct learning has exactly the same power as learning in the limit for those languages which define a monadic second-order language with the property given above, but has more power in case of languages containing a binary function symbol. Adding the natural requirement that the set of all structures to be learned is recursively enumerable, it is shown that it pays off to enrich the language of arithmetics for both finite learning and learning in the limit, but it does not pay off to enrich the language for behaviourally correct learning.
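
As a toy illustration of "learning in the limit" (the criterion the abstract compares against behaviourally correct learning), consider the class of languages L_n = {0, ..., n}: a learner that guesses the maximum element seen so far converges, on any text for L_n, to a single correct hypothesis and never changes it again:

```python
# Gold-style identification in the limit for the toy class
# L_n = {0, ..., n}: the hypothesis stabilises once the largest
# element of the target language has appeared in the text.
def learner(text):
    seen_max = 0
    for datum in text:
        seen_max = max(seen_max, datum)
        yield f"L = {{0,...,{seen_max}}}"   # current hypothesis

for hypothesis in learner([2, 0, 5, 3, 5, 1]):
    print(hypothesis)   # stabilises on L = {0,...,5}
```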

Relevance: 10.00%

Abstract:

This paper proposes a hybrid discontinuous control methodology for a voltage source converter (VSC) used in an uninterruptible power supply (UPS) application. The UPS controls the voltage at the point of common coupling (PCC). An LC filter is connected at the output of the VSC to bypass switching harmonics. With the help of both filter inductor current and filter capacitor voltage control, the voltage across the filter capacitor is controlled. Based on the voltage error, the control is switched between current and voltage control modes. In this scheme, an extra diode state is used that makes the VSC output current discontinuous. This diode state reduces the switching losses. The UPS controls the active power it supplies to a three-phase, four-wire distribution system. This gives the grid full flexibility to buy power from the UPS system depending on its cost and load requirement at any given time. The scheme is validated through simulation using PSCAD.
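
A hedged sketch of the mode-switching logic described above: the controller regulates the filter capacitor voltage, but when the voltage error exceeds a bound it hands over to the filter inductor current loop. Thresholds, gains and signal names are illustrative assumptions; the paper's scheme (including the extra diode state) is validated in PSCAD:

```python
# Illustrative mode-switching control step; all constants are assumptions.

VOLTAGE_ERROR_BOUND = 0.05  # per unit, hypothetical switching threshold

def select_mode(v_ref, v_cap):
    """Voltage control near the reference, current control otherwise."""
    return "voltage" if abs(v_ref - v_cap) <= VOLTAGE_ERROR_BOUND else "current"

def control_step(v_ref, v_cap, i_ref, i_ind, kv=2.0, ki=5.0):
    mode = select_mode(v_ref, v_cap)
    if mode == "voltage":
        command = kv * (v_ref - v_cap)   # regulate capacitor voltage
    else:
        command = ki * (i_ref - i_ind)   # track/limit inductor current
    return mode, command

print(control_step(1.00, 0.99, 0.5, 0.48))  # small error: voltage mode
print(control_step(1.00, 0.90, 0.5, 0.48))  # large error: current mode
```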

Relevance: 10.00%

Abstract:

Construction sector application of Lead Indicators generally, and Positive Performance Indicators (PPIs) particularly, is largely seen by the sector as not providing generalizable indicators of safety effectiveness. Similarly, safety culture is often cited as an essential factor in improving safety performance, yet there is no known reliable way of measuring safety culture. This paper proposes that the accurate measurement of safety effectiveness and safety culture is a requirement for assessing safe behaviours, safety knowledge, effective communication and safety performance. Currently there are no standard national or international safety effectiveness indicators (SEIs) that are accepted by the construction industry. The challenge is that quantitative survey instruments developed for measuring safety culture and/or safety climate are methodologically flawed and do not produce reliable and representative data concerning attitudes to safety. Measures that combine quantitative and qualitative components are needed to provide clear utility for safety effectiveness indicators.

Relevance: 10.00%

Abstract:

Nitrous oxide (N2O) is a potent agricultural greenhouse gas (GHG). More than 50% of the global anthropogenic N2O flux is attributable to emissions from soil, primarily due to large fertilizer nitrogen (N) applications to corn and other non-leguminous crops. Quantification of the trade-offs between N2O emissions, fertilizer N rate, and crop yield is an essential requirement for informing management strategies aiming to reduce the agricultural sector GHG burden, without compromising productivity and producer livelihood. There is currently great interest in developing and implementing agricultural GHG reduction offset projects for inclusion within carbon offset markets. Nitrous oxide, with a global warming potential (GWP) of 298, is a major target for these endeavours due to the high payback associated with its emission prevention. In this paper we use robust quantitative relationships between fertilizer N rate and N2O emissions, along with a recently developed approach for determining economically profitable N rates for optimized crop yield, to propose a simple, transparent, and robust N2O emission reduction protocol (NERP) for generating agricultural GHG emission reduction credits. This NERP has the advantage of providing an economic and environmental incentive for producers and other stakeholders, necessary requirements in the implementation of agricultural offset projects.
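
The credit arithmetic a NERP implies can be sketched from the quantities in the abstract: an N2O reduction from lowering the fertilizer N rate, converted to CO2-equivalents with GWP = 298. The exponential N-rate response below is an assumed stand-in for the paper's fitted relationship, and its parameters are illustrative:

```python
# Hedged sketch of NERP credit arithmetic. GWP = 298 comes from the
# abstract; the emission-response function and its parameters are
# illustrative assumptions, not the paper's fitted values.
import math

GWP_N2O = 298                      # global warming potential of N2O
N2O_N_TO_N2O = 44.0 / 28.0         # mass conversion, N2O-N -> N2O

def n2o_n_emission(n_rate_kg_ha):
    """Assumed exponential N2O-N response to N rate (kg N2O-N/ha/yr)."""
    return 0.5 * math.exp(0.0077 * n_rate_kg_ha)   # illustrative parameters

def credit_t_co2e_per_ha(business_as_usual_rate, project_rate):
    reduction_kg = (n2o_n_emission(business_as_usual_rate)
                    - n2o_n_emission(project_rate)) * N2O_N_TO_N2O
    return reduction_kg * GWP_N2O / 1000.0          # tonnes CO2e per ha

# Example: cutting the N rate from 180 to 150 kg N/ha
print(f"{credit_t_co2e_per_ha(180, 150):.2f} t CO2e/ha/yr")
```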

Relevance: 10.00%

Abstract:

Australian privacy law regulates how government agencies and private sector organisations collect, store and use personal information. A coherent conceptual basis of personal information is an integral requirement of information privacy law as it determines what information is regulated. A 2004 report conducted on behalf of the UK’s Information Commissioner (the 'Booth Report') concluded that there was no coherent definition of personal information currently in operation because different data protection authorities throughout the world conceived the concept of personal information in different ways. The authors adopt the models developed by the Booth Report to examine the conceptual basis of statutory definitions of personal information in Australian privacy laws. Research findings indicate that the definition of personal information is not construed uniformly in Australian privacy laws and that different definitions rely upon different classifications of personal information. A similar situation is evident in a review of relevant case law. Despite this, the authors conclude the article by asserting that a greater jurisprudential discourse is required based on a coherent conceptual framework to ensure the consistent development of Australian privacy law.

Relevance: 10.00%

Abstract:

Several sets of changes have been made to motorcycle licensing in Queensland since 2007, with the aim of improving the safety of novice riders. These include a requirement that a motorcycle learner licence applicant must have held a provisional or open car licence for 12 months, and imposing a 3 year limit for learner licence renewal. Additionally, a requirement to hold an RE (250 cc limited) class licence for a period of 12 months prior to progressing to an R class licence was introduced for Q-RIDE. This paper presents analyses of licensing transaction data that examine the effects of the licensing changes on the duration that the learner licence was held, factors affecting this duration and the extent to which the demographic characteristics of learner licence holders changed. The likely safety implications of the observed changes are discussed.
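
The duration analysis described is, at its core, a comparison of how long learner licences were held before and after each change. A minimal sketch of that comparison; the file and column names are hypothetical:

```python
# Minimal sketch of the kind of duration analysis the paper describes:
# compare learner-licence holding durations before and after the 2007
# changes. The CSV file and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("licence_transactions.csv",      # hypothetical file
                 parse_dates=["learner_issued", "progressed"])
df["days_held"] = (df["progressed"] - df["learner_issued"]).dt.days
df["period"] = df["learner_issued"].dt.year.map(
    lambda y: "pre-2007" if y < 2007 else "post-2007")
print(df.groupby("period")["days_held"].describe())
```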

Relevance: 10.00%

Abstract:

Executive summary

Objective: The aims of this study were to identify the impact of Pandemic (H1N1) 2009 Influenza on Australian Emergency Departments (EDs) and their staff, and to inform planning, preparedness, and response management arrangements for future pandemics, as well as managing infectious patients presenting to EDs in everyday practice.

Methods: This study involved three elements:

1. The first element of the study was an examination of published material including published statistics. Standard literature research methods were used to identify relevant published articles. In addition, data about ED demand was obtained from Australian Government Department of Health and Ageing (DoHA) publications, with several state health departments providing more detailed data.

2. The second element of the study was a survey of Directors of Emergency Medicine identified with the assistance of the Australasian College for Emergency Medicine (ACEM). This survey retrieved data about demand for ED services and elicited qualitative comments on the impact of the pandemic on ED management.

3. The third element of the study was a survey of ED staff. A questionnaire was emailed to members of three professional colleges: the ACEM; the Australian College of Emergency Nursing (ACEN); and the College of Emergency Nursing Australasia (CENA). The overall response rate for the survey was 18.4%, with 618 usable responses from 3355 distributed questionnaires. Topics covered by the survey included ED conditions during the (H1N1) 2009 influenza pandemic; information received about Pandemic (H1N1) 2009 Influenza; pandemic plans; the impact of the pandemic on ED staff with respect to stress; illness prevention measures; support received from others in work role; staff and others’ illness during the pandemic; other factors causing ED staff to miss work during the pandemic; and vaccination against Pandemic (H1N1) 2009 Influenza. Both qualitative and quantitative data were collected and analysed.

Results: The results obtained from Directors of Emergency Medicine quantifying the impact of the pandemic were too limited for interpretation. Data sourced from health departments and published sources demonstrated an increase in influenza-like illness (ILI) presentations of between one and a half and three times the normal level of presentations of ILIs. Directors of Emergency Medicine reported a reasonable level of preparation for the pandemic, with most reporting the use of pandemic plans that translated into relatively effective operational infection control responses. Directors reported a highly significant impact on EDs and their staff from the pandemic. Growth in demand and related ED congestion were highly significant factors causing distress within the departments. Most (64%) respondents established a ‘flu clinic’ either as part of the ED operations or external to it. They did not note a significantly higher rate of sick leave than usual. Responses relating to the impact on staff were proportional to the size of the colleges. Most respondents felt strongly that Pandemic (H1N1) 2009 Influenza had a significant impact on demand in their ED, with most patients having low levels of clinical urgency. Most respondents felt that the pandemic had a negative impact on the care of other patients, and 94% revealed some increase in stress due to lack of space for patients, increased demand, and filling staff deficits.
Levels of concern about themselves or their family members contracting the illness were less significant than expected. Nurses displayed significantly higher levels of stress overall, particularly in relation to skill-mix requirements, lack of supplies and equipment, and patient and patients’ family aggression. More than one-third of respondents became ill with an ILI. Whilst respondents themselves reported taking low levels of sick leave, respondents cited difficulties with replacing absent staff. Ranked from highest to lowest, respondents gained useful support from ED colleagues, ED administration, their hospital occupational health department, hospital administration, professional colleges, state health department, and their unions. Respondents were generally positive about the information they received overall; however, the volume of information was considered excessive and sometimes inconsistent. The media was criticised as scaremongering and sensationalist and as being the cause of many unnecessary presentations to EDs. Of concern to the investigators was that a large proportion (43%) of respondents did not know whether a pandemic plan existed for their department or hospital. A small number of staff reported being redeployed from their usual workplace for personal risk factors or operational reasons. As at the time of the survey (29 October to 18 December 2009), 26% of ED staff reported being vaccinated against Pandemic (H1N1) 2009 Influenza. Of those not vaccinated, half indicated they would ‘definitely’ or ‘probably’ not get vaccinated, with the main reasons being that the vaccine was ‘rushed into production’, ‘not properly tested’, ‘came out too late’, or not needed due to prior infection or exposure, or due to the mildness of the disease.

Conclusion: Pandemic (H1N1) 2009 Influenza had a significant impact on Australian Emergency Departments. The pandemic exposed problems in existing plans, particularly a lack of guidelines, general information overload, and confusion due to the lack of a single authoritative information source. Of concern was the high proportion of respondents who did not know if their hospital or department had a pandemic plan. Nationally, the pandemic communication strategy needs a detailed review, with more engagement with media networks to encourage responsible and consistent reporting. Also of concern was the low level of immunisation, and the low level of intention to accept vaccination. This is a problem seen in many previous studies relating to seasonal influenza and health care workers. The design of EDs needs to be addressed to better manage infectious patients. Significant workforce issues were confronted in this pandemic, including maintaining appropriate staffing levels; staff exposure to illness; access to, and appropriate use of, personal protective equipment (PPE); and the difficulties associated with working in PPE for prolonged periods. An administrative issue of note was the reporting requirement, which created considerable additional stress for staff within EDs. Peer and local support strategies helped ensure staff felt their needs were provided for, creating resilience, dependability, and stability in the ED workforce. Policies regarding the establishment of flu clinics need to be reviewed. The ability to create surge capacity within EDs by considering staffing, equipment, physical space, and stores is of primary importance for future pandemics.