904 results for Costing methodologies
Abstract:
Objective: To estimate the relative inpatient costs of hospital-acquired conditions. Methods: Patient-level costs were estimated using computerized costing systems that log individual utilization of inpatient services and apply sophisticated cost estimates from the hospital's general ledger. Occurrence of hospital-acquired conditions was identified using an Australian ‘condition-onset’ flag for diagnoses not present on admission. These were grouped to yield a comprehensive set of 144 categories of hospital-acquired conditions to summarize data coded with ICD-10. Standard linear regression techniques were used to identify the independent contribution of hospital-acquired conditions to costs, taking into account the case-mix of a sample of acute inpatients (n = 1,699,997) treated in Australian public hospitals in Victoria (2005/06) and Queensland (2006/07). Results: The most costly types of complications were post-procedure endocrine/metabolic disorders, adding AU$21,827 to the cost of an episode, followed by methicillin-resistant Staphylococcus aureus (MRSA) infection (AU$19,881) and enterocolitis due to Clostridium difficile (AU$19,743). Aggregate costs to the system, however, were highest for septicaemia (AU$41.4 million), complications of cardiac and vascular implants other than septicaemia (AU$28.7 million), acute lower respiratory infections, including influenza and pneumonia (AU$27.8 million), and urinary tract infection (UTI) (AU$24.7 million). Hospital-acquired complications are estimated to add 17.3% to treatment costs in this sample. Conclusions: Patient safety efforts frequently focus on dramatic but rare complications with very serious patient harm. Previous studies of the costs of adverse events have provided information on ‘indicators’ of safety problems rather than the full range of hospital-acquired conditions. Adding a cost dimension to priority-setting could result in changes to the focus of patient safety programmes and research.
Financial information should be combined with information on patient outcomes to allow for cost-utility evaluation of future interventions.
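The core estimation step described above can be illustrated with a minimal sketch. In the degenerate case of a single binary condition-onset flag plus an intercept, the OLS coefficient on the flag reduces to a difference in group means; the study's actual models additionally control for case-mix, and every figure below is a synthetic illustration, not data from the paper.

```python
# Minimal sketch (not the paper's model): OLS of episode cost on a single
# 0/1 condition-onset flag. With only an intercept and one binary regressor,
# the OLS slope equals mean(cost | flag=1) - mean(cost | flag=0).
# All numbers are synthetic.

def marginal_cost(costs, flags):
    """OLS slope of cost on a 0/1 condition-onset flag (with intercept)."""
    with_cond = [c for c, f in zip(costs, flags) if f == 1]
    without = [c for c, f in zip(costs, flags) if f == 0]
    return sum(with_cond) / len(with_cond) - sum(without) / len(without)

episode_costs = [4200, 3900, 4100, 25900, 24400]  # AU$, synthetic
onset_flags   = [0,    0,    0,    1,     1]      # 1 = not present on admission

extra = marginal_cost(episode_costs, onset_flags)
print(f"estimated additional cost: AU${extra:,.0f}")
```

In the study itself the regression is multivariate, so each condition's coefficient is its independent contribution after case-mix adjustment rather than a raw group difference.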
Abstract:
Group interaction within crowds is a common phenomenon and has great influence on pedestrian behaviour. This paper investigates the impact of passenger group dynamics using an agent-based simulation method for the outbound passenger process at airports. Unlike most passenger-flow models that treat passengers as individual agents, the proposed model additionally incorporates their group dynamics. The simulation compares passenger behaviour at airport processes and discretionary services under different group formations. Results from experiments (both qualitative and quantitative) show that incorporating group attributes, in particular the interactions with fellow travellers and wavers, can have a significant influence on passengers' activity preferences as well as the performance and utilisation of services in airport terminals. The model also provides a convenient way to investigate the effectiveness of airport space design and service allocations, which can contribute to positive passenger experiences. The model was created using AnyLogic software and its parameters were initialised using recent research data published in the literature.
Abstract:
Automotive interactive technologies represent an exemplar challenge for user experience (UX) designers, as the concerns for aesthetics, functionality and usability add up to the compelling issues of safety and cognitive demand. This extended abstract presents a methodology for the user-centred creation and evaluation of novel in-car applications, involving real users in realistic use settings. As a case study, we present the methodologies of an ideation workshop in a simulated environment and the evaluation of six design idea prototypes for in-vehicle head-up display (HUD) applications using a semi-naturalistic drive. Both methods rely on video recordings of real traffic situations that the users are familiar with and/or experienced themselves. The extended abstract presents experiences and results from the evaluation, and reflects on our methods.
Abstract:
Understanding pedestrian crash causes and contributing factors in developing countries is critically important as they account for about 55% of all traffic crashes. Not surprisingly, considerable attention in the literature has been paid to road traffic crash prediction models and methodologies in developing countries of late. Despite this interest, there are significant challenges confronting safety managers in developing countries. For example, in spite of the prominence of pedestrian crashes occurring on two-way two-lane rural roads, it has proven difficult to develop pedestrian crash prediction models due to a lack of both traffic and pedestrian exposure data. This general lack of available data has further hampered identification of pedestrian crash causes and subsequent estimation of pedestrian safety performance functions. The challenges are similar across developing nations, where little is known about the relationship between pedestrian crashes, traffic flow, and road environment variables on rural two-way roads, and where unique predictor variables may be needed to capture the unique crash risk circumstances. This paper describes pedestrian crash safety performance functions for two-way two-lane rural roads in Ethiopia as a function of traffic flow, pedestrian flows, and road geometry characteristics. In particular, a random-parameters negative binomial model was used to investigate pedestrian crashes. The models and their interpretations make important contributions to road crash analysis and prevention in developing countries. They also assist in the identification of the contributing factors to pedestrian crashes, with the intent to identify potential design and operational improvements.
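As background to the modelling choice above: the NB2 negative binomial is the standard workhorse for overdispersed crash counts, with variance mu + alpha*mu^2, recovering the Poisson as alpha approaches 0. The sketch below implements only the base NB2 log-pmf; the paper's random-parameters extension, in which coefficients vary across road segments, is not reproduced, and the parameter values are illustrative only.

```python
import math

def nb2_logpmf(y, mu, alpha):
    """Log-pmf of the NB2 negative binomial used in crash-frequency models.

    Mean mu, variance mu + alpha * mu**2; alpha -> 0 recovers the Poisson.
    Parameterised via r = 1/alpha successes and success probability r/(r+mu).
    """
    r = 1.0 / alpha
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(r / (r + mu)) + y * math.log(mu / (r + mu)))

# Illustrative values: a segment with expected 2 crashes/period, alpha = 0.5
probs = [math.exp(nb2_logpmf(y, 2.0, 0.5)) for y in range(10)]
print(probs[:3])
```

In the random-parameters version, a coefficient such as the effect of pedestrian flow would itself be drawn from a distribution estimated across segments, so each site gets its own mu.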
Abstract:
Organizational and technological systems analysis and design practices such as process modeling have received much attention in recent years. However, while knowledge about related artifacts such as models, tools, or grammars has substantially matured, little is known about the actual tasks and interaction activities that are conducted as part of analysis and design acts. In particular, the key role of the facilitator has not been researched extensively to date. In this paper, we propose a new conceptual framework that can be used to examine facilitation behaviors in process modeling projects. The framework distinguishes four behavioral styles in facilitation (the driving engineer, the driving artist, the catalyzing engineer, and the catalyzing artist) that a facilitator can adopt. To distinguish between the four styles, we provide a set of ten behavioral anchors that underpin facilitation behaviors. We also report on a preliminary empirical exploration of our framework through interviews with experienced analysts in six modeling cases. Our research provides a conceptual foundation for an emerging theory for describing and explaining different behaviors associated with process modeling facilitation, first preliminary empirical results about facilitation in modeling projects, and a fertile basis for examining facilitation in other conceptual modeling activities.
Abstract:
This chapter discusses the methodological aspects and empirical findings of a large-scale, funded project investigating public communication through social media in Australia. The project concentrates on Twitter, but we approach it as representative of broader current trends toward the integration of large datasets and computational methods into media and communication studies in general, and social media scholarship in particular. The research discussed in this chapter aims to empirically describe networks of affiliation and interest in the Australian Twittersphere, while reflecting on the methodological implications and imperatives of ‘big data’ in the humanities. Using custom network crawling technology, we have conducted a snowball crawl of Twitter accounts operated by Australian users to identify more than one million users and their follower/followee relationships, and have mapped their interconnections. In itself, the map provides an overview of the major clusters of densely interlinked users, largely centred on shared topics of interest (from politics through arts to sport) and/or sociodemographic factors (geographic origins, age groups). Our map of the Twittersphere is the first of its kind for the Australian part of the global Twitter network, and also provides a first independent and scholarly estimation of the size of the total Australian Twitter population. In combination with our investigation of participation patterns in specific thematic hashtags, the map also enables us to examine which areas of the underlying follower/followee network are activated in the discussion of specific current topics – allowing new insights into the extent to which particular topics and issues are of interest to specialised niches or to the Australian public more broadly. 
Specifically, we examine the Twittersphere footprint of dedicated political discussion, under the #auspol hashtag, and compare it with the heightened, broader interest in Australian politics during election campaigns, using #ausvotes; we explore the different patterns of Twitter activity across the map for major television events (the popular competitive cooking show #masterchef, the British #royalwedding, and the annual #stateoforigin Rugby League sporting contest); and we investigate the circulation of links to the articles published by a number of major Australian news organisations across the network. Such analysis, which combines the ‘big data’-informed map and a close reading of individual communicative phenomena, makes it possible to trace the dynamic formation and dissolution of issue publics against the backdrop of longer-term network connections, and the circulation of information across these follower/followee links. Such research sheds light on the communicative dynamics of Twitter as a space for mediated social interaction. Our work demonstrates the possibilities inherent in the current ‘computational turn’ (Berry, 2010) in the digital humanities, as well as adding to the development and critical examination of methodologies for dealing with ‘big data’ (boyd and Crawford, 2011). Our tools and methods for doing Twitter research, released under Creative Commons licences through our project website, provide the basis for replicable and verifiable digital humanities research on the processes of public communication which take place through this important new social network.
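The custom crawling technology itself is not specified in the abstract, but a snowball crawl of follower/followee relationships is at heart a breadth-first traversal seeded with known accounts. Below is a generic sketch, not the project's actual crawler, in which `followers_of` stands in for a hypothetical API call returning an account's followers.

```python
from collections import deque

def snowball_crawl(seeds, followers_of, max_users):
    """Breadth-first snowball crawl of a follower network.

    Starts from seed accounts and expands along follower edges until
    max_users accounts have been discovered. `followers_of` is a stand-in
    for a rate-limited API call (assumption, not a real client).
    Returns the discovered accounts and the (follower, followee) edges.
    """
    seen, queue, edges = set(seeds), deque(seeds), []
    while queue and len(seen) < max_users:
        user = queue.popleft()
        for other in followers_of(user):
            edges.append((other, user))  # `other` follows `user`
            if other not in seen and len(seen) < max_users:
                seen.add(other)
                queue.append(other)
    return seen, edges

# Toy follower graph standing in for API responses
graph = {"a": ["b", "c"], "b": ["c", "d"], "c": [], "d": ["e"], "e": []}
users, edges = snowball_crawl(["a"], lambda u: graph.get(u, []), max_users=10)
```

At the scale described in the chapter (over a million accounts), the same traversal would be distributed across workers and throttled against API rate limits, but the expansion logic is unchanged.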
Abstract:
Cube Jam is a project developed in response to the new and rising market of large-scale interactive public screens - the Cube being a premier site. Cube Jam will be a crossbreeding ‘think-ubator’ that rides on the back of the already nationally recognised Game On program and its digital communities. Via a bottom-up, non-directive approach, Cube Jam will facilitate a series of design provocations within co-located Jam Studios; studios that are focused on supporting adaptation and new ideation and concept design. These Studios will seek new combinations of skills and knowledges with the intention of discovering provotypes (provocative prototypes) of possibilities in both working and production methodologies and product outcomes.
Abstract:
Business processes are prone to continuous and unexpected changes. Process workers may start executing a process differently in order to adjust to changes in workload, season, guidelines or regulations, for example. Early detection of business process changes based on their event logs – also known as business process drift detection – enables analysts to identify and act upon changes that may otherwise affect process performance. Previous methods for business process drift detection are based on an exploration of a potentially large feature space, and in some cases they require users to manually identify the specific features that characterize the drift. Depending on the explored feature set, these methods may miss certain types of changes. This paper proposes a fully automated and statistically grounded method for detecting process drift. The core idea is to perform statistical tests over the distributions of runs observed in two consecutive time windows. By adaptively sizing the window, the method strikes a trade-off between classification accuracy and drift detection delay. A validation on synthetic and real-life logs shows that the method accurately detects typical change patterns and scales well enough to be applicable for online drift detection.
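The core idea above — a statistical test over the distributions of runs in two consecutive windows — can be sketched as follows. This is a simplified, fixed-window illustration with a hand-rolled chi-squared statistic; the adaptive window sizing and the paper's exact test are not reproduced, and the run labels are synthetic.

```python
from collections import Counter

def runs_chi2(window_a, window_b):
    """Chi-squared statistic comparing the distributions of process runs
    (traces reduced to some canonical form) in two adjacent windows.
    A large value signals that the run distribution has shifted, i.e. a
    candidate process drift."""
    ca, cb = Counter(window_a), Counter(window_b)
    na, nb = len(window_a), len(window_b)
    stat = 0.0
    for run in set(ca) | set(cb):
        # Expected counts under the no-drift hypothesis: pooled proportion
        # of this run, applied to each window's size.
        pooled = (ca[run] + cb[run]) / (na + nb)
        for observed, n in ((ca[run], na), (cb[run], nb)):
            expected = pooled * n
            stat += (observed - expected) ** 2 / expected
    return stat

before = ["ABCD"] * 80 + ["ABDC"] * 20  # runs observed before the change
after  = ["ABCD"] * 30 + ["ABDC"] * 70  # run distribution after the change
print(runs_chi2(before, before), runs_chi2(before, after))
```

In an online setting the two windows slide over the event stream and the statistic is compared against a critical value (e.g. 3.84 for one degree of freedom at the 5% level); the paper's adaptive sizing trades detection delay against test power.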
Abstract:
Natural disasters cause widespread disruption, costing the Australian economy $6.3 billion per year, and those costs are projected to rise incrementally to $23 billion by 2050. With more frequent natural disasters with greater consequences, Australian communities need the ability to prepare and plan for them, absorb and recover from them, and adapt more successfully to their effects. Enhancing Australian resilience will allow us to better anticipate disasters and assist in planning to reduce losses, rather than just waiting for the next king hit and paying for it afterwards. Given the scale of devastation, governments have been quick to pick up the pieces when major natural disasters hit. But this approach (‘The government will give you taxpayers’ money regardless of what you did to help yourself, and we’ll help you rebuild in the same risky area.’) has created a culture of dependence. This is unsustainable and costly. In 2008, ASPI published Taking a punch: building a more resilient Australia. That report emphasised the importance of strong leadership and coordination in disaster resilience policymaking, as well as the value of volunteers and family and individual preparation, in managing the effects of major disasters. This report offers a roadmap for enhancing Australia’s disaster resilience, building on the 2011 National Strategy for Disaster Resilience. It includes a snapshot of relevant issues and current resilience efforts in Australia, outlining key challenges and opportunities. The report sets out 11 recommendations to help guide Australia towards increasing national resilience, from individuals and local communities through to state and federal agencies.
Abstract:
It is widely recognized that Dorothy Heathcote was a dynamic and radical teacher who transformed and continually reinvented drama teaching. She did this by allowing her emerging thinking and understandings to flow from, and be tested by, regular and intensive ‘practicing’ in the classroom. In this way theoretical claims were grounded and evidenced in authentic classroom practice. And yet, for all her impact, it is rare to hear the claim that Heathcote’s pedagogic breakthroughs resulted from a legitimate research methodology. Clever and charismatic teaching, yes; research, no. One of the world’s best teachers certainly, but not a researcher; even though every lesson was experimental and every classroom was a site for discovery. This paper investigates that conundrum, firstly by acknowledging that Heathcote’s practice-led teaching approach to discovery did not map comfortably on to the established educational research traditions of the day. It argues that traditional research methodologies, with their well-established protocols and methods, could not understand or embrace a research process which does its work by creating ‘fictional realities’ of openness, allegory and uncertainty. In recent years, however, it can be seen that Heathcote’s practice-led teaching, so essential for advancing the field, closely aligns with what many contemporary researchers are now calling practice-led research or practice as research or, in many Nordic countries, artistic research. A form of performative research, practice-led research has not emerged from the field of education but rather from the creative arts. Seeking to develop ways of researching creative practice which are deeply sympathetic and respectful of that practice, artist-researchers have developed practice-led research “which is initiated in practice, where questions, problems, challenges are identified and formed by the needs of practice and practitioners” (Grey, 1996).
This sits comfortably with Heathcote’s classroom priority of “discovering by trial, error and testing; using available materials with respect for their nature, and being guided by this appreciation of their potential” (Heathcote, 1967). The paper will conclude by testing the dynamics of Heathcote’s practice-led teaching against the six conditions of practice-led research (Haseman & Mafe, 2011), a testing which will allow for a re-interpretation and re-housing of Dorothy Heathcote’s classroom-based teaching methodology as a form of performative research in its own right.
Abstract:
Study/Objective This program of research examines the effectiveness of legal mechanisms as motivators to maximise engagement and compliance with evacuation messages. This study is based on the understanding that the presence of legislative requirements, as well as sanctions and incentives encapsulated in law, can have a positive impact in achieving compliance. Our objective is to examine whether the current Australian legal frameworks, which incorporate evacuation during disasters, are an effective structure that is properly understood by those who enforce and those who are required to comply. Background In Australia, most jurisdictions have enacted legislation that encapsulates the power to evacuate and the ability to enforce compliance, either by the use of force or imposition of penalty. However, citizens still choose not to evacuate. Methods This program of research incorporates theoretical and doctrinal methodologies for reviewing literature and legislation in the Australian context. The aim of the research is to determine whether further clarity is required to create an understanding of the powers to evacuate, as well as greater public awareness of these powers. Results & Conclusion Legislators suggest that powers of evacuation can be ineffective if they are impractical to enforce. In Australia, there may also be confusion about which legislative instrument the power to evacuate derives from, and therefore whether there is a corresponding ability to enforce compliance through the use of force or imposition of a penalty. Equally, communities may lack awareness and understanding of the powers of agencies to enforce compliance. We seek to investigate whether this is the case, and whether, even if greater awareness existed, it would act as an incentive to comply.
Abstract:
The Queensland Health implementation project failure is the largest IS failure in the southern hemisphere to date, costing AU$1.25 billion. This case highlights the importance of systematically analysing project failure. It examines the case organization details, royal commission report, auditor-general report and 118 witness statements pertaining to the Queensland Health implementation project. The objective of this teaching case is (1) to illustrate the factors that contributed to Queensland Health's disastrous implementation project and (2) to understand the broader implications of this project failure for state and national legislation as well as industry sectors. The case narrative and teaching notes are appropriate for both undergraduate and postgraduate students studying IS and project management subjects.
Abstract:
Background Contemporary psychotherapy research demonstrates that whilst most clients respond positively to psychological interventions, a small but significant proportion of clients fail to experience the expected benefits of therapy. Although methodologies exist that enable the identification of successful and unsuccessful therapy, we have a limited understanding of the processes associated with these outcomes. Aim The current study sought to examine the relationship between therapeutic outcome and therapeutic language. Methodology The therapeutic outcomes of 42 trainee-therapists who provided psychotherapy to 173 clients were tracked with the OQ-45.2 over a 5-year period with a view to identifying the client/trainee-therapist dyads with the best and poorest outcomes. The 6 best outcome and 6 poorest outcome client/trainee-therapist dyads were identified in order to examine the characteristics of therapeutic conversations associated with better and poorer therapy outcomes. Therapeutic conversations were analysed with the Narrative Process Coding System. Findings The best outcome client/trainee-therapist dyads demonstrated significant increases in reflexive conversation over the course of psychotherapy. Implications Examining the practices of the best and poorest outcome client/trainee-therapist dyads with objective measures of therapy outcome provides an important first step in understanding how therapeutic language may contribute to the greatest therapeutic improvement or deterioration.
Abstract:
In this paper, we propose a highly reliable fault diagnosis scheme for incipient low-speed rolling element bearing failures. The scheme consists of fault feature calculation, discriminative fault feature analysis, and fault classification. The proposed approach first computes wavelet-based fault features, including the respective relative wavelet packet node energy and entropy, by applying a wavelet packet transform to an incoming acoustic emission signal. The most discriminative fault features are then filtered from the originally produced feature vector by using discriminative fault feature analysis based on a binary bat algorithm (BBA). Finally, the proposed approach employs one-against-all multiclass support vector machines to identify multiple low-speed rolling element bearing defects. This study compares the proposed BBA-based dimensionality reduction scheme with four other dimensionality reduction methodologies in terms of classification performance. Experimental results show that the proposed methodology is superior to other dimensionality reduction approaches, yielding an average classification accuracy of 94.9%, 95.8%, and 98.4% under bearing rotational speeds at 20 revolutions-per-minute (RPM), 80 RPM, and 140 RPM, respectively.
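The feature-calculation step described above can be illustrated in isolation. The sketch below computes relative wavelet packet node energies and the entropy of that distribution from precomputed node energies; the wavelet packet transform itself, the BBA-based feature selection, and the one-against-all SVM are all omitted, and the energy values are synthetic.

```python
import math

def relative_energies(node_energies):
    """Relative wavelet packet node energy: each node's share of the
    total signal energy across all packet nodes at a given level."""
    total = sum(node_energies)
    return [e / total for e in node_energies]

def wavelet_entropy(node_energies):
    """Shannon entropy of the relative-energy distribution. Energy
    concentrated in few nodes (a sharp defect signature) gives low
    entropy; energy spread evenly gives high entropy."""
    return -sum(p * math.log(p)
                for p in relative_energies(node_energies) if p > 0)

healthy = [5.0, 4.8, 5.2, 5.1]   # energy spread evenly (synthetic)
faulty  = [18.0, 0.9, 0.7, 0.5]  # energy concentrated in one node (synthetic)
print(wavelet_entropy(healthy), wavelet_entropy(faulty))
```

In the proposed scheme these per-node energies and entropies form the raw feature vector from which the binary bat algorithm then selects the most discriminative subset before classification.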
Abstract:
The workshop is an activity of the IMIA Working Group ‘Security in Health Information Systems’ (SiHIS). It focuses on a growing global problem: how to protect personal health data in today’s global eHealth and digital health environment. It will review available trust-building mechanisms, security measures and privacy policies. Technology alone does not solve this complex problem, and current protection policies and legislation are considered woefully inadequate. Among other trust-building tools, certification and accreditation mechanisms are discussed in detail, and the workshop will assess their acceptance and quality. The need for further research and international collective action is discussed. This workshop provides an opportunity to address a critical growing problem and make pragmatic proposals for sustainable and effective solutions for global eHealth and digital health.