767 results for Cost modelling
Abstract:
Aims: To provide the best available evidence to determine the impact of nurse practitioner services on cost, quality of care, satisfaction and waiting times in the emergency department for adult patients. Background: The delivery of quality care in the emergency department is one of the most important service indicators in health delivery. Increasing service pressures in the emergency department have resulted in the adoption of service innovation models: the most common and rapidly expanding of these is emergency nurse practitioner services. The rapid uptake of emergency nurse practitioner service in Australia has outpaced the capacity to evaluate this service model in terms of outcomes related to safety and quality of patient care. Previous research is now outdated and not commensurate with the changing domain of delivering emergency care with nurse practitioner services. Data: A comprehensive search of four electronic databases from 2006–2013 was conducted to identify research evaluating nurse practitioner service impact in the emergency department. English language articles were sought using MEDLINE, CINAHL, Embase and Cochrane and included two previous systematic reviews completed five and seven years ago. Methods: A three-step approach was used. Following a comprehensive search, two reviewers assessed identified studies against the inclusion criteria. From the original 1013 studies, 14 papers were retained for critical appraisal of methodological quality by two independent reviewers, and data were extracted using standardised tools. Results: Narrative synthesis was conducted to summarise and report the findings, as insufficient data were available for meta-analysis of results. This systematic review has shown that emergency nurse practitioner service has a positive impact on quality of care, patient satisfaction and waiting times. There was insufficient evidence to draw conclusions regarding impact on costs. Conclusion: Synthesis of the available research attempts to provide an evidence base for emergency nurse practitioner service to guide healthcare leaders, policy makers and clinicians in reforming emergency department service provision. The findings suggest that further quality research is required for comparative measures of clinical and service effectiveness of emergency nurse practitioner service. In the context of increased health service demand and the need to provide timely and effective care to patients, such measures will assist in delivering quality patient care.
Abstract:
Purpose: This study explores recent claims that humans exhibit a minimum cost of transport (CoTmin) for running which occurs at an intermediate speed, and assesses individual physiological, gait and training characteristics. Methods: Twelve healthy participants with varying levels of fitness and running experience ran on a treadmill at six self-selected speeds in a discontinuous protocol over three sessions. Running speed (km·hr⁻¹), V̇O2 (mL·kg⁻¹·km⁻¹), CoT (kcal·km⁻¹), heart rate (beats·min⁻¹) and cadence (steps·min⁻¹) were continuously measured. V̇O2max was measured in a fourth testing session. The occurrence of a CoTmin was investigated and its presence or absence examined with respect to fitness, gait and training characteristics. Results: Five participants showed a clear CoTmin at an intermediate speed and a statistically significant (p < 0.05) quadratic CoT-speed function, while the other participants did not show such evidence. Participants were then categorised and compared with respect to the strength of evidence for a CoTmin (ClearCoTmin and NoCoTmin). The ClearCoTmin group displayed a significantly higher correlation between speed and cadence and more endurance training and exercise sessions per week than the NoCoTmin group, as well as a higher, though marginally non-significant, aerobic capacity. Some runners still showed a CoTmin at an intermediate speed even after subtraction of resting energy expenditure. Conclusion: The findings confirm the existence of an optimal speed for human running in some, but not all, participants. Those exhibiting a CoTmin undertook a higher volume of running, ran with a cadence that was more consistently modulated with speed, and tended to be aerobically fitter. The ability to minimise the energetic cost of transport appears not to be a ubiquitous feature of human running but may emerge in some individuals with extensive running experience.
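To make the quadratic CoT-speed relationship described above concrete, the sketch below fits a parabola to hypothetical cost-of-transport measurements and locates the speed of minimum cost; the speeds, CoT values and the convexity check are illustrative assumptions, not the study's data.

```python
# A minimal sketch (not the authors' code): fit CoT = a*v^2 + b*v + c to
# cost-of-transport measurements and locate the speed of minimum CoT (CoTmin).
# All numbers below are invented placeholders.
import numpy as np

speeds = np.array([7.0, 8.5, 10.0, 11.5, 13.0, 14.5])   # km·hr⁻¹ (hypothetical)
cot    = np.array([78.0, 74.0, 71.5, 71.0, 72.5, 76.0])  # kcal·km⁻¹ (hypothetical)

a, b, c = np.polyfit(speeds, cot, deg=2)   # quadratic CoT-speed function

if a > 0:   # convex curve: an interior minimum exists (the "ClearCoTmin" case)
    v_opt = -b / (2 * a)                   # speed at the minimum of the parabola
    cot_min = np.polyval([a, b, c], v_opt)
    print(f"CoTmin ≈ {cot_min:.1f} kcal·km⁻¹ at ≈ {v_opt:.1f} km·hr⁻¹")
else:       # no interior minimum (the "NoCoTmin" case)
    print("Fitted CoT-speed curve is not convex; no intermediate-speed minimum")
```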
Abstract:
This paper presents a comprehensive numerical procedure to treat the blast response of laminated glass (LG) panels and studies the influence of important material parameters. Post-crack behaviour of the LG panel and the contribution of the interlayer towards blast resistance are treated. Modelling techniques are validated by comparison with existing experimental results. Findings indicate that the tensile strength of glass considerably influences the blast response of LG panels, while the interlayer material properties have a major impact on the response under higher blast loads. Initially, the glass panes absorb most of the blast energy, but after the glass breaks, the interlayer deforms further and absorbs most of the blast energy. LG panels should be designed to fail by tearing of the interlayer rather than by failure at the supports in order to achieve a desired level of protection. In this respect, the material properties of the glass, interlayer and sealant joints play important roles, but unfortunately they are not accounted for in the current design standards. The new information generated in this paper will enhance the capabilities of engineers to better design LG panels under blast loads and to select better materials to improve the blast response of LG panels.
Abstract:
The control of the generation and assembly of the electronegative plasma-grown particles is discussed. Due to the large number of elementary processes of particle creation and loss, electronegative complex plasmas should be treated as open systems where the stationary states are sustained by various particle creation and loss processes in the plasma bulk, on the walls, and on the dust grain surfaces. To be physically self-consistent, ionization, diffusion, electron attachment, recombination, dust charge variation, and dissipation due to electron and ion elastic collisions with neutrals and fine particles, as well as charging collisions with the dust, must be accounted for.
Aligning off-balance sheet risk, on-balance sheet risk and audit fees: a PLS path modelling analysis
Abstract:
This study focuses on using the partial least squares (PLS) path modelling technique in archival auditing research by replicating the data and research questions from prior bank audit fee studies. PLS path modelling allows for inter-correlations among audit fee determinants by establishing latent constructs and multiple relationship paths in one simultaneous PLS path model. Endogeneity concerns about auditor choice can also be addressed with PLS path modelling. With a sample of US bank holding companies for the period 2003-2009, we examine the associations among on-balance sheet financial risks, off-balance sheet risks and audit fees, and also address the pervasive client size effect and the effect of the self-selection of auditors. The results endorse the dominating effect of size on audit fees, both directly and indirectly via its impacts on other audit fee determinants. After simultaneously considering the self-selection of auditors, we still find audit fee premiums for Big N auditors, which is the second most important factor in audit fee determination. On-balance sheet financial risk measures in terms of capital adequacy, loan composition, earnings and asset quality performance have positive impacts on audit fees. After allowing for the positive influence of on-balance sheet financial risks and entity size on off-balance sheet risk, the off-balance sheet risk measure, SECRISK, is still positively associated with bank audit fees, both before and after the onset of the financial crisis. The consistency of the results from this study with the prior literature provides supporting evidence and enhances confidence in the application of this new research technique in archival accounting studies.
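Full PLS path modelling with latent constructs and simultaneous structural paths is normally carried out in dedicated structural equation modelling tools; as a loose, hypothetical illustration of the underlying idea of relating a block of risk and size indicators to audit fees, the sketch below uses scikit-learn's PLSRegression on simulated data. All variable names and the fee-generating relationship are invented and do not reflect the study's sample or its full path model.

```python
# Rough illustration only: a single PLS regression relating observed risk/size
# indicators to (log) audit fees on simulated bank data. Column names and the
# data-generating process are hypothetical.
import numpy as np
import pandas as pd
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 200
banks = pd.DataFrame({
    "ln_assets": rng.normal(16, 1.5, n),            # size proxy (hypothetical)
    "capital_adequacy": rng.normal(0.12, 0.02, n),
    "loan_ratio": rng.normal(0.6, 0.1, n),
    "secrisk": rng.normal(0.05, 0.02, n),           # off-balance sheet risk proxy
})
# Hypothetical fee relationship: size dominates, off-balance sheet risk adds a premium.
banks["ln_audit_fee"] = (0.6 * banks["ln_assets"] + 2.0 * banks["secrisk"]
                         + rng.normal(0, 0.3, n))

pls = PLSRegression(n_components=2)
pls.fit(banks[["ln_assets", "capital_adequacy", "loan_ratio", "secrisk"]],
        banks[["ln_audit_fee"]])
print(pls.coef_)   # weights linking the indicator block to audit fees
```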
Abstract:
The US National Institute of Standards and Technology (NIST) showed that, in 2004, owners and operations managers bore two thirds of the total industry cost burden from inadequate interoperability in construction projects from inception to operation, amounting to USD 10.6 billion. Building Information Modelling (BIM) and similar tools were identified by Engineers Australia in 2005 as potential instruments to significantly reduce this sum, which in Australia could amount to a total industry-wide cost burden of AUD 12 billion. Public sector road authorities in Australia have a key responsibility in driving initiatives to reduce greenhouse gas emissions from the construction and operation of transport infrastructure. However, as previous research has shown, the Environmental Impact Assessment process, typically used for project approvals and permitting based on project designs available at the consent stage, lacks Key Performance Indicators (KPIs) that include long-term impact factors and the transfer of information throughout the project life cycle. In the building construction industry, BIM is widely used to model sustainability KPIs such as energy consumption, and is integrated with facility management systems. This paper proposes that a similar use of BIM in early design phases of transport infrastructure could provide: (i) productivity gains through improved interoperability and documentation; (ii) the opportunity to carry out detailed cost-benefit analyses leading to significant operational cost savings; (iii) coordinated planning of street and highway lighting with other energy and environmental considerations; (iv) measurable KPIs that include long-term impact factors which are transferable throughout the project life cycle; and (v) the opportunity to integrate design documentation with sustainability whole-of-life targets.
Abstract:
Organisations are constantly seeking new ways to improve operational efficiencies. This research study investigates a novel way to identify potential efficiency gains in business operations by observing how they have been carried out in the past and then exploring better ways of executing them, taking into account trade-offs between time, cost and resource utilisation. This paper demonstrates how these trade-offs can be incorporated into the assessment of alternative process execution scenarios by making use of a cost environment. A genetic algorithm-based approach is proposed to explore and assess alternative process execution scenarios, where the objective function is represented by a comprehensive cost structure that captures different process dimensions. Experiments conducted with different variants of the genetic algorithm evaluate the approach's feasibility. The findings demonstrate that a genetic algorithm-based approach is able to use cost reduction as a way to identify improved execution scenarios in terms of reduced case durations and increased resource utilisation. The ultimate aim is to use the cost-related insights gained from such improved scenarios to put forward recommendations for reducing process-related cost within organisations.
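A minimal sketch of the kind of genetic algorithm search described above (not the paper's implementation): candidate execution scenarios assign resources to process activities, and each scenario is scored with a simple cost function that trades off labour cost against case duration. The activities, resource rates, delay penalty and GA settings are all hypothetical.

```python
# Toy genetic algorithm over process execution scenarios (hypothetical data).
import random

ACTIVITIES = ["triage", "assess", "approve", "notify"]
RESOURCES = {"junior": {"rate": 40, "speed": 1.0},
             "senior": {"rate": 80, "speed": 1.6}}     # cost/hour, relative speed
BASE_HOURS = {"triage": 2, "assess": 5, "approve": 3, "notify": 1}

def cost(scenario):
    """Total cost = labour cost + penalty on overall case duration (hypothetical weights)."""
    total_cost = total_hours = 0.0
    for act, res in zip(ACTIVITIES, scenario):
        hours = BASE_HOURS[act] / RESOURCES[res]["speed"]
        total_cost += hours * RESOURCES[res]["rate"]
        total_hours += hours
    return total_cost + 15 * total_hours               # 15/hour delay penalty

def evolve(pop_size=30, generations=50, mutation=0.1):
    pop = [[random.choice(list(RESOURCES)) for _ in ACTIVITIES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]               # keep the cheapest scenarios
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(ACTIVITIES))
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < mutation:             # random resource mutation
                child[random.randrange(len(child))] = random.choice(list(RESOURCES))
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print(best, round(cost(best), 2))
```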
Abstract:
We propose and evaluate a novel methodology to identify the rolling shutter parameters of a real camera. We also present a model for the geometric distortion introduced when a moving camera with a rolling shutter views a scene. Unlike previous work, this model allows for arbitrary camera motion, including accelerations; is exact rather than a linearization; and supports arbitrary camera projection models, for example fisheye or panoramic. We show the significance of the errors introduced by a rolling shutter for typical robot vision problems such as structure from motion, visual odometry and pose estimation.
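As a simple illustration of why rolling shutter matters, the sketch below computes the row-dependent image displacement of a laterally moving scene point for an assumed line delay; the parameters are hypothetical, and the paper's model goes much further by handling arbitrary motion and projection models exactly.

```python
# Toy rolling-shutter skew: each image row is exposed a little later than the
# previous one, so a horizontally moving point appears shifted by a row-dependent
# amount. All parameter values are hypothetical.
line_delay = 30e-6          # seconds between successive row readouts (assumed)
object_speed_px = 2000.0    # apparent horizontal speed in pixels per second (assumed)

for row in (0, 120, 240, 360, 479):                  # sample rows of a 480-row image
    t = row * line_delay                             # capture time relative to row 0
    shift = object_speed_px * t                      # displacement accumulated by then
    print(f"row {row:3d}: captured {t * 1e3:.2f} ms later, shifted {shift:.1f} px")
```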
Abstract:
In the coming decades, the mining industry faces the dual challenge of lowering both its water and energy use. This presents a difficulty, since technological advances that decrease the use of one can increase the use of the other. Historically, energy and water use have been modelled independently, making it difficult to evaluate the true costs and benefits of water and energy improvements. This paper presents a hierarchical systems model that is able to represent interconnected water and energy use at a whole-of-site scale. In order to explore the links between water and energy, four technological advancements have been modelled: the use of dust suppression additives, the adoption of thickened tailings, the transition to dry processing and the incorporation of a treatment plant. The results show a synergy between decreased water and energy use for dust suppression additives, but a trade-off for the others.
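A toy sketch of the hierarchical accounting idea (not the authors' model): each site component carries its own water and energy demands, totals are aggregated up the hierarchy, and a technology option can shift both quantities at once, making synergies and trade-offs visible. All components and figures are invented.

```python
# Minimal hierarchical water/energy accounting with hypothetical numbers.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    water_ML: float = 0.0      # megalitres per year (hypothetical)
    energy_GWh: float = 0.0    # gigawatt-hours per year (hypothetical)
    children: list = field(default_factory=list)

    def totals(self):
        """Sum water and energy over this component and everything below it."""
        w, e = self.water_ML, self.energy_GWh
        for child in self.children:
            cw, ce = child.totals()
            w, e = w + cw, e + ce
        return w, e

site = Component("mine site", children=[
    Component("dust suppression", water_ML=800, energy_GWh=1.0),
    Component("processing plant", water_ML=4000, energy_GWh=60.0),
    Component("tailings", water_ML=1500, energy_GWh=5.0),
])
print("baseline (water ML, energy GWh):", site.totals())

# Hypothetical technology change: thickened tailings save water but need extra pumping energy,
# i.e. a water-energy trade-off rather than a synergy.
site.children[2].water_ML -= 400
site.children[2].energy_GWh += 2.0
print("with thickened tailings:", site.totals())
```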
Abstract:
Traffic incidents are key contributors to non-recurrent congestion, potentially generating significant delay. Factors that influence the duration of incidents are important to understand so that effective mitigation strategies can be implemented. To identify and quantify the effects of influential factors, a methodology for studying total incident duration based on historical data from an ‘integrated database’ is proposed. Incident duration models are developed using a selected freeway segment in the Southeast Queensland (Australia) network. The models include incident detection and recovery time as components of incident duration. A hazard-based duration modelling approach is applied to model incident duration as a function of a variety of factors that influence traffic incident duration. Parametric accelerated failure time survival models are developed to capture heterogeneity as a function of explanatory variables, with both fixed and random parameter specifications. The analysis reveals that factors affecting incident duration include incident characteristics (severity, type, injury, medical requirements, etc.), infrastructure characteristics (roadway shoulder availability), time of day, and traffic characteristics. The results indicate that event type durations are distinctly different, thus requiring different responses to effectively clear them. Furthermore, the results highlight the presence of unobserved incident duration heterogeneity as captured by the random parameter models, suggesting that additional factors need to be considered in future modelling efforts.
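A hedged sketch of an accelerated failure time (AFT) duration model of the general kind described above, using the lifelines package on simulated incident records; the covariates, their assumed effects and the data are illustrative only and are not the study's.

```python
# Toy Weibull AFT model of incident duration on simulated data (hypothetical effects).
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(1)
n = 200
injury = rng.integers(0, 2, n)        # 1 = injury involved (hypothetical covariate)
peak_hour = rng.integers(0, 2, n)     # 1 = peak-hour incident (hypothetical covariate)

# Hypothetical data-generating process: injuries lengthen clearance,
# peak-hour incidents are cleared slightly faster (e.g. quicker detection).
duration_min = np.exp(3.2 + 0.5 * injury - 0.2 * peak_hour + rng.normal(0, 0.3, n))

incidents = pd.DataFrame({
    "duration_min": duration_min,
    "cleared": 1,                     # 1 = clearance observed (no censoring in this toy data)
    "injury": injury,
    "peak_hour": peak_hour,
})

aft = WeibullAFTFitter()
aft.fit(incidents, duration_col="duration_min", event_col="cleared")
aft.print_summary()   # coefficients show how covariates accelerate or decelerate clearance
```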
Abstract:
In the Australian sugar industry, sugar cane is smashed into a straw-like material by hammers before being squeezed between large rollers to extract the sugar juice. The straw-like material is initially called prepared cane and then bagasse as it passes through successive roller milling units. The sugar cane materials are highly compressible, have high moisture content, are fibrous, and resemble some peat soils in both appearance and mechanical behaviour. A promising avenue for improving the performance of milling units for increased throughput and juice extraction, and for reducing costs, is modelling of the crushing process. To achieve this, it is believed necessary that milling models should be able to reproduce measured bagasse behaviour. This investigation sought to measure the mechanical (compression, shear, and volume) behaviour of prepared cane and bagasse, to identify limitations in currently used material models, and to progress towards a material model that can predict bagasse behaviour adequately. Tests were carried out using modified direct shear test equipment and procedures over most of the large range of pressures occurring in the crushing process. The investigation included an assessment, carried out using finite element modelling, of the performance of the direct shear test for measuring bagasse behaviour. It was shown that prepared cane and bagasse exhibited critical state behaviour similar to that of soils, and the magnitudes of the material parameters were determined. The measurements were used to identify desirable features for a bagasse material model. It was shown that currently used material models had major limitations in reproducing bagasse behaviour. A model from the soil mechanics literature was modified and shown to achieve improved reproduction while using magnitudes of material parameters that better reflected the measured values. Finally, a typical three-roller mill pressure feeder configuration was modelled. The predictions and limitations were assessed by comparison to measured data from a sugar factory.
Abstract:
Computer modelling has been used extensively in some processes in the sugar industry to achieve significant gains. This paper reviews the investigations carried out over approximately the last twenty-five years, including the successes but also areas where problems and delays have been encountered. In that time the capabilities of both hardware and software have increased dramatically. For some processes, such as cane cleaning, cane billet preparation, and sugar drying, the application of computer modelling towards improved equipment design and operation has been quite limited. A particular problem has been the large number of particles and particle interactions in these applications, which, if modelled individually, is computationally very intensive. Despite the problems, some attempts have already been made and knowledge has been gained on tackling these issues. Even where detailed modelling is lacking, a model can provide some useful insights into the processes. Options for attacking these more computationally intensive problems include the use of commercial software packages, which are usually very robust and allow the addition of user-supplied subroutines to adapt the software to particular problems. Suppliers of such software usually charge a fee per CPU licence, which is often problematic for large problems that require the use of many CPUs. Another option is open source software that has been developed with the capability to access large parallel resources. Such software has the added advantage of access to the full internal coding. This paper identifies and discusses in detail the software options with the potential capability to achieve improvements in the sugar industry.
Abstract:
A better understanding of the behaviour of prepared cane and bagasse, and the ability to model the mechanical behaviour of bagasse as it is squeezed in a milling unit to extract juice, would help identify how to improve the current process, for example to reduce final bagasse moisture. Previous investigations have shown that juice flow through bagasse obeys Darcy’s permeability law, that the grip of the rough surface of the grooves on the bagasse can be represented by the Mohr-Coulomb failure criterion for soils, and that the internal mechanical behaviour of bagasse is critical state behaviour similar to that of sand and clay. Current finite element method (FEM) packages available commercially contain adequate permeability models. However, no commercially available software appears to contain an adequate mechanical model for bagasse: such software includes only a few material models for soils and similar materials, while the code for the hundreds of models developed at universities and government research centres remains confidential. Progress has been made over the last ten years towards implementing a mechanical model for bagasse in finite element software. This paper builds on that progress and takes a further step towards obtaining an adequate material model: the fifth and final loading condition outlined previously, shearing of heavily over-consolidated bagasse, is addressed.
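For readers unfamiliar with the two constitutive ingredients cited above, the sketch below evaluates Darcy's law for juice flow and the Mohr-Coulomb failure criterion for the groove-bagasse interface with purely hypothetical parameter values; it illustrates the relationships only and is not the paper's material model.

```python
# Darcy's law and Mohr-Coulomb shear strength with invented parameter values.
import math

# Mohr-Coulomb: shear strength tau_f = c + sigma_n * tan(phi)
c_kPa = 5.0        # apparent cohesion, kPa (hypothetical)
phi_deg = 35.0     # friction angle, degrees (hypothetical)
for sigma_n in (50, 200, 800):                         # normal pressure, kPa
    tau_f = c_kPa + sigma_n * math.tan(math.radians(phi_deg))
    print(f"sigma_n = {sigma_n:4d} kPa -> tau_f = {tau_f:6.1f} kPa")

# Darcy's law: superficial velocity q = -(k / mu) * dp/dx
k = 1e-12          # permeability, m^2 (hypothetical)
mu = 1e-3          # juice viscosity, Pa·s (hypothetical)
dpdx = -5e6        # pressure gradient, Pa/m (hypothetical)
q = -(k / mu) * dpdx
print(f"Darcy flux q = {q:.3e} m/s")
```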
Abstract:
The generation of a correlation matrix for a set of genomic sequences is a common requirement in many bioinformatics problems, such as phylogenetic analysis. Each sequence may be millions of bases long and there may be thousands of such sequences that we wish to compare, so not all sequences may fit into main memory at the same time. Each sequence needs to be compared with every other sequence, so we will generally need to page some sequences in and out more than once. In order to minimize execution time we need to minimize this I/O. This paper develops an approach for faster and scalable computation of large correlation matrices through maximal exploitation of the available memory and a reduction in the number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on different computing platforms with different amounts of memory and can be applied to different bioinformatics problems with different correlation matrix sizes. The significant performance improvement of the approach over previous work is demonstrated through benchmark examples.
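A minimal sketch of the blocking strategy implied above (not the paper's algorithm): sequences are paged in block by block, and every pair of blocks is processed once, so each sequence is read from disk on the order of the number of blocks rather than the number of sequences. The block size, loader and comparison measure are placeholders.

```python
# Blocked computation of an all-pairs correlation matrix with limited memory.
import itertools

NUM_SEQUENCES = 12
BLOCK_SIZE = 4              # how many sequences fit in memory at once (hypothetical)

def load_block(indices):
    """Placeholder for paging sequences in from disk."""
    return {i: f"sequence-{i}" for i in indices}

def compare(seq_a, seq_b):
    """Placeholder for the real pairwise correlation/distance measure."""
    return abs(len(seq_a) - len(seq_b))

blocks = [range(start, min(start + BLOCK_SIZE, NUM_SEQUENCES))
          for start in range(0, NUM_SEQUENCES, BLOCK_SIZE)]

corr = {}
# Each unordered pair of blocks (including a block with itself) is processed once,
# so each block is paged in O(number of blocks) times rather than O(number of sequences).
for bi, bj in itertools.combinations_with_replacement(range(len(blocks)), 2):
    in_memory = {**load_block(blocks[bi]), **load_block(blocks[bj])}
    for i in blocks[bi]:
        for j in blocks[bj]:
            if i < j:
                corr[(i, j)] = compare(in_memory[i], in_memory[j])

print(len(corr), "pairwise entries computed")   # expect 12 * 11 / 2 = 66
```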