972 results for Optimal placements
Abstract:
This paper proposes a new iterative method for obtaining an optimally fitting plate for preoperative planning purposes. The proposed method integrates four commercially available software tools, MATLAB, Rapidform2006, SolidWorks and ANSYS, each performing specific tasks, to obtain a plate shape that fits an individual tibia optimally and is mechanically safe. A typical challenge when crossing multiple platforms is ensuring correct data transfer. We present an example implementation of the proposed method to demonstrate successful data transfer between the four platforms and the feasibility of the method.
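Cross-platform geometry transfer of this kind typically relies on neutral mesh formats. As a rough illustration (not the paper's actual pipeline), the sketch below writes a triangulated surface to ASCII STL, a format that CAD/FEA tools such as SolidWorks and ANSYS can import; the single-triangle mesh is a placeholder for a real plate surface.

```python
# Minimal sketch: exporting a triangulated surface as ASCII STL, a neutral
# format importable by CAD/FEA tools. The mesh here is one illustrative
# triangle; a real plate surface would come from the fitted tibia geometry.

def write_ascii_stl(path, triangles, name="plate"):
    """triangles: list of (v1, v2, v3), each vertex an (x, y, z) tuple."""
    def normal(v1, v2, v3):
        ux, uy, uz = (v2[i] - v1[i] for i in range(3))
        vx, vy, vz = (v3[i] - v1[i] for i in range(3))
        n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
        length = sum(c * c for c in n) ** 0.5 or 1.0
        return tuple(c / length for c in n)

    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v1, v2, v3 in triangles:
            nx, ny, nz = normal(v1, v2, v3)
            f.write(f"  facet normal {nx:e} {ny:e} {nz:e}\n")
            f.write("    outer loop\n")
            for v in (v1, v2, v3):
                f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

write_ascii_stl("plate.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```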
Abstract:
This paper examines the relationship between a final-year tertiary work placement for criminology students at Griffith University in Brisbane and the development of their work self-efficacy. Using a work self-efficacy instrument developed by Professor Joe Raelin at Northeastern University in Boston, a pilot phase in 2006 and a larger study in 2007 investigated the students' responses across seven self-efficacy factors: learning, problem-solving, teamwork, sensitivity, politics, pressure, and role expectations. Both studies used a pre- and post-test, and comparisons between the two indicated that students believed their abilities to participate constructively in their professional work contexts improved significantly as a result of their placement experience, except in the areas of learning, teamwork and sensitivity. This finding will allow us to continue to refine the processes of work placements in order to ensure the integrity of this method for student learning.
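For context, pre/post comparisons of this kind are commonly analysed with a paired-samples t-test per self-efficacy factor. The sketch below is illustrative only: the scores are fabricated placeholders, not the study's data.

```python
# Illustrative only: paired pre/post comparison for one self-efficacy factor.
# The ratings below are fabricated placeholders, not data from the study.
from scipy import stats

pre  = [3.2, 3.8, 2.9, 3.5, 4.0, 3.1, 3.6, 3.3]   # pre-placement ratings
post = [3.9, 4.1, 3.4, 3.8, 4.3, 3.5, 4.0, 3.7]   # post-placement ratings

t, p = stats.ttest_rel(post, pre)  # paired-samples t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```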
Abstract:
Objectives: PEPA is funded by the Department of Health and Ageing and aims to further improve the skills and confidence of the generalist workforce in working with people with palliative care needs. Recent quality improvement initiatives to promote the transfer of learning into practice include the appointment of a clinical educator, implementation of an online module for mentors, and delivery of a mentoring workshop (in collaboration with NSAP and PCC4U). This paper presents an overview of outcomes from these quality improvement initiatives. Methods: PEPA host sites are selected based on their specialist palliative care level. Host-site managers are surveyed six-monthly, and participants are surveyed before and three months after placement to collect open and fixed-response data on their experience of the program. Participants in the mentoring workshop (n=39) were asked to respond to a survey regarding the workshop outcomes. Results: The percentage of placement participants who strongly agreed they ‘have the ability to implement the interventions required for people who have a life-limiting illness’ increased from 35% in 2011 (n=34) to 51% in 2012 (n=91) post-placement. Responses from mentor workshop participants indicated that 76% of respondents (n=25) agreed that they were able to identify principles for mentoring in the context of palliative care. In 2012, 61% of host-site managers (n=54) strongly agreed that PEPA supports clinicians working with people with a life-limiting illness. Conclusion: Strategies to build the capabilities of palliative care professionals to mentor and support the learning experience of PEPA participants are critical to ongoing improvement of the program.
Abstract:
The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks; most of them, however, do not generalize well to large-scale networks. To tackle this, we formulate a statistical framework for the problem and propose a trans-dimensional simulated annealing algorithm to deal with it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that it offers similar performance on small-scale problems. However, we also demonstrate its capability on large-scale problems, where it produces better results than two alternative heuristics designed to address the scalability issue of BIP. Finally, we show the versatility of our approach in a number of specific scenarios.
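As a toy illustration of the general idea (not the authors' implementation), the sketch below runs a trans-dimensional simulated annealing search in which moves may add, drop, or relocate a camera, so the dimension of the solution changes during the search. The coverage objective, geometry, and all parameters are illustrative assumptions.

```python
# Toy sketch of trans-dimensional simulated annealing for camera placement:
# the state is a set of cameras drawn from candidate sites, and moves may
# add, drop, or relocate a camera (changing the dimension of the solution).
# Objective, geometry, and parameters are illustrative, not from the paper.
import math, random

random.seed(0)
targets = [(random.random(), random.random()) for _ in range(60)]
sites = [(x / 9, y / 9) for x in range(10) for y in range(10)]
RADIUS, CAMERA_COST = 0.25, 3.0

def score(cams):
    covered = sum(
        any(math.dist(t, c) <= RADIUS for c in cams) for t in targets
    )
    return covered - CAMERA_COST * len(cams)   # coverage minus camera cost

state = random.sample(sites, 4)
best, T = list(state), 1.0
for step in range(5000):
    move = random.choice(["add", "drop", "swap"])
    cand = list(state)
    if move == "add":
        cand.append(random.choice(sites))
    elif move == "drop" and len(cand) > 1:
        cand.pop(random.randrange(len(cand)))
    else:
        cand[random.randrange(len(cand))] = random.choice(sites)
    delta = score(cand) - score(state)
    if delta >= 0 or random.random() < math.exp(delta / T):
        state = cand                            # Metropolis acceptance
        if score(state) > score(best):
            best = list(state)
    T = max(1e-3, T * 0.999)                    # geometric cooling schedule

print(f"{len(best)} cameras, score {score(best):.1f}")
```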
Abstract:
Using our porcine model of deep dermal partial-thickness burn injury, various first aid cooling techniques (15 °C running water, 2 °C running water, ice) were applied for 20 minutes and compared with a control (ambient temperature). Subdermal temperatures were monitored during treatment, and wounds were observed and photographed weekly for 6 weeks to assess reepithelialization, wound surface area and cosmetic appearance. Tissue histology and scar tensile strength were examined 6 weeks after the burn. The 2 °C and ice treatments decreased the subdermal temperature fastest and to the lowest point; however, the 15 °C and 2 °C treated wounds generally had better outcomes in terms of reepithelialization, scar histology, and scar appearance. These findings provide evidence supporting the current first aid guideline of cold tap water (approximately 15 °C) for 20 minutes as beneficial in helping to heal the burn wound. Colder water at 2 °C is also beneficial. Ice should not be used.
Abstract:
Using our porcine model of deep dermal partial-thickness burn injury, various durations (10 min, 20 min, 30 min or 1 h) and delays (immediate, 10 min, 1 h, 3 h) of 15 °C running water first aid were applied to burns and compared with untreated controls. Subdermal temperatures were monitored during treatment, and wounds were observed weekly for 6 weeks for re-epithelialisation, wound surface area and cosmetic appearance. At 6 weeks after the burn, tissue biopsies of the scar were taken for histological analysis. Results showed that immediate application of cold running water for 20 min is associated with improved re-epithelialisation over the first 2 weeks post-burn and decreased scar tissue at 6 weeks. First aid application of cold water for as little as 10 min, or delayed by up to 1 h, still provides benefit.
Abstract:
This paper presents an optimisation algorithm to maximise the loadability of single wire earth return (SWER) networks by minimising the cost of batteries and regulators subject to voltage constraints and thermal limits. The algorithm, which finds the optimal locations of batteries and regulators, uses a hybrid of discrete particle swarm optimisation and mutation (DPSO + Mutation). Simulation results on a realistic, highly loaded SWER network show the effectiveness of using batteries to improve the loadability of the network in a cost-effective way. In this case, while the existing network can supply only 61% of peak load without violating the constraints, the loadability is increased to the full peak load by utilising two optimally located battery sites. That is, in a SWER system like the one studied, each installed kVA of optimally located battery capacity supports a loadability increase of about 2 kVA.
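A minimal sketch of a hybrid discrete PSO with mutation of the kind described above follows. The feeder model and fitness function are stand-ins: a real study would evaluate each candidate placement with a load flow against the voltage and thermal constraints.

```python
# Sketch of a hybrid discrete PSO with mutation (DPSO + Mutation) for picking
# device locations on a feeder. The fitness function is a stand-in: a real
# study would run a load flow and check voltage and thermal constraints.
import math, random

random.seed(1)
N_BUSES, SWARM, ITERS, MAX_DEVICES = 20, 15, 100, 2

def fitness(bits):
    sites = [i for i, b in enumerate(bits) if b]
    if not sites:
        return -1e9                        # no device installed at all
    # Stand-in objective: reward sites toward the feeder end, penalise
    # exceeding the device budget.
    spread = sum(i / N_BUSES for i in sites)
    return spread - 2.0 * max(0, len(sites) - MAX_DEVICES)

def sigmoid(v):
    return 1 / (1 + math.exp(-v))

pos = [[random.randint(0, 1) for _ in range(N_BUSES)] for _ in range(SWARM)]
vel = [[0.0] * N_BUSES for _ in range(SWARM)]
pbest = [list(p) for p in pos]
gbest = list(max(pos, key=fitness))

for _ in range(ITERS):
    for k in range(SWARM):
        for j in range(N_BUSES):
            r1, r2 = random.random(), random.random()
            vel[k][j] = max(-4.0, min(4.0, vel[k][j]
                          + 2 * r1 * (pbest[k][j] - pos[k][j])
                          + 2 * r2 * (gbest[j] - pos[k][j])))
            pos[k][j] = 1 if random.random() < sigmoid(vel[k][j]) else 0
        if random.random() < 0.1:          # mutation: flip one random bit
            j = random.randrange(N_BUSES)
            pos[k][j] ^= 1
        if fitness(pos[k]) > fitness(pbest[k]):
            pbest[k] = list(pos[k])
    gbest = list(max(pbest, key=fitness))

print("best sites:", [i for i, b in enumerate(gbest) if b])
```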
Abstract:
This paper describes a novel optimal path planning strategy for long-duration AUV operations in environments with time-varying ocean currents. These currents can exceed the maximum achievable speed of the AUV and can temporarily expose obstacles. In contrast to most other path planning strategies, paths therefore have to be defined in time as well as space. The solution described here exploits ocean currents to achieve mission goals with minimal energy expenditure, or with a trade-off between mission time and required energy. The proposed algorithm uses a parallel swarm search to reduce susceptibility to the large local minima on the complex cost surface. The performance of the optimisation algorithm is evaluated in simulation and experimentally with the Starbug AUV, using a validated ocean model of Brisbane's Moreton Bay.
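The sketch below gives a much-simplified flavour of a parallel swarm search over candidate paths in a time-varying current field: several candidate paths are improved independently, reducing the chance of the whole search settling in one poor local minimum. The current model, vehicle model and energy cost are toy stand-ins, not the validated ocean model used in the paper.

```python
# Much-simplified sketch of a parallel swarm search over candidate AUV paths
# in a time-varying current field. The current model and energy cost are toy
# stand-ins for the validated ocean model used in the paper.
import random

random.seed(2)
START, GOAL, N_WAY, SWARM, ITERS = (0.0, 0.0), (10.0, 0.0), 4, 20, 200

def current(x, y, t):
    # Toy tidal current: direction oscillates in time, magnitude varies with y.
    return (0.5 * (1 if int(t) % 2 == 0 else -1), 0.1 * y)

def energy(waypoints):
    pts = [START] + waypoints + [GOAL]
    cost, t = 0.0, 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        cx, cy = current(x0, y0, t)
        # Thrust needed = desired displacement minus free ride from current.
        cost += ((x1 - x0 - cx) ** 2 + (y1 - y0 - cy) ** 2) ** 0.5
        t += 1.0
    return cost

def random_path():
    return [(random.uniform(0, 10), random.uniform(-3, 3)) for _ in range(N_WAY)]

swarm = [random_path() for _ in range(SWARM)]
for _ in range(ITERS):
    for i, path in enumerate(swarm):
        cand = [(x + random.gauss(0, 0.3), y + random.gauss(0, 0.3))
                for x, y in path]
        if energy(cand) < energy(path):    # greedy local move per particle
            swarm[i] = cand

best = min(swarm, key=energy)
print(f"best path energy: {energy(best):.2f}")
```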
Abstract:
Distributed generation (DG) resources are commonly used in electric power systems, and one of their recognised benefits is the reduction of line losses in radial distribution systems. Studies have shown the importance of appropriately selecting the location and size of DG units. This paper proposes an analytical method for solving the optimal distributed generation placement (ODGP) problem to minimize line losses in radial distribution systems, using a loss sensitivity factor (LSF) based on the bus-injection to branch-current (BIBC) matrix. The proposed method is formulated and tested on 12-bus and 34-bus radial distribution systems. A classical grid search algorithm based on successive load flows is employed to validate the results. The main advantages of the proposed method over conventional methods are its robustness and the fact that it does not require calculating and inverting large admittance or Jacobian matrices; consequently, the simulation time and the amount of computer memory required for processing data decrease, especially for large systems.
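The core of the BIBC idea fits in a few lines: branch currents are a linear map of bus injection currents, so losses and loss sensitivities follow from matrix products without inverting admittance or Jacobian matrices. The sketch below uses an illustrative 4-bus feeder with made-up values, not the paper's test systems.

```python
# Small sketch of the BIBC (bus-injection to branch-current) idea on a toy
# 4-bus radial feeder: branch currents are a linear map of bus injection
# currents, so losses follow from matrix products with no admittance or
# Jacobian inversion. Impedances and loads are illustrative values.
import numpy as np

# Feeder: bus0 -b0- bus1 -b1- bus2 -b2- bus3 (bus0 is the source).
# BIBC[k, j] = 1 if the injection at bus j+1 flows through branch k.
BIBC = np.array([[1, 1, 1],
                 [0, 1, 1],
                 [0, 0, 1]], dtype=float)
R = np.array([0.3, 0.25, 0.2])          # branch resistances (ohm)
I_inj = np.array([0.8, 0.5, 0.6])       # load currents at buses 1..3 (A)

def losses(inj):
    I_branch = BIBC @ inj               # branch currents from injections
    return float(R @ I_branch**2)       # sum of R * I^2 over branches

base = losses(I_inj)
print(f"base losses: {base:.4f} W")

# Loss sensitivity factor per bus: dP_loss/dI_inj, by finite difference.
eps = 1e-6
for j in range(3):
    d = np.zeros(3); d[j] = eps
    lsf = (losses(I_inj + d) - base) / eps
    print(f"bus {j + 1}: LSF = {lsf:.3f}")
```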
Abstract:
In this paper, a load profile and operational goals are used to find the optimal sizing of combined PV-energy storage for a future grid-connected residential building. As part of this approach, five operational goals are introduced, and the annual cost under each goal is assessed. Finally, the optimal sizing of the combined PV-energy storage system is determined using a direct search method. In addition, the sensitivity of the annual cost to different parameters is analyzed.
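A direct search in this setting can be as simple as an exhaustive scan over a grid of PV and battery sizes. The sketch below uses a placeholder annual-cost model with made-up prices; the actual assessment would simulate the load profile and tariff under each operational goal.

```python
# Sketch of a direct (exhaustive grid) search over combined PV and battery
# sizes, minimizing a placeholder annual-cost model. All prices and the
# self-consumption model are illustrative assumptions.
CAP_COST_PV, CAP_COST_BATT = 120.0, 90.0     # annualised $/kW and $/kWh
TARIFF, ANNUAL_LOAD_KWH = 0.30, 6000.0       # illustrative values

def annual_cost(pv_kw, batt_kwh):
    # Placeholder: self-consumption rises with PV and battery size, with
    # diminishing returns, offsetting energy bought from the grid.
    self_supplied = min(ANNUAL_LOAD_KWH,
                        1000.0 * pv_kw * min(1.0, 0.5 + batt_kwh / 20.0))
    grid_bill = (ANNUAL_LOAD_KWH - self_supplied) * TARIFF
    return CAP_COST_PV * pv_kw + CAP_COST_BATT * batt_kwh + grid_bill

best = min(
    ((pv / 2, b) for pv in range(0, 21) for b in range(0, 21)),
    key=lambda s: annual_cost(*s),
)
print(f"PV {best[0]:.1f} kW, battery {best[1]} kWh, "
      f"cost ${annual_cost(*best):.0f}/yr")
```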
Abstract:
In private placement transactions, issuing firms sell a block of securities to just a small group of investors at a discounted price. Non-participating shareholders suffer from ownership dilution and lose the opportunity to receive the discount. This thesis provides the first evidence on whether and how corporate governance can protect non-participating shareholders' interests. Results from an examination of 329 private placements issued by the top 250 Australian firms between 2002 and 2009 demonstrate that firms with higher governance quality are more likely to issue a share purchase plan (SPP) along with the private placement, thus providing greater protection to non-participating shareholders' interests.
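The implied relationship, governance quality versus the probability of issuing an SPP alongside a placement, is the kind of question a logistic regression addresses. The sketch below uses fabricated data and an assumed true relation, not the thesis sample of 329 placements.

```python
# Illustrative only: logistic regression of SPP issuance on a governance
# score, mirroring the kind of analysis the thesis describes. The data
# below are fabricated, not the placements studied.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
governance = rng.uniform(0, 10, size=300)            # governance quality score
p_spp = 1 / (1 + np.exp(-(0.6 * governance - 3)))    # assumed true relation
issued_spp = rng.binomial(1, p_spp)                  # 1 = SPP issued

model = LogisticRegression().fit(governance.reshape(-1, 1), issued_spp)
print(f"governance coefficient: {model.coef_[0][0]:.2f}")
```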
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions with the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that would result if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question of the optimality of the CE principle. We show that CE is, in fact, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
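A minimal sketch of CE under an input constraint, for a scalar system with illustrative parameters (not an example from the book): a Kalman filter estimates the state, and the saturated deterministic feedback law is applied to the estimate as if it were exact.

```python
# Minimal sketch of certainty equivalence (CE) for a scalar system with an
# input constraint: estimate the state with a Kalman filter, then apply the
# deterministic control law to the estimate as if it were the true state.
# System, noise levels, and the gain are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
a, b, c = 0.9, 1.0, 1.0          # dynamics x+ = a x + b u + w, y = c x + v
Q, Rn, U_MAX = 0.1, 0.2, 0.5     # noise variances and input bound
K = 0.6                          # deterministic feedback gain (e.g., from LQR)

x, xhat, P = 1.5, 0.0, 1.0
for t in range(20):
    u = float(np.clip(-K * xhat, -U_MAX, U_MAX))     # CE law, saturated
    x = a * x + b * u + rng.normal(0, Q**0.5)        # true (unknown) state
    y = c * x + rng.normal(0, Rn**0.5)               # noisy measurement
    # Kalman filter: predict, then correct with the measurement.
    xhat_p = a * xhat + b * u
    P_p = a * P * a + Q
    L = P_p * c / (c * P_p * c + Rn)
    xhat = xhat_p + L * (y - c * xhat_p)
    P = (1 - L * c) * P_p
print(f"final estimate {xhat:.3f}, true state {x:.3f}")
```

The point of the example is the structure: estimation and control are designed separately, and the constraint is handled simply by saturating the CE input, which is exactly the strategy whose optimality the chapter then examines.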
Abstract:
We address the problem of finite horizon optimal control of discrete-time linear systems with input constraints and uncertainty. The uncertainty for the problem analysed is related to incomplete state information (output feedback) and stochastic disturbances. We analyse the complexities associated with finding optimal solutions. We also consider two suboptimal strategies that could be employed for larger optimization horizons.
Abstract:
This paper addresses the problem of determining optimal designs for biological process models with intractable likelihoods, with the goal of parameter inference. The Bayesian approach is to choose the design that maximises the expectation of a utility, where the utility is a function of the posterior distribution; its estimation therefore requires likelihood evaluations. However, many problems in experimental design involve models with intractable likelihoods, that is, likelihoods that are neither analytically available nor computable in a reasonable amount of time. We propose a novel solution using indirect inference (II), a well-established method in the literature, together with the Markov chain Monte Carlo (MCMC) algorithm of Müller et al. (2004). Indirect inference employs an auxiliary model with a tractable likelihood in conjunction with the generative model, the assumed true model of interest, which has an intractable likelihood. Our approach is to estimate a map between the parameters of the generative and auxiliary models, using simulations from the generative model. An II posterior distribution is then formed to expedite utility estimation. We also present a modification to the utility that allows the Müller algorithm to sample from a substantially sharpened utility surface with little computational effort. Unlike competing methods, the II approach can handle complex design problems for models with intractable likelihoods on a continuous design space, with possible extension to many observations. The methodology is demonstrated using two stochastic models: a simple, tractable death process used to validate the approach, and a motivating stochastic model for the population evolution of macroparasites.
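The sketch below illustrates the simulation-based design loop on the tractable death-process validation case, where the binomial likelihood playing the role of the auxiliary model happens to be exact; for an intractable generative model, the auxiliary likelihood would instead be fitted from simulations. The prior, grid, and utility here are illustrative choices, not the paper's.

```python
# Toy sketch of simulation-based design for a death process: choose the
# observation time t that maximizes an expected utility (here, negative
# posterior variance of the death rate), with a tractable binomial
# likelihood standing in as the auxiliary model. Illustrative choices only.
import numpy as np

rng = np.random.default_rng(5)
N0 = 50                                        # initial population size
theta_grid = np.linspace(0.1, 2.0, 200)        # prior support (uniform prior)

def expected_utility(t, n_sims=300):
    utils = []
    for _ in range(n_sims):
        theta = rng.uniform(0.1, 2.0)          # draw from the prior
        survivors = rng.binomial(N0, np.exp(-theta * t))  # simulate process
        # Auxiliary binomial likelihood over the theta grid.
        p = np.exp(-theta_grid * t)
        like = p**survivors * (1 - p)**(N0 - survivors)
        post = like / like.sum()
        mean = (theta_grid * post).sum()
        var = ((theta_grid - mean)**2 * post).sum()
        utils.append(-var)                     # utility: precise posterior
    return np.mean(utils)

designs = np.linspace(0.2, 3.0, 15)
best = max(designs, key=expected_utility)
print(f"best observation time: {best:.2f}")
```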
Abstract:
Bayesian experimental design is a fast-growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which draw on a number of algorithms, such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayesian methods, enabling more complex design problems to be solved. The Bayesian framework provides a unified approach for combining prior information and/or uncertainties regarding the statistical model with a utility function that describes the experimental aims. In this paper, we provide a general overview of the concepts involved in Bayesian experimental design, focusing on some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms used to search over the design space to find the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.
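Concretely, the object at the centre of this framework is the expected utility of a design, written out below in standard notation (assumed here, not quoted from the paper); simulation-based methods approximate it with prior-predictive draws.

```latex
% Expected utility of a design d under prior p(\theta) and model p(y \mid \theta, d):
U(d) = \int\!\!\int u(d, y, \theta)\, p(y \mid \theta, d)\, p(\theta)\, \mathrm{d}y\, \mathrm{d}\theta .
% Monte Carlo estimator from prior-predictive draws (\theta^{(i)}, y^{(i)}):
\hat{U}(d) = \frac{1}{M} \sum_{i=1}^{M} u\bigl(d, y^{(i)}, \theta^{(i)}\bigr),
\qquad \theta^{(i)} \sim p(\theta), \quad y^{(i)} \sim p(y \mid \theta^{(i)}, d).
% The Bayesian optimal design maximizes this over the design space:
d^{*} = \arg\max_{d \in \mathcal{D}} U(d).
```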