382 results for IMAGING TECHNIQUES
Abstract:
The main aim of this thesis is to analyse and optimise a public hospital Emergency Department. The Emergency Department (ED) is a complex system with limited resources and a high demand for those resources. Adding to the complexity is the stochastic nature of almost every element and characteristic in the ED. Interaction with other functional areas also complicates the system, as these areas have a huge impact on the ED and the ED is powerless to change them. It is therefore imperative that operations research (OR) be applied to the ED to improve performance within the constraints of the system. The main characteristics of the system to optimise were tardiness, adherence to waiting time targets, access block and length of stay. A validated and verified simulation model was built to model the real-life system. This enabled detailed analysis of resources and flow without disruption to the actual ED. A wide range of policies for the ED and a variety of resources could be investigated. Of particular interest were the number and type of beds in the ED and the shift times of physicians. Notably, neither of these resources works in isolation; to optimise the system, both need to be investigated in tandem. The ED was likened to a flow shop scheduling problem, with patients and beds synonymous with the jobs and machines typically found in manufacturing problems. This enabled an analytic scheduling approach. Constructive heuristics were developed to reactively schedule the system in real time, and these were able to improve the performance of the system. Metaheuristics that optimised the system were also developed and analysed. An innovative hybrid Simulated Annealing and Tabu Search algorithm was developed that out-performed both the simulated annealing and tabu search algorithms by combining some of their features. The new algorithm reaches a better solution, and does so in a shorter time.
Abstract:
In Chapter 10, Adam and Dougherty describe the application of medical image processing to the assessment and treatment of spinal deformity, with a focus on the surgical treatment of idiopathic scoliosis. The natural history of spinal deformity and current approaches to surgical and non-surgical treatment are briefly described, followed by an overview of current clinically used imaging modalities. The key metrics currently used to assess the severity and progression of spinal deformities from medical images are presented, followed by a discussion of the errors and uncertainties involved in manual measurements. This provides the context for an analysis of automated and semi-automated image processing approaches to measure spinal curve shape and severity in two and three dimensions.
Abstract:
Nowadays, business process management is an important approach for managing organizations from an operational perspective. As a consequence, it is common to see organizations develop collections of hundreds or even thousands of business process models. Such large collections of process models bring new challenges and provide new opportunities, as the knowledge that they encapsulate requires to be properly managed. Therefore, a variety of techniques for managing large collections of business process models is being developed. The goal of this paper is to provide an overview of the management techniques that currently exist, as well as the open research challenges that they pose.
Abstract:
The exhibition consists of a series of 9 large-scale cotton rag prints, printed from digital files, and a sound and picture animation on DVD composed of drawings, sound, analogue and digital photographs, and Super 8 footage. The exhibition represents the artist’s experience of Singapore during her residency. Source imagery was gathered from photographs taken at the Bukit Brown abandoned Chinese Cemetery in Singapore, and Australian native gardens in Parkville Melbourne. Historical sources include re-photographed Singapore 19th and early 20th century postcard images. The works use analogue, hand-drawn and digital imaging, still and animated, to explore the digital interface’s ability to combine mixed media. This practice stems from the digital imaging practice of layering, using various media editing software. The work is innovative in that it stretches the idea of the layer composition in a single image by setting each layer into motion using animation techniques. This creates a multitude of permutations and combinations as the two layers move in different rhythmic patterns. The work also represents an innovative collaboration between the photographic practitioner and a sound composer, Duncan King-Smith, who designed sound for the animation based on concepts of trance, repetition and abstraction. As part of the Art ConneXions program, the work travelled to numerous international venues including: Space 217 Singapore, RMIT Gallery Melbourne, National Museum Jakarta, Vietnam Fine Arts Museum Hanoi, and ifa (Institut fur Auslandsbeziehungen) Gallery in both Stuttgart and Berlin.
Abstract:
Open pit mine operations are complex businesses that demand a constant assessment of risk. This is because the value of a mine project is typically influenced by many underlying economic and physical uncertainties, such as metal prices, metal grades, costs, schedules, quantities, and environmental issues, among others, which are not known with much certainty at the beginning of the project. Hence, mining projects present a considerable challenge to those involved in associated investment decisions, such as the owners of the mine and other stakeholders. In general terms, when an option exists to acquire a new or operating mining project, the owners and stockholders of the mine project need to know the value of the mining project, which is the fundamental criterion for making final decisions about going ahead with the venture. However, obtaining the mine project’s value is not an easy task. The reason for this is that sophisticated valuation and mine optimisation techniques, which combine advanced theories in geostatistics, statistics, engineering, economics and finance, among others, need to be used by the mine analyst or mine planner in order to assess and quantify the existing uncertainty and, consequently, the risk involved in the project investment. Furthermore, current valuation and mine optimisation techniques do not complement each other. That is, valuation techniques based on real options (RO) analysis assume an expected (constant) metal grade and ore tonnage during a specified period, while mine optimisation (MO) techniques assume expected (constant) metal prices and mining costs. These assumptions are not totally correct, since both sources of uncertainty (that of the orebody, i.e. metal grade and mineral reserves, and that of the future behaviour of metal prices and mining costs) have the greatest impact on the value of any mining project. Consequently, the key objective of this thesis is twofold.
The first objective consists of analysing and understanding the main sources of uncertainty in an open pit mining project, such as the orebody (in situ metal grade), mining costs and metal price uncertainties, and their effect on the final project value. The second objective consists of breaking down the wall of isolation between economic valuation and mine optimisation techniques in order to generate a novel open pit mine evaluation framework called the "Integrated Valuation/Optimisation Framework" (IVOF). One important characteristic of this new framework is that it incorporates the RO and MO valuation techniques into a single integrated process that quantifies and describes uncertainty and risk in a mine project evaluation process, giving a more realistic estimate of the project’s value. To achieve this, novel and advanced engineering and econometric methods are used to integrate financial and geological uncertainty into dynamic risk forecasting measures. The proposed mine valuation/optimisation technique is then applied to a real gold disseminated open pit mine deposit to estimate its value in the face of orebody, mining costs and metal price uncertainties.
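The thesis's RO machinery is not detailed in this abstract. As a hedged illustration of the general idea (valuing the option to defer development under metal-price uncertainty), a standard Cox-Ross-Rubinstein binomial lattice can be sketched; the project numbers and the linear payoff are invented for the example:

```python
import math

def defer_option_value(price, volatility, rate, years, steps, payoff):
    """Value an American-style option to develop a project on a binomial
    lattice of metal prices (Cox-Ross-Rubinstein parameterisation)."""
    dt = years / steps
    u = math.exp(volatility * math.sqrt(dt))   # up factor per step
    d = 1 / u                                  # down factor per step
    p = (math.exp(rate * dt) - d) / (u - d)    # risk-neutral up probability
    disc = math.exp(-rate * dt)
    # project values at the terminal price nodes
    values = [payoff(price * u**j * d**(steps - j)) for j in range(steps + 1)]
    # roll back through the lattice, allowing development at every node
    for i in range(steps - 1, -1, -1):
        values = [max(payoff(price * u**j * d**(i - j)),
                      disc * (p * values[j + 1] + (1 - p) * values[j]))
                  for j in range(i + 1)]
    return values[0]

# Hypothetical project: NPV at metal price s is 1.2*s - 100, floored at 0.
v = defer_option_value(price=100, volatility=0.3, rate=0.05,
                       years=2, steps=200,
                       payoff=lambda s: max(1.2 * s - 100, 0.0))
```

The point of the framework described above is precisely that the constant-grade assumption hidden in `payoff` here is what the IVOF relaxes by coupling this kind of lattice to orebody uncertainty.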
Abstract:
The existing Collaborative Filtering (CF) technique that has been widely applied by e-commerce sites requires a large amount of ratings data to make meaningful recommendations. It is not directly applicable for recommending products that are not frequently purchased by users, such as cars and houses, as it is difficult to collect rating data for such products from the users. Many of the e-commerce sites for infrequently purchased products are still using basic search-based techniques whereby the products that match the attributes given in the target user's query are retrieved and recommended to the user. However, search-based recommenders cannot provide personalized recommendations. For different users, the recommendations will be the same if they provide the same query, regardless of any difference in their online navigation behaviour. This paper proposes to integrate collaborative filtering and search-based techniques to provide personalized recommendations for infrequently purchased products. Two different techniques are proposed, namely CFRRobin and CFAg Query. Instead of using the target user's query to search for products as normal search-based systems do, the CFRRobin technique uses the products in which the target user's neighbours have shown interest as queries to retrieve relevant products, and then recommends to the target user a list of products by merging and ranking the returned products using the Round Robin method. The CFAg Query technique uses the products that the user's neighbours have shown interest in to derive an aggregated query, which is then used to retrieve products to recommend to the target user. Experiments conducted on a real e-commerce dataset show that both the proposed techniques, CFRRobin and CFAg Query, perform better than the standard Collaborative Filtering (CF) and the Basic Search (BS) approaches that are widely applied by current e-commerce applications. The CFRRobin and CFAg Query approaches also outperform the existing query expansion (QE) technique that was proposed for recommending infrequently purchased products.
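The round-robin merging step of CFRRobin can be sketched as follows; this is a simplification that assumes the neighbour-selection and retrieval stages have already produced one ranked product list per neighbour query:

```python
def round_robin_merge(ranked_lists, limit=10):
    """Merge ranked product lists (one per neighbour query) by taking the
    top remaining item from each list in turn, skipping duplicates."""
    merged, seen = [], set()
    cursors = [0] * len(ranked_lists)
    while len(merged) < limit:
        progressed = False
        for i, items in enumerate(ranked_lists):
            while cursors[i] < len(items) and items[cursors[i]] in seen:
                cursors[i] += 1          # skip products already recommended
            if cursors[i] < len(items):
                merged.append(items[cursors[i]])
                seen.add(items[cursors[i]])
                cursors[i] += 1
                progressed = True
                if len(merged) == limit:
                    break
        if not progressed:               # all lists exhausted
            break
    return merged

# Three neighbours' retrieved lists for a hypothetical car search:
recs = round_robin_merge([["car_a", "car_b"],
                          ["car_b", "car_c"],
                          ["car_d"]], limit=4)
# recs == ["car_a", "car_b", "car_d", "car_c"]
```

Interleaving lists this way gives each neighbour's top results equal weight in the final ranking, which is the intuition behind the Round Robin method named above.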
Abstract:
The application of computer-aided design and manufacturing (CAD/CAM) techniques in the clinic is growing slowly but steadily. The ability to build patient-specific models based on medical imaging data offers major potential. In this work we report on the feasibility of employing laser scanning with CAD/CAM techniques to aid in breast reconstruction. A patient was imaged with laser scanning, an economical and facile method for creating an accurate digital representation of the breasts and surrounding tissues. The obtained model was used to fabricate a customized mould that was employed as an intra-operative aid for the surgeon performing autologous tissue reconstruction of the breast removed due to cancer. Furthermore, a solid breast model was derived from the imaged data and digitally processed for the fabrication of customized scaffolds for breast tissue engineering. To this end, a novel generic algorithm for creating porosity within a solid model was developed, using a finite element model as intermediate.
Abstract:
Background: Access to cardiac services is essential for appropriate implementation of evidence-based therapies to improve outcomes. The Cardiac Accessibility and Remoteness Index for Australia (Cardiac ARIA) aimed to derive an objective, geographic measure reflecting access to cardiac services. Methods: An expert panel defined an evidence-based clinical pathway. Using Geographic Information Systems (GIS), a numeric/alpha index was developed at two points along the continuum of care. The acute category (numeric) measured the time from the emergency call to arrival at an appropriate medical facility via road ambulance. The aftercare category (alpha) measured access to four basic services (family doctor, pharmacy, cardiac rehabilitation, and pathology services) when a patient returned to their community. Results: The numeric index ranged from 1 (access to a principal referral center with a cardiac catheterization service ≤ 1 hour) to 8 (no ambulance service, > 3 hours to a medical facility, air transport required). The alphabetic index ranged from A (all 4 services available within a 1-hour drive time) to E (no services available within 1 hour). 13.9 million Australians (71%) resided within Cardiac ARIA 1A locations (hospital with cardiac catheterization laboratory and all aftercare within 1 hour). Those outside Cardiac ARIA 1A locations were over-represented by people aged over 65 years (32%) and Indigenous people (60%). Conclusion: The Cardiac ARIA index demonstrated substantial inequity in access to cardiac services in Australia. This methodology can be used to inform cardiology health service planning, and it could be applied to other common disease states in other regions of the world.
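The alphabetic aftercare index lends itself to a one-line classifier. Note that the abstract defines only the endpoints (A = all four services within 1 hour, E = none); the assumption below that B-D step down one service at a time is illustrative, not taken from the paper:

```python
def aftercare_index(services_within_1h):
    """Map the number of the four basic aftercare services (family doctor,
    pharmacy, cardiac rehabilitation, pathology) reachable within a 1-hour
    drive to the alphabetic Cardiac ARIA category. A = all four, E = none;
    the intermediate grades are assumed to drop one service at a time."""
    if not 0 <= services_within_1h <= 4:
        raise ValueError("expected a count between 0 and 4")
    return "EDCBA"[services_within_1h]

aftercare_index(2)  # a community with only two of the four services
```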
Abstract:
This paper presents an experimental study that examines the accuracy of various information retrieval techniques for Web service discovery. The main goal of this research is to evaluate algorithms for semantic Web service discovery. The evaluation is comprehensively benchmarked using more than 1,700 real-world WSDL documents from the INEX 2010 Web Service Discovery Track dataset. For automatic search, we successfully use Latent Semantic Analysis and BM25 to perform Web service discovery. Moreover, we provide a linking analysis which automatically links possible atomic Web services to meet users' complex requirements. Our fusion engine recommends a final result to users. Our experiments show that linking analysis can improve the overall performance of Web service discovery. We also find that keyword-based search can quickly return results, but it has limitations in understanding users' goals.
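For context on the BM25 component mentioned above, a minimal Okapi BM25 scorer looks as follows; the tokenisation is naive whitespace splitting and the example documents are invented, so this is a sketch of the standard formula rather than the paper's pipeline:

```python
import math

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenised = [doc.lower().split() for doc in docs]
    n = len(tokenised)
    avgdl = sum(len(d) for d in tokenised) / n
    scores = []
    for doc in tokenised:
        score = 0.0
        for term in query.lower().split():
            df = sum(term in d for d in tokenised)   # document frequency
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            tf = doc.count(term)                     # term frequency in doc
            score += idf * tf * (k1 + 1) / (
                tf + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

docs = ["web service discovery with wsdl",
        "cooking recipes for pasta",
        "semantic web service matching"]
scores = bm25_scores("web service", docs)
# the two service-related documents outscore the unrelated one
```

The `k1` and `b` defaults are the commonly used values; the length normalisation term is what lets BM25 compare short and long WSDL documents fairly.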
Abstract:
A range of varying chromophore nitroxide free radicals and their nonradical methoxyamine analogues were synthesized and their linear photophysical properties examined. The presence of the proximate free radical masks the chromophore’s usual fluorescence emission, and these species are described as profluorescent. Two nitroxides incorporating anthracene and fluorescein chromophores (compounds 7 and 19, respectively) exhibited two-photon absorption (2PA) cross sections of approximately 400 GM (Goeppert-Mayer units) when excited at wavelengths greater than 800 nm. Both of these profluorescent nitroxides demonstrated low cytotoxicity toward Chinese hamster ovary (CHO) cells. Imaging colocalization experiments with the commercially available CellROX Deep Red oxidative stress monitor demonstrated good cellular uptake of the nitroxide probes. Sensitivity of the nitroxide probes to H2O2-induced damage was also demonstrated by both one- and two-photon fluorescence microscopy. These profluorescent nitroxide probes are potentially powerful tools for imaging oxidative stress in biological systems, and they essentially “light up” in the presence of certain species generated from oxidative stress. The high ratio of the fluorescence quantum yield between the profluorescent nitroxide species and their nonradical adducts provides the sensitivity required for measuring a range of cellular redox environments. Furthermore, their reasonable 2PA cross sections provide for the option of using two-photon fluorescence microscopy, which circumvents commonly encountered disadvantages associated with one-photon imaging such as photobleaching and poor tissue penetration.
Abstract:
This paper introduces the Weighted Linear Discriminant Analysis (WLDA) technique, based upon the weighted pairwise Fisher criterion, for the purposes of improving i-vector speaker verification in the presence of high intersession variability. By taking advantage of the speaker discriminative information that is available in the distances between pairs of speakers clustered in the development i-vector space, the WLDA technique is shown to provide an improvement in speaker verification performance over traditional Linear Discriminant Analysis (LDA) approaches. A similar approach is also taken to extend the recently developed Source Normalised LDA (SNLDA) into Weighted SNLDA (WSNLDA) which, similarly, shows an improvement in speaker verification performance in both matched and mismatched enrolment/verification conditions. Based upon the results presented within this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that both WLDA and WSNLDA are viable as replacement techniques to improve the performance of LDA and SNLDA-based i-vector speaker verification.
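The core of the weighted pairwise Fisher criterion is a between-class scatter matrix in which each pair of class (speaker) means is down-weighted as the pair grows further apart, so confusable pairs dominate the projection. A small sketch of that scatter computation follows; the erf-based weighting function is the one commonly used in the approximate pairwise accuracy criterion, and the toy "i-vectors" are invented:

```python
import math

def weighted_between_scatter(means, priors):
    """Weighted pairwise between-class scatter matrix.

    Each pair of class means contributes an outer product scaled by the
    class priors and by a weight that decays with the distance between
    the means, so close (confusable) speaker pairs are emphasised
    relative to unweighted LDA.
    """
    dim = len(means[0])
    sb = [[0.0] * dim for _ in range(dim)]
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            diff = [means[i][k] - means[j][k] for k in range(dim)]
            dist = math.sqrt(sum(d * d for d in diff))
            # erf-based weighting: approaches 1/(2*dist) for distant pairs
            w = math.erf(dist / (2 * math.sqrt(2))) / (2 * dist * dist)
            for r in range(dim):
                for c in range(dim):
                    sb[r][c] += priors[i] * priors[j] * w * diff[r] * diff[c]
    return sb

# Two close speakers and one distant one in a toy 2-D development space:
sb = weighted_between_scatter([[0.0, 0.0], [0.2, 0.0], [5.0, 0.0]],
                              [1 / 3] * 3)
```

A full WLDA implementation would then solve the generalised eigenproblem of this matrix against the within-class scatter to obtain the projection; that step is omitted here.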
Abstract:
A new system is described for estimating volume from a series of multiplanar 2D ultrasound images. Ultrasound images are captured using a personal computer video digitizing card, and an electromagnetic localization system is used to record the pose of the ultrasound images. The accuracy of the system was assessed by scanning four groups of ten cadaveric kidneys on four different ultrasound machines. Scan image planes were oriented either radially, in parallel, or slanted at 30° to the vertical. The cross-sectional images of the kidneys were traced using a mouse and the outline points transformed to 3D space using the Fastrak position and orientation data. Points on adjacent region-of-interest outlines were connected to form a triangle mesh, and the volume of the kidneys was estimated using the ellipsoid, planimetry, tetrahedral and ray tracing methods. There was little difference between the results for the different scan techniques or volume estimation algorithms, although, perhaps as expected, the ellipsoid results were the least precise. For radial scanning and ray tracing, the mean and standard deviation of the percentage errors for the four different machines were as follows: Hitachi EUB-240, −3.0 ± 2.7%; Tosbee RM3, −0.1 ± 2.3%; Hitachi EUB-415, 0.2 ± 2.3%; Acuson, 2.7 ± 2.3%.
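A tetrahedral method of the kind named above amounts to summing the signed volumes of tetrahedra formed by each mesh triangle and the origin (an application of the divergence theorem). A minimal version, assuming a closed mesh with consistently oriented faces (the paper's exact formulation may differ):

```python
def mesh_volume(vertices, faces):
    """Volume enclosed by a closed triangle mesh, as the sum of signed
    volumes of tetrahedra formed by each face and the origin."""
    total = 0.0
    for a, b, c in faces:
        ax, ay, az = vertices[a]
        bx, by, bz = vertices[b]
        cx, cy, cz = vertices[c]
        # scalar triple product a . (b x c) = 6 * signed tetrahedron volume
        total += (ax * (by * cz - bz * cy)
                  + ay * (bz * cx - bx * cz)
                  + az * (bx * cy - by * cx))
    return abs(total) / 6.0

# Unit tetrahedron with vertices at the origin and on the three axes;
# its volume is 1/6.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(1, 2, 3), (0, 2, 1), (0, 3, 2), (0, 1, 3)]
vol = mesh_volume(verts, faces)
```

Cancellation between signed terms is what makes this work for meshes built from traced outlines: triangles on opposite sides of the organ contribute with opposite signs.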
Abstract:
Ultrasound is used extensively in the field of medical imaging. In this paper, the basic principles of ultrasound are explained using ‘everyday’ physics. Topics include the generation of ultrasound, basic interactions with material and the measurement of blood flow using the Doppler effect.
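The blood-flow measurement mentioned above rests on one standard relation: the Doppler shift is Δf = 2·f0·v·cos(θ)/c, where f0 is the transmitted frequency, v the blood velocity, θ the beam-to-vessel angle and c the speed of sound in tissue. Rearranged for velocity (a routine sketch with textbook numbers, not an example from the paper):

```python
import math

SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, the usual soft-tissue assumption

def blood_velocity(doppler_shift_hz, transmit_freq_hz, beam_angle_deg):
    """Blood velocity (m/s) from the measured Doppler frequency shift."""
    return (doppler_shift_hz * SPEED_OF_SOUND_TISSUE) / (
        2.0 * transmit_freq_hz * math.cos(math.radians(beam_angle_deg)))

# A 2.6 kHz shift at a 4 MHz transmit frequency and a 60-degree beam angle:
v = blood_velocity(2600.0, 4e6, 60.0)
# v = 2600 * 1540 / (2 * 4e6 * 0.5), about 1.0 m/s
```

The cos(θ) term is why the beam angle must be recorded: at 90° the shift vanishes and the velocity becomes unmeasurable.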
Abstract:
Nineteen studies met the inclusion criteria. A skin temperature reduction of 5–15 °C, in accordance with the recent PRICE (Protection, Rest, Ice, Compression and Elevation) guidelines, was achieved using cold air, ice massage, crushed ice, cryotherapy cuffs, ice packs, and cold water immersion. There is evidence supporting the use and effectiveness of thermal imaging to assess skin temperature following the application of cryotherapy. Thermal imaging is a safe and non-invasive method of collecting skin temperature data. Although further research is required to structure specific guidelines and protocols, thermal imaging appears to be an accurate and reliable method of collecting skin temperature data following cryotherapy. Currently there is ambiguity regarding the optimal skin temperature reductions in a medical or sporting setting. However, this review highlights the ability of several different modalities of cryotherapy to reduce skin temperature.