327 results for Data Accuracy

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Publisher:

Abstract:

The high degree of variability and inconsistency in cash flow study usage by property professionals demands improvement in knowledge and processes. Until recently, limited research was being undertaken on the use of cash flow studies in property valuations, but the growing acceptance of this approach for major investment valuations has resulted in renewed interest in this topic. Studies on valuation variations identify data accuracy, model consistency and bias as major concerns. In cash flow studies there are practical problems with the input data and the consistency of the models. This study will refer to the recent literature and identify the major factors in model inconsistency and data selection. A detailed case study will be used to examine the effects of changes in structure and inputs. The key variable inputs will be identified and proposals developed to improve the selection process for these key variables. The variables will be selected with the aid of sensitivity studies, and alternative ways of quantifying the key variables will be explained. The paper recommends, with reservations, the use of probability profiles of the variables and the incorporation of these data in simulation exercises. The use of Monte Carlo simulation is demonstrated and the factors influencing the structure of the probability distributions of the key variables are outlined. This study relates to ongoing research into the functional performance of commercial property within an Australian Cooperative Research Centre.
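The Monte Carlo approach described above can be sketched in a few lines. The distributions and cash-flow parameters below (triangular profiles for rental growth and discount rate, a ten-year holding period, a notional starting rent) are purely illustrative assumptions, not values from the study:

```python
import random

def simulate_cash_flow_value(n_trials=10000, seed=42):
    # Hypothetical triangular distributions (low, high, mode) for two key
    # variables: rental growth and discount rate. Parameters are illustrative.
    random.seed(seed)
    initial_rent = 100_000.0  # assumed annual net rent
    years = 10
    values = []
    for _ in range(n_trials):
        growth = random.triangular(0.01, 0.05, 0.03)
        discount = random.triangular(0.07, 0.11, 0.09)
        # Present value of the growing rental stream over the holding period
        pv = sum(initial_rent * (1 + growth) ** t / (1 + discount) ** t
                 for t in range(1, years + 1))
        values.append(pv)
    values.sort()
    mean = sum(values) / n_trials
    p5, p95 = values[int(0.05 * n_trials)], values[int(0.95 * n_trials)]
    return mean, p5, p95
```

Triangular distributions are a common pragmatic choice here because valuers can usually be asked for low, most-likely and high estimates of each key variable, which map directly onto the distribution's parameters.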

Relevance:

70.00%

Publisher:

Abstract:

Introduction This study investigated the sensitivity of calculated stereotactic radiotherapy and radiosurgery doses to the accuracy of the beam data used by the treatment planning system. Methods Two sets of field output factors were acquired using fields smaller than approximately 1 cm², for inclusion in beam data used by the iPlan treatment planning system (Brainlab, Feldkirchen, Germany). One set of output factors was measured using an Exradin A16 ion chamber (Standard Imaging, Middleton, USA). Although this chamber has a relatively small collecting volume (0.007 cm³), measurements made in small fields using this chamber are subject to the effects of volume averaging, electronic disequilibrium and chamber perturbations. The second, more accurate, set of measurements was obtained by applying perturbation correction factors, calculated using Monte Carlo simulations according to a method recommended by Cranmer-Sargison et al. [1], to measurements made using a 60017 unshielded electron diode (PTW, Freiburg, Germany). A series of 12 sample patient treatments was used to investigate the effects of beam data accuracy on the resulting planned dose. These treatments, which involved 135 fields, were planned for delivery via static conformal arc and 3DCRT techniques, to targets ranging from prostates (up to 8 cm across) to meningiomas (usually more than 2 cm across) to arteriovenous malformations, acoustic neuromas and brain metastases (often less than 2 cm across). Isocentre doses were calculated for all of these fields using iPlan, and the results of using the two different sets of beam data were evaluated. Results While the isocentre doses for many fields are identical (difference = 0.0%), there is a general trend for the doses calculated using the data obtained from corrected diode measurements to exceed the doses calculated using the less-accurate Exradin ion chamber measurements (difference > 0.0%). There are several alarming outliers (circled in Fig. 1) where doses differ by more than 3%, in beams from sample treatments planned for volumes up to 2 cm across. Discussion and conclusions These results demonstrate that treatment planning dose calculations for SRT/SRS treatments can be substantially affected when beam data for fields smaller than approximately 1 cm² are measured inaccurately, even when treatment volumes are up to 2 cm across.

Relevance:

60.00%

Publisher:

Abstract:

Establishing a nationwide Electronic Health Record (EHR) system has become a primary objective for many countries around the world, including Australia, in order to improve the quality of healthcare while at the same time decreasing its cost. Doing so will require federating the large number of patient data repositories currently in use throughout the country. However, implementation of EHR systems is being hindered by several obstacles, among them concerns about data privacy and trustworthiness. Current IT solutions fail to satisfy patients' privacy desires and do not provide a trustworthiness measure for medical data. This thesis starts with the observation that existing EHR system proposals suffer from six serious shortcomings that affect patients' privacy and safety, and medical practitioners' trust in EHR data: accuracy and privacy concerns over linking patients' existing medical records; the inability of patients to control who accesses their private data; the inability to protect against inferences about patients' sensitive data; the lack of a mechanism for evaluating the trustworthiness of medical data; and the failure of current healthcare workflow processes to capture and enforce patients' privacy desires. Following an action research method, this thesis addresses the above shortcomings by firstly proposing an architecture for linking electronic medical records in an accurate and private way, where patients are given control over what information can be revealed about them. This is accomplished by extending the structure and protocols introduced in federated identity management to link a patient's EHR to his existing medical records by using pseudonym identifiers. Secondly, a privacy-aware access control model is developed to satisfy patients' privacy requirements.
The model is developed by integrating three standard access control models in a way that gives patients access control over their private data and ensures that legitimate uses of EHRs are not hindered. Thirdly, a probabilistic approach for detecting and restricting inference channels resulting from publicly-available medical data is developed to guard against indirect accesses to a patient’s private data. This approach is based upon a Bayesian network and the causal probabilistic relations that exist between medical data fields. The resulting definitions and algorithms show how an inference channel can be detected and restricted to satisfy patients’ expressed privacy goals. Fourthly, a medical data trustworthiness assessment model is developed to evaluate the quality of medical data by assessing the trustworthiness of its sources (e.g. a healthcare provider or medical practitioner). In this model, Beta and Dirichlet reputation systems are used to collect reputation scores about medical data sources and these are used to compute the trustworthiness of medical data via subjective logic. Finally, an extension is made to healthcare workflow management processes to capture and enforce patients’ privacy policies. This is accomplished by developing a conceptual model that introduces new workflow notions to make the workflow management system aware of a patient’s privacy requirements. These extensions are then implemented in the YAWL workflow management system.
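The Beta-reputation step of the trustworthiness model can be illustrated with a minimal sketch. The mapping from positive/negative feedback counts about a data source to a subjective-logic opinion (belief, disbelief, uncertainty) follows the standard Beta/subjective-logic formulation; the feedback counts and base rate below are hypothetical:

```python
def beta_opinion(r, s, base_rate=0.5, W=2.0):
    # Map r positive and s negative feedback items about a medical data
    # source into a subjective-logic opinion. W is the non-informative
    # prior weight (conventionally 2 for the Beta reputation system).
    total = r + s + W
    b = r / total  # belief
    d = s / total  # disbelief
    u = W / total  # uncertainty
    # Probability expectation: used as the trustworthiness score
    return b, d, u, b + base_rate * u

# Hypothetical source with 8 positive and 2 negative feedback items
b, d, u, trust = beta_opinion(8, 2)
```

The expectation value reduces to (r + 1)/(r + s + 2) for a base rate of 0.5, which is the familiar Beta-reputation score; opinions from multiple raters can then be fused before the score is computed.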

Relevance:

60.00%

Publisher:

Abstract:

Clinical information systems have become important tools in contemporary clinical patient care. However, there is a question of whether current clinical information systems are able to effectively support clinicians in decision-making processes. We conducted a survey to identify some of the decision-making issues related to the use of existing clinical information systems. The survey was conducted among the end users of the cardiac surgery unit, quality and safety unit, intensive care unit and clinical costing unit at The Prince Charles Hospital (TPCH). Based on the survey results and reviewed literature, it was identified that support from the current information systems for decision-making is limited. The survey results also showed that the majority of respondents considered the lack of data integration to be one of the major issues, followed by limited access to various databases, lack of time, and lack of efficient reporting and analysis tools. Furthermore, respondents pointed out that data quality is an issue, the three major data quality issues being lack of completeness, lack of consistency and lack of accuracy. Conclusion: Support from current clinical information systems for decision-making processes in cardiac surgery at this institution is limited, and this could be addressed by integrating the isolated clinical information systems.

Relevance:

60.00%

Publisher:

Abstract:

Background Australia has commenced public reporting and benchmarking of healthcare associated infections (HAIs), despite not having a standardised national HAI surveillance program. Annual hospital Staphylococcus aureus bloodstream (SAB) infection rates are released online, with other HAIs likely to be reported in the future. Although there are known differences between hospitals in Australian HAI surveillance programs, the effect of these differences on reported HAI rates is not known. Objective To measure agreement in HAI identification, classification, and calculation of HAI rates, and to investigate the influence of the characteristics of those undertaking surveillance on these outcomes. Methods A cross-sectional online survey exploring HAI surveillance practices was administered to infection prevention nurses who undertake HAI surveillance. Seven clinical vignettes describing HAI scenarios were included to measure agreement in HAI identification, classification, and calculation of HAI rates. Data on the characteristics of respondents were also collected. Three of the vignettes related to surgical site infection and four to bloodstream infection. Agreement levels for each of the vignettes were calculated. Using the Australian SAB definition, and the National Healthcare Safety Network definitions for other HAIs, we looked for an association between the proportion of correct answers and the respondents' characteristics. Results Ninety-two infection prevention nurses responded to the vignettes. One vignette demonstrated 100% agreement among respondents, whilst agreement for the other vignettes varied from 53% to 75%. Working in a hospital with more than 400 beds, working in a team, and State or Territory were each associated with a correct response for two of the vignettes. Those trained in surveillance were more likely to give a correct response, whilst those working part-time were less likely to respond correctly. Conclusion These findings reveal the need for further HAI surveillance support for those working part-time and in smaller facilities. They also confirm the need to improve the uniformity of HAI surveillance across Australian hospitals, and raise questions about the validity of current national comparisons of HAI SAB rates.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To compare, in patients with cancer and in healthy subjects, measured resting energy expenditure (REE) from traditional indirect calorimetry to a new portable device (MedGem) and predicted REE. DESIGN: Cross-sectional clinical validation study. SETTING: Private radiation oncology centre, Brisbane, Australia. SUBJECTS: Cancer patients (n = 18) and healthy subjects (n = 17) aged 37-86 y, with body mass indices ranging from 18 to 42 kg/m². INTERVENTIONS: Oxygen consumption (VO2) and REE were measured by VMax229 (VM) and MedGem (MG) indirect calorimeters in random order after a 12-h fast and 30-min rest. REE was also calculated from the MG without adjustment for nitrogen excretion (MGN) and estimated from Harris-Benedict prediction equations. Data were analysed using the Bland and Altman approach, based on a clinically acceptable difference between methods of 5%. RESULTS: The mean bias (MGN-VM) was -10% and limits of agreement were -42 to 21% for cancer patients; the mean bias was -5% with limits of -45 to 35% for healthy subjects. Less than half of the cancer patients (n = 7, 46.7%) and only a third (n = 5, 33.3%) of healthy subjects had measured REE by MGN within clinically acceptable limits of VM. Predicted REE showed a mean bias (HB-VM) of -5% for cancer patients and 4% for healthy subjects, with limits of agreement of -30 to 20% and -27 to 34%, respectively. CONCLUSIONS: Limits of agreement for the MG and Harris-Benedict equations compared to traditional indirect calorimetry were similar but wide, indicating poor clinical accuracy for determining the REE of individual cancer patients and healthy subjects.
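The Bland and Altman approach used here reduces to computing the mean difference (bias) between paired measurements and the 95% limits of agreement, conventionally bias ± 1.96 × SD of the differences. A minimal sketch, with hypothetical paired REE readings:

```python
def bland_altman(method_a, method_b):
    # Paired measurements of the same quantity by two methods.
    # Returns mean bias and the 95% limits of agreement (bias +/- 1.96 SD).
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5  # sample SD
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

In validation studies such as this one, the limits are then compared against the pre-specified clinically acceptable difference (5% here); wide limits indicate poor agreement for individual patients even when the mean bias is small.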

Relevance:

30.00%

Publisher:

Abstract:

With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirements of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods to improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used in describing the services, as well as the use of input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individually matched services should fully satisfy the user's requirements. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology is proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content of the Web Service Description Language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find hidden meanings of the query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirements of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase.
Once the feasibility of linking Web services has been checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum traversal cost. The third phase, system integration, integrates the results of the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
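The all-pairs shortest-path step of the link analysis phase can be sketched with the classic Floyd-Warshall algorithm. The abstract does not name the specific algorithm used, so this is an illustrative stand-in: services are graph nodes, and each feasible link carries a hypothetical traversal cost:

```python
def floyd_warshall(n, edges):
    # n: number of services (nodes); edges: dict {(u, v): cost} of feasible
    # directed links between services, as established in phase II.
    INF = float("inf")
    dist = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), cost in edges.items():
        dist[u][v] = min(dist[u][v], cost)
    # Relax every pair through every intermediate node k
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist  # dist[i][j] = minimum cost to compose service i -> j
```

For the modest composition sizes reported as optimal (10 to 15 candidate services), the O(n³) cost of Floyd-Warshall is negligible.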

Relevance:

30.00%

Publisher:

Abstract:

This paper deals with the problem of using data mining models in a real-world situation where the user cannot provide all the inputs with which the predictive model was built. A learning system framework, the Query Based Learning System (QBLS), is developed for improving the performance of predictive models in practice, where not all inputs are available for querying the system. An automatic feature selection algorithm called Query Based Feature Selection (QBFS) is developed for selecting features to obtain a balance between a minimal subset of features and maximal classification accuracy. The performance of the QBLS system and the QBFS algorithm is successfully demonstrated with a real-world application.
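The QBFS algorithm itself is not detailed in this abstract; a generic forward-selection sketch conveys the trade-off it targets between a small feature subset and high classification accuracy. The score function and stopping threshold below are placeholders, not the published algorithm:

```python
def greedy_feature_selection(features, score_fn, min_gain=0.01):
    # Forward selection: repeatedly add the feature that most improves the
    # score (e.g. cross-validated accuracy), stopping when the marginal
    # gain drops below min_gain. Generic sketch, not the QBFS algorithm.
    selected = []
    best = score_fn([])
    remaining = list(features)
    while remaining:
        gain, feat = max((score_fn(selected + [f]) - best, f)
                         for f in remaining)
        if gain < min_gain:
            break  # extra features no longer pay for themselves
        selected.append(feat)
        remaining.remove(feat)
        best += gain
    return selected, best
```

The `min_gain` threshold is what enforces the "relative minimum subset" side of the balance: features that add negligible accuracy are never selected, so fewer inputs need to be supplied at query time.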

Relevance:

30.00%

Publisher:

Abstract:

Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at the centimetre accuracy level in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is single-base RTK. In Australia there are several NRTK services operating in different states, and over 1000 single-base RTK systems supporting precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations with multiple frequencies, including modernised GPS, Galileo, GLONASS, and Compass, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated network RTK and single-base RTK systems, and of multiple GNSS constellations, for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:
• multiple GNSS constellations and multiple frequencies;
• large-scale, wide-area NRTK services with a network of networks;
• complex computation algorithms and processes;
• a greater part of the positioning process shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous user requests (reverse RTK).
These four challenges give rise to two major requirements for NRTK data processing: expandable computing power and scalable data sharing/transfer capability. This research explores new approaches to addressing these future NRTK challenges and requirements using Grid Computing facilities, in particular for large data processing burdens and complex computation algorithms.
A Grid Computing based NRTK framework is proposed in this research. It is a layered framework consisting of: 1) a client layer in the form of a Grid portal; 2) a service layer; and 3) an execution layer. The user's request is passed through these layers and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework was performed in a five-node Grid environment at QUT and on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open-source software was adopted to download real-time RTCM data from multiple reference stations over the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results preliminarily demonstrate the concept and functionality of the new Grid Computing based NRTK framework, whilst some aspects of the system's performance are yet to be improved in future work.

Relevance:

30.00%

Publisher:

Abstract:

The accuracy of data derived from linked-segment models depends on how well the system has been represented. Previous investigations describing the gait of persons with partial foot amputation did not account for the unique anthropometry of the residuum or the inclusion of a prosthesis and footwear in the model and, as such, are likely to have underestimated the magnitude of the peak joint moments and powers. This investigation determined the effect of inaccuracies in the anthropometric input data on the kinetics of gait. Toward this end, a geometric model was developed and validated to estimate body segment parameters of various intact and partial feet. These data were then incorporated into customized linked-segment models, and the kinetic data were compared with that obtained from conventional models. Results indicate that accurate modeling increased the magnitude of the peak hip and knee joint moments and powers during terminal swing. Conventional inverse dynamic models are sufficiently accurate for research questions relating to stance phase. More accurate models that account for the anthropometry of the residuum, prosthesis, and footwear better reflect the work of the hip extensors and knee flexors to decelerate the limb during terminal swing phase.
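A geometric model of the kind described might, for example, approximate a residuum segment as a frustum (truncated cone) to obtain body segment parameters for the linked-segment model. The solid-geometry formulae below are standard; the tissue density and segment dimensions are assumed illustrative values, not those of the validated model:

```python
import math

def frustum_segment_params(r1, r2, length, density=1100.0):
    # Approximate a segment (e.g. a partial-foot residuum) as a frustum
    # with end radii r1 and r2 (metres) and the given length (metres).
    # density (kg/m^3) is an assumed uniform soft-tissue value.
    vol = math.pi * length * (r1**2 + r1 * r2 + r2**2) / 3.0
    mass = density * vol
    # Centre-of-mass distance from the r1 end (reduces to length/4 for a
    # cone with apex at r2 = 0, and length/2 for a cylinder with r1 = r2).
    com = length * (r1**2 + 2 * r1 * r2 + 3 * r2**2) / (
        4.0 * (r1**2 + r1 * r2 + r2**2))
    return vol, mass, com
```

Segment mass, centre of mass and (by extension) moment of inertia computed this way feed directly into the inverse-dynamics equations, which is why errors in these anthropometric inputs propagate into the joint moments and powers, particularly during swing.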

Relevance:

30.00%

Publisher:

Abstract:

The paper analyses the expected value of OD volumes from probes under three error models: fixed error, error proportional to zone size, and error inversely proportional to zone size. To add realism to the analysis, real trip ODs in the Tokyo Metropolitan Region are synthesised. The results show that for small zone coding with an average radius of 1.1 km and a fixed measurement error of 100 m, an accuracy of 70% can be expected. The equivalent accuracy for medium zone coding with an average radius of 5 km would translate into a fixed error of approximately 300 m. As expected, small zone coding is more sensitive than medium zone coding, as the chances of the probe error envelope falling into adjacent zones are higher. For the same error radii, error proportional to zone size would deliver a higher level of accuracy. As over half (54.8%) of trip ends start or end at zones with an equivalent radius of ≤1.2 km, and only 13% of trip ends occur at zones with an equivalent radius of ≥2.5 km, measurement error that is proportional to zone size, such as that of a mobile phone, would deliver a higher level of accuracy. The synthesis of real ODs with different probe error characteristics has shown that an expected value of >85% is difficult to achieve for small zone coding with an average radius of 1.1 km. For most transport applications, an OD matrix at medium zone coding is sufficient for transport management. From this study it can be concluded that GPS, with an error range between 2 and 5 m, at medium zone coding (average radius of 5 km) would provide OD estimates greater than 90% of the expected value. However, for a typical mobile phone operating error range at medium zone coding, the expected value would be lower than 85%. This paper assumes transmission of one origin and one destination position from the probe. However, if multiple positions within the origin and destination zones were transmitted, map matching to the transport network could be performed, which would greatly improve the accuracy of the probe data.
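A toy model illustrates why accuracy collapses once the probe error envelope approaches the zone size. It assumes the trip end sits at the centroid of a circular zone and the measured position is uniformly distributed over the error disc, so it overstates accuracy relative to the paper's synthesis (real trip ends can lie near zone boundaries):

```python
def expected_zone_accuracy(zone_radius, error_radius):
    # Probability that a position measured with uniform error over a disc
    # of error_radius still falls inside a circular zone of zone_radius,
    # for a trip end at the zone centroid. Illustrative toy model only.
    if error_radius <= zone_radius:
        return 1.0  # the whole error disc lies inside the zone
    # Otherwise, the fraction of the error disc overlapping the zone
    return (zone_radius / error_radius) ** 2
```

Under this simplification, a 100 m error in a 1.1 km zone never leaves the zone, whereas an error radius twice the zone radius drops the expected accuracy to 25%, mirroring the paper's finding that small zones are far more sensitive to probe error.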

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a model to estimate travel time using cumulative plots. Three different cases are considered: i) case-Det, using only detector data; ii) case-DetSig, using detector data and signal controller data; and iii) case-DetSigSFR, using detector data, signal controller data and saturation flow rate. The performance of the model for different detection intervals is evaluated. It is observed that the detection interval is not critical if signal timings are available. Comparable accuracy can be obtained from a larger detection interval with signal timings or from a shorter detection interval without signal timings. The performance for case-DetSig and case-DetSigSFR is consistent, with accuracy generally above 95%, whereas case-Det is highly sensitive to the signal phases in the detection interval and its performance is uncertain if the detection interval is an integral multiple of the signal cycle.
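The cumulative-plot idea can be sketched as follows: under a first-in-first-out assumption, the travel time of the N-th vehicle is the horizontal distance between the upstream and downstream cumulative count curves at height N. The detector curves below are illustrative piecewise-linear examples, not data from the paper:

```python
def travel_time(upstream, downstream, vehicle_number):
    # upstream, downstream: sorted lists of (time, cumulative_count) pairs
    # built from detector counts at the two ends of the link.
    def time_at_count(curve, n):
        # Linear interpolation of the time at which the curve reaches n
        for (t0, c0), (t1, c1) in zip(curve, curve[1:]):
            if c0 <= n <= c1 and c1 > c0:
                return t0 + (t1 - t0) * (n - c0) / (c1 - c0)
        raise ValueError("count outside curve range")
    # Horizontal distance between the curves at height vehicle_number
    return (time_at_count(downstream, vehicle_number)
            - time_at_count(upstream, vehicle_number))
```

Signal timings and saturation flow rate (cases DetSig and DetSigSFR) refine the shape of the downstream curve within each cycle, which is why those cases remain accurate even at coarse detection intervals.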

Relevance:

30.00%

Publisher:

Abstract:

Traffic congestion is an increasing problem with high costs in financial, social and personal terms. These costs include psychological and physiological stress, aggression and fatigue caused by lengthy delays, and an increased likelihood of road crashes. Reliable and accurate traffic information is essential for the development of traffic control and management strategies. Traffic information is mostly gathered from in-road vehicle detectors such as induction loops. The Traffic Message Channel (TMC) service is a popular service that wirelessly sends traffic information to drivers. Traffic probes have been used in many cities to increase the accuracy of traffic information. A simulation to estimate the number of probe vehicles required to increase the accuracy of traffic information in Brisbane is proposed. A meso-level traffic simulator has been developed to facilitate the identification of the optimal number of probe vehicles required to achieve an acceptable level of traffic reporting accuracy. Our approach to determining the optimal number of probe vehicles required to meet quality-of-service requirements is to simulate runs with varying numbers of traffic probes. The simulated traffic represents Brisbane's typical morning traffic. The road maps used in the simulation are Brisbane's TMC maps, complete with speed limits and traffic lights. Experimental results show that the optimal number of probe vehicles required to provide a useful supplement to TMC (induction loop) data lies between 0.5% and 2.5% of vehicles on the road. With fewer than 0.25% probes, little additional information is provided, while above 5%, adding more probes has only a negligible effect on accuracy. Our findings are consistent with ongoing research on traffic probes, and show the effectiveness of using probe vehicles to supplement induction loops for accurate and timely traffic information.

Relevance:

30.00%

Publisher:

Abstract:

Established Monte Carlo user codes BEAMnrc and DOSXYZnrc permit the accurate and straightforward simulation of radiotherapy experiments and treatments delivered from multiple beam angles. However, when an electronic portal imaging detector (EPID) is included in these simulations, treatment delivery from non-zero beam angles becomes problematic. This study introduces CTCombine, a purpose-built code for rotating selected CT data volumes, converting CT numbers to mass densities, combining the results with model EPIDs and writing output in a form which can easily be read and used by the dose calculation code DOSXYZnrc. The geometric and dosimetric accuracy of CTCombine’s output has been assessed by simulating simple and complex treatments applied to a rotated planar phantom and a rotated humanoid phantom and comparing the resulting virtual EPID images with the images acquired using experimental measurements and independent simulations of equivalent phantoms. It is expected that CTCombine will be useful for Monte Carlo studies of EPID dosimetry as well as other EPID imaging applications.
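The CT-number-to-mass-density conversion that CTCombine performs is typically implemented as a piecewise-linear calibration ramp. A sketch with illustrative calibration points (real values come from scanning a calibration phantom, and CTCombine's actual table is not given in this abstract):

```python
def hu_to_density(hu, calibration=((-1000, 0.001), (0, 1.0),
                                   (1000, 1.8), (3000, 2.9))):
    # Piecewise-linear conversion from CT number (HU) to mass density
    # (g/cm^3). Calibration points are illustrative assumptions:
    # air near -1000 HU, water at 0 HU, bone-like values above that.
    pts = sorted(calibration)
    if hu <= pts[0][0]:
        return pts[0][1]
    for (h0, d0), (h1, d1) in zip(pts, pts[1:]):
        if hu <= h1:
            # Interpolate within the segment containing hu
            return d0 + (d1 - d0) * (hu - h0) / (h1 - h0)
    return pts[-1][1]
```

Applying such a ramp voxel-by-voxel turns the rotated CT volume into the mass-density grid that DOSXYZnrc expects for dose calculation.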