938 results for 700100 Computer Software and Services
Abstract:
With the ever-increasing amount of eHealth data available from various eHealth systems and sources, Health Big Data Analytics promises enticing benefits, such as enabling the discovery of new treatment options and improved decision making. However, concerns over the privacy of information have hindered the aggregation of this information. To address these concerns, we propose the use of Information Accountability protocols to give patients the ability to decide how and when their data can be shared and aggregated for use in big data research. In this paper, we discuss the issues surrounding Health Big Data Analytics and propose a consent-based model that addresses privacy concerns and aids in achieving the promised benefits of Big Data in eHealth.
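The abstract does not specify a concrete interface for the consent-based model; the following is a minimal sketch, assuming a hypothetical ConsentPolicy record and records_for_study filter, of how a stored patient policy might gate which records enter a big data cohort:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    """Hypothetical patient consent record: which uses of the data are permitted."""
    patient_id: str
    allow_aggregation: bool = False                       # may records enter big-data cohorts?
    permitted_purposes: set = field(default_factory=set)  # e.g. {"treatment-discovery"}

def records_for_study(records, policies, purpose):
    """Keep only records whose owners consented to aggregation for this purpose."""
    return [rec for rec in records
            if (p := policies.get(rec["patient_id"]))
            and p.allow_aggregation
            and purpose in p.permitted_purposes]

# Toy usage: only the consenting patient's record reaches the research cohort.
policies = {"alice": ConsentPolicy("alice", True, {"treatment-discovery"}),
            "bob": ConsentPolicy("bob")}
records = [{"patient_id": "alice", "hba1c": 6.1}, {"patient_id": "bob", "hba1c": 7.4}]
print(records_for_study(records, policies, "treatment-discovery"))
```

The design point is that the sharing decision rests with the patient's stored policy rather than with the aggregating researcher.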
Abstract:
Concerns over the security and privacy of patient information are among the biggest hindrances to sharing health information and to the wide adoption of eHealth systems. At present, the requirements of healthcare consumers (i.e. patients) compete with those of healthcare professionals (HCPs): while consumers want control over their information, healthcare professionals want access to as much information as is required to make well-informed decisions and provide quality care. To balance these requirements, an Information Accountability Framework devised for eHealth systems has been proposed. In this paper, we take a step closer to the adoption of the Information Accountability protocols and demonstrate their functionality through an implementation in FluxMED, a customisable EHR system.
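The protocols themselves are not detailed in the abstract, and the code below is not FluxMED's API; as a rough illustration with hypothetical names, Information Accountability approaches typically pair access with a durable, auditable usage log rather than blocking access outright:

```python
import datetime

# Hypothetical audit structure: accountability favours after-the-fact review
# of how information was used over hard up-front access locks.
ACCESS_LOG = []

def accountable_read(record, hcp_id, stated_purpose):
    """Return the record, but leave a durable trace of who read it, when and why."""
    ACCESS_LOG.append({
        "record_id": record["id"],
        "accessed_by": hcp_id,
        "purpose": stated_purpose,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return record["data"]

# Toy usage: the HCP gets the data; the patient (or an auditor) can review the log later.
note = accountable_read({"id": "rec-42", "data": "allergy: penicillin"},
                        hcp_id="dr-jones", stated_purpose="emergency care")
print(note, ACCESS_LOG)
```

This is one way the competing requirements can be balanced: the HCP retains broad access, while the patient retains visibility into, and recourse over, how their information was used.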
Abstract:
This paper presents an overview of the issues in precisely defining, specifying and evaluating the dependability of software, particularly in the context of computer-controlled process systems. Dependability is intended as a generic term embodying various quality factors and is useful for both software and hardware. While developments in quality assurance and reliability theory have proceeded mostly in independent directions for hardware and software systems, we present here the case for a unified framework of dependability, a facet of the operational effectiveness of modern technological systems, and develop a hierarchical systems model that helps clarify this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software "bugs", the failure history of the software system in the various phases of its lifecycle, reliability growth in the development phase, estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process are all considered in varying degrees of detail. We also discuss the notion of software fault tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability and safety.
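As a concrete instance of the reliability-growth models such a survey covers, here is a minimal sketch of the Goel-Okumoto NHPP model (one standard choice, not necessarily the one emphasised in the paper), which gives the expected number of failures observed by test time t and, by subtraction, an estimate of the errors remaining:

```python
import math

def goel_okumoto_m(t, a, b):
    """Expected cumulative failures by test time t under the Goel-Okumoto model:
    m(t) = a * (1 - exp(-b * t)), with a the total expected fault content
    and b the per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def remaining_faults(t, a, b):
    """Estimated faults still latent after t units of testing."""
    return a - goel_okumoto_m(t, a, b)

# Illustrative parameters: 120 expected faults, detection rate 0.02 per hour.
print(round(goel_okumoto_m(100, 120, 0.02), 1))    # ~103.8 faults found by 100 h
print(round(remaining_faults(100, 120, 0.02), 1))  # ~16.2 faults estimated to remain
```

In practice a and b are fitted to the observed failure history (for example by maximum likelihood), which is the kind of operational-phase estimation problem the survey discusses.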
Abstract:
The development of innovative methods of stock assessment is a priority for State and Commonwealth fisheries agencies, driven by the need to facilitate sustainable exploitation of naturally occurring fisheries resources for the current and future economic, social and environmental well-being of Australia. This project was initiated in that context and took advantage of considerable recent achievements in genomics that are shaping our comprehension of the DNA of humans and animals. The basic idea behind the project was that genetic estimates of effective population size, which can be made from empirical measurements of genetic drift, are equivalent to estimates of the number of successful spawners, an important parameter in fisheries stock assessment. The broad objectives of this study were to:
1. critically evaluate a variety of mathematical methods of calculating effective spawner numbers (Ne), by (a) conducting comprehensive computer simulations and (b) analysing empirical data collected from the Moreton Bay population of tiger prawns (P. esculentus);
2. lay the groundwork for the application of the technology in the northern prawn fishery (NPF); and
3. produce software for the calculation of Ne and make it widely available.
The project pulled together a range of mathematical models for estimating current effective population size from diverse sources. Some had recently been implemented with the latest statistical methods (e.g. the Bayesian framework of Berthier, Beaumont et al. 2002), while others had lower profiles (e.g. Pudovkin, Zaykin et al. 1996; Rousset and Raymond 1995). Computer code, and later software with a user-friendly interface (NeEstimator), was produced to implement the methods. This served as the basis for simulation experiments that evaluated the performance of the methods with an individual-based model of a prawn population. Following the guidelines suggested by the computer simulations, the tiger prawn population in Moreton Bay (south-east Queensland) was sampled for genetic analysis at eight microsatellite loci in three successive spring spawning seasons (2001, 2002 and 2003). As predicted by the simulations, the estimates had non-infinite upper confidence limits, a major achievement for the application of the method to a naturally occurring, short-generation, highly fecund invertebrate species. The genetic estimate of the number of successful spawners was around 1,000 individuals in two consecutive years, in contrast with the roughly 500,000 prawns participating in spawning. Because it is not possible to distinguish successful from non-successful spawners, we suggest a high level of protection for the entire spawning population. We interpret the gap between the numbers of successful and non-successful spawners as reflecting large variation in the number of surviving offspring per family: many families have no surviving offspring, while a few have a great many. We explored various ways in which Ne can be useful in fisheries management. It can be a surrogate for spawning population size, assuming the ratio between Ne and spawning population size has previously been calculated for that species. Alternatively, it can be a surrogate for recruitment, again assuming that the ratio between Ne and recruitment has previously been determined. The number of species that can be analysed in this way, however, is likely to be small, because species-specific life history requirements must be satisfied for accuracy.
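The abstract does not reproduce the estimators, but a minimal sketch of one standard moment-based temporal method of the kind implemented in NeEstimator (Nei and Tajima's Fc with Waples' 1989 sampling correction) shows how Ne is recovered from genetic drift between temporally spaced samples:

```python
def nei_tajima_fc(p0, pt):
    """Standardised variance in allele frequency (Nei & Tajima's Fc) between two
    temporally spaced samples; p0 and pt are the allele frequencies at one locus
    in generations 0 and t."""
    terms = [(x - y) ** 2 / ((x + y) / 2 - x * y) for x, y in zip(p0, pt)]
    return sum(terms) / len(terms)

def temporal_ne(fc, t_gen, s0, st):
    """Moment-based temporal estimate of effective size (after Waples 1989):
    Ne = t / (2 * (Fc - 1/(2*S0) - 1/(2*St))), correcting Fc for the noise
    contributed by the finite sample sizes S0 and St."""
    drift = fc - 1.0 / (2 * s0) - 1.0 / (2 * st)
    return float("inf") if drift <= 0 else t_gen / (2 * drift)

# Toy two-allele locus sampled one generation apart, 100 prawns per sample.
fc = nei_tajima_fc([0.60, 0.40], [0.55, 0.45])
print(round(temporal_ne(fc, t_gen=1, s0=100, st=100)))
```

Note the infinite estimate returned when the observed drift does not exceed sampling noise; the non-infinite upper confidence limits reported above mean the drift signal in the Moreton Bay data cleared that floor.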
The most universal approach would be to integrate Ne with spawning stock-recruitment models, so that these models become more accurate when applied to fished populations. A pathway to achieving this was established in this project, and we predict it will significantly improve fisheries sustainability in the future. Regardless of the success of integrating Ne into spawning stock-recruitment models, Ne could be used as a fisheries monitoring tool: declines in spawning stock size, or increases in natural or harvest mortality, would be reflected in a declining Ne. This would be particularly valuable for data-poor fisheries and provides fishery-independent information; however, we suggest a species-by-species approach, since some species may be too numerous, or experience too much migration, for the method to work. During the project, two important theoretical studies of the simultaneous estimation of effective population size and migration were published (Vitalis and Couvet 2001b; Wang and Whitlock 2003). These methods, combined with preliminary genetic data collected from the tiger prawn population in the southern Gulf of Carpentaria and a computer simulation study that evaluated the effect of differing reproductive strategies on genetic estimates, suggest that this technology could make an important contribution to the stock assessment process in the northern prawn fishery (NPF). Advances in genomics are rapid, and a cheaper, more reliable substitute for microsatellite loci is already available: digital data from single nucleotide polymorphisms (SNPs) are likely to supersede 'analogue' microsatellite data, making it cheaper and easier to apply the method to species with large population sizes.
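As an illustration of the proposed integration (all numbers are toy values: the Ne-to-spawner ratio of roughly 1,000 to 500,000 is borrowed from the Moreton Bay result above, while the later-season Ne and the Ricker parameters are invented), a previously calibrated ratio lets a genetic Ne estimate act as a survey-free index of spawning stock inside a standard stock-recruitment model:

```python
import math

def ricker_recruitment(spawners, alpha, beta):
    """Ricker stock-recruitment curve: R = alpha * S * exp(-beta * S)."""
    return alpha * spawners * math.exp(-beta * spawners)

# Ne/spawner ratio calibrated from the Moreton Bay result (~1,000 of ~500,000).
NE_TO_SPAWNERS = 1000 / 500_000

# A later-season genetic estimate (invented value) back-calculates a spawner index...
spawners = 1200 / NE_TO_SPAWNERS
# ...which can then drive a recruitment prediction (alpha and beta invented too).
print(f"implied spawners ~ {spawners:,.0f}; "
      f"predicted recruits ~ {ricker_recruitment(spawners, 2.5, 1e-6):,.0f}")
```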
Abstract:
Genetic mark–recapture requires efficient methods of uniquely identifying individuals. 'Shadows' (individuals with the same genotype at the selected loci) become more likely with increasing sample size, and bias harvest rate estimates. Finding loci is costly, but better loci reduce analysis costs and improve power. Optimal microsatellite panels minimize shadows, but panel design is a complex optimization process. locuseater and shadowboxer permit power and cost analysis of this process and automate some aspects, by simulating the entire experiment from panel design to harvest rate estimation.
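A minimal sketch of the calculation at the heart of panel design, the per-locus probability of identity (after Paetkau and Strobeck 1994) multiplied across loci, shows how allele frequencies translate into an expected number of shadow pairs; the panel below is a toy example, not output from locuseater:

```python
from itertools import combinations
from math import prod

def pid_locus(freqs):
    """Probability that two random individuals share a genotype at one locus:
    sum(p_i^4) + sum over i<j of (2 * p_i * p_j)^2."""
    return (sum(p ** 4 for p in freqs)
            + sum((2 * pi * pj) ** 2 for pi, pj in combinations(freqs, 2)))

def expected_shadow_pairs(n, panel):
    """Expected shadow pairs among n individuals: n*(n-1)/2 pairs times the
    product of per-locus identity probabilities (independent loci assumed)."""
    return n * (n - 1) / 2 * prod(pid_locus(f) for f in panel)

# Toy panel: six loci, each with four equally frequent alleles.
panel = [[0.25] * 4] * 6
print(round(expected_shadow_pairs(5000, panel), 1))  # ~21 expected shadow pairs
```

Adding loci, or choosing loci with more numerous and more evenly distributed alleles, drives the product down; that trade-off between panel cost and shadow risk is what the programs explore.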
Abstract:
Forage budgeting, land condition monitoring and maintaining ground cover residuals are critical management practices for the long-term sustainability of the northern grazing industry. The aim of this project is to conduct a preliminary investigation of industry need, feasibility and willingness to adopt a simple-to-use hand-held hardware device with compatible, integrated software applications that producers can use in the paddock, to assist with land condition monitoring and forage budgeting for better Grazing Land Management and to assist with compliance.
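The report does not describe the applications' internals; as a hedged sketch of the kind of calculation such a paddock tool would support (all function names and numbers below are illustrative), a basic forage budget converts standing pasture, a ground-cover residual target and mob demand into safe grazing days:

```python
def grazing_days(standing_kg_dm_ha, paddock_ha, residual_kg_dm_ha,
                 head, intake_kg_dm_head_day):
    """Days a mob can graze before the paddock is eaten down to the
    ground-cover residual target (kg DM = kilograms of dry matter)."""
    available = max(0.0, standing_kg_dm_ha - residual_kg_dm_ha) * paddock_ha
    daily_demand = head * intake_kg_dm_head_day
    return available / daily_demand

# 2,000 kg DM/ha standing feed, 100 ha paddock, 1,000 kg DM/ha residual,
# 300 head eating 10 kg DM/head/day -> about 33 grazing days.
print(round(grazing_days(2000, 100, 1000, 300, 10)))
```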
Abstract:
Development of an internet-based spatial data delivery and reporting system for the Australian Cotton Industry.
Abstract:
Introduction: QC, EQA and method evaluation are integral to the delivery of quality patient results. To ensure QUT graduates have a solid grounding in these key areas of practice, a theory-to-practice approach is used to progressively develop and consolidate these skills.
Methods: Using a BCG assay for serum albumin, each student undertakes an eight-week project analysing two levels of QC alongside 'patient' samples. Results are assessed using both single rules and multirules. Concomitantly with the QC analyses, an EQA project is undertaken: students analyse two EQA samples, twice in the semester. Results are submitted using cloud software, and data for the full 'peer group' are returned to students in spreadsheets and incomplete Youden plots. The Youden plots are completed with target values and calculated ALP values, then analysed for 'lab' and method performance. The BCG method has a low-level positive bias, which leads to the need to investigate an alternative method. Building directly on the EQA of the first project, and using the scenario of a lab that services renal patients, students undertake a method validation comparing BCP and BCG assays in another eight-week project. Precision and patient-comparison studies allow students to assess whether the BCP method addresses the proportional bias of the BCG method and is, overall, a 'better' method for analysing serum albumin, accounting for pragmatic factors such as cost as well as performance characteristics.
Results: Students develop an understanding of the purpose and importance of QC and EQA in delivering quality results, of the need to optimise testing to deliver quality results and, importantly, a working knowledge of the analyses that go into ensuring this quality. In parallel with developing these key workplace competencies, students become confident, competent practitioners, able to pipette accurately and precisely and to organise themselves in a busy, time-pressured work environment.
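For readers unfamiliar with the rules mentioned under Methods, a minimal sketch of two common Westgard checks (the 1-3s single rule and the 2-2s multirule; the target and SD below are toy values, not the unit's actual settings) shows the kind of logic students apply to each QC run:

```python
def westgard_flags(values, mean, sd):
    """Check a run of QC results against two common Westgard rules:
    1-3s (any point beyond +/-3 SD) and 2-2s (two consecutive points
    beyond +/-2 SD on the same side of the mean)."""
    z = [(v - mean) / sd for v in values]
    flags = []
    if any(abs(s) > 3 for s in z):
        flags.append("1-3s violation: reject run")
    for a, b in zip(z, z[1:]):
        if abs(a) > 2 and abs(b) > 2 and a * b > 0:
            flags.append("2-2s violation: reject run")
            break
    return flags or ["in control"]

# Toy QC level: target 35.0 g/L albumin, SD 1.0 g/L.
print(westgard_flags([35.4, 37.3, 37.6, 34.8], mean=35.0, sd=1.0))
```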