919 results for automatic data entry


Relevance: 90.00%

Abstract:

There is currently a strong focus worldwide on the potential of large-scale Electronic Health Record (EHR) systems to cut costs and improve patient outcomes through increased efficiency. This is accomplished by aggregating medical data from isolated Electronic Medical Record databases maintained by different healthcare providers. The privacy and reliability of Electronic Health Records are crucial concerns for healthcare service consumers. Traditional security mechanisms are designed to satisfy confidentiality, integrity, and availability requirements, but they fail to provide a measurement tool for data reliability from a data entry perspective. In this paper, we introduce a Medical Data Reliability Assessment (MDRA) service model to assess the reliability of medical data by evaluating the trustworthiness of its sources, usually the healthcare provider that created the data and the medical practitioner who diagnosed the patient and authorised entry of this data into the patient’s medical record. The result is then expressed by manipulating health record metadata to alert medical practitioners relying on the information to possible reliability problems.
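
A minimal sketch of how such a source-based reliability score might be computed and written into record metadata; the weights, the 0–1 trust scale, and the field names are illustrative assumptions rather than the MDRA model itself:

```python
# Illustrative sketch: combine the trustworthiness of the healthcare provider
# and the authorising practitioner into one reliability score, then attach it
# to the record's metadata so downstream readers can be alerted.
# Weights, threshold and the 0-1 scale are assumptions for illustration only.

def reliability_score(provider_trust: float, practitioner_trust: float,
                      provider_weight: float = 0.4) -> float:
    """Weighted combination of source trust values, each in [0, 1]."""
    practitioner_weight = 1.0 - provider_weight
    return provider_weight * provider_trust + practitioner_weight * practitioner_trust

def annotate_record(record: dict, provider_trust: float, practitioner_trust: float,
                    alert_threshold: float = 0.6) -> dict:
    score = reliability_score(provider_trust, practitioner_trust)
    record.setdefault("metadata", {})["reliability_score"] = round(score, 3)
    record["metadata"]["reliability_alert"] = score < alert_threshold
    return record

record = {"patient_id": "P-001", "diagnosis": "hypertension"}
print(annotate_record(record, provider_trust=0.9, practitioner_trust=0.5))
# -> reliability_score 0.66, reliability_alert False (with the assumed weights)
```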

Relevance: 90.00%

Abstract:

Electronic Health Record (EHR) systems are being introduced to overcome the limitations associated with paper-based and isolated Electronic Medical Record (EMR) systems. This is accomplished by aggregating medical data and consolidating them in one digital repository. Though an EHR system provides obvious functional benefits, there is a growing concern about the privacy and reliability (trustworthiness) of Electronic Health Records. Security requirements such as confidentiality, integrity, and availability can be satisfied by traditional hard security mechanisms. However, measuring data trustworthiness from the perspective of data entry is an issue that cannot be solved with traditional mechanisms, especially since degrees of trust change over time. In this paper, we introduce a Time-variant Medical Data Trustworthiness (TMDT) assessment model to evaluate the trustworthiness of medical data by evaluating the trustworthiness of its sources, namely the healthcare organisation where the data was created and the medical practitioner who diagnosed the patient and authorised entry of this data into the patient’s medical record, with respect to a certain period of time. The result can then be used by the EHR system to manipulate health record metadata to alert medical practitioners relying on the information to possible reliability problems.
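
One way to make such a trust value time-variant is to let older assessments decay toward a neutral prior; the exponential decay, half-life, and neutral value below are illustrative assumptions, not the TMDT formulation:

```python
import math

# Illustrative sketch: trust assessed at some past time decays toward a
# neutral prior as it ages, so stale assessments count for less.
# The exponential form, half-life and neutral value are assumptions.

def time_variant_trust(assessed_trust: float, age_days: float,
                       half_life_days: float = 180.0, neutral: float = 0.5) -> float:
    """Blend an old trust assessment with a neutral prior as it ages."""
    weight = math.exp(-math.log(2) * age_days / half_life_days)
    return weight * assessed_trust + (1.0 - weight) * neutral

print(time_variant_trust(0.9, age_days=0))    # 0.9  (fresh assessment)
print(time_variant_trust(0.9, age_days=180))  # 0.7  (halfway back to neutral)
```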

Relevance: 90.00%

Abstract:

Queensland University of Technology (QUT) completed an Australian National Data Service (ANDS) funded “Seeding the Commons Project” to contribute metadata to Research Data Australia. The project employed two Research Data Librarians from October 2009 to July 2010. Technical support for the project was provided by QUT’s High Performance Computing and Research Support Specialists.

The project identified and described QUT’s category 1 (ARC / NHMRC) research datasets. Metadata for the research datasets was stored in QUT’s Research Data Repository (Architecta Mediaflux). Metadata suitable for inclusion in Research Data Australia was made available to the Australian Research Data Commons (ARDC) in RIF-CS format.

Several workflows and processes were developed during the project. 195 data interviews took place in connection with 424 separate research activities, resulting in the identification of 492 datasets.

The project had a high level of technical support from QUT’s High Performance Computing and Research Support Specialists, who developed the Research Data Librarian interface to the data repository that enabled manual entry of interview data and dataset metadata and the creation of relationships between repository objects. The Research Data Librarians mapped the QUT metadata repository fields to RIF-CS, and an application was created by the HPC and Research Support Specialists to generate RIF-CS files for harvest by the Australian Research Data Commons (ARDC).

This poster will focus on the workflows and processes established for the project, including:

• Interview processes and instruments
• Data ingest from existing systems (including mapping to RIF-CS)
• Data entry and the Data Librarian interface to Mediaflux
• Verification processes
• Mapping and creation of RIF-CS for the ARDC
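
As a rough illustration of the RIF-CS generation step described above, the sketch below serialises one dataset’s metadata into a minimal RIF-CS collection record; the group value, key scheme, and field mapping are assumptions for illustration, not QUT’s actual application:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: turn one dataset's repository metadata into a minimal
# RIF-CS collection record for harvest. The group value, key scheme and the
# exact set of elements emitted are assumptions for illustration only.
RIFCS_NS = "http://ands.org.au/standards/rif-cs/registryObjects"

def dataset_to_rifcs(dataset: dict) -> str:
    ET.register_namespace("", RIFCS_NS)
    root = ET.Element(f"{{{RIFCS_NS}}}registryObjects")
    obj = ET.SubElement(root, f"{{{RIFCS_NS}}}registryObject", {"group": "Example University"})
    ET.SubElement(obj, f"{{{RIFCS_NS}}}key").text = dataset["key"]
    ET.SubElement(obj, f"{{{RIFCS_NS}}}originatingSource").text = dataset["source"]
    coll = ET.SubElement(obj, f"{{{RIFCS_NS}}}collection", {"type": "dataset"})
    name = ET.SubElement(coll, f"{{{RIFCS_NS}}}name", {"type": "primary"})
    ET.SubElement(name, f"{{{RIFCS_NS}}}namePart").text = dataset["title"]
    ET.SubElement(coll, f"{{{RIFCS_NS}}}description", {"type": "full"}).text = dataset["description"]
    return ET.tostring(root, encoding="unicode")

print(dataset_to_rifcs({
    "key": "example.edu.au/dataset/42",
    "source": "https://repository.example.edu.au",
    "title": "Survey of coastal water quality",
    "description": "Water samples collected 2009-2010.",
}))
```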

Relevance: 90.00%

Abstract:

This paper presents work in progress on EatChaFood, a prototype app designed to increase user knowledge of the food currently available in the home and where it is stored, with a view to reducing expired household food waste. To reap the benefits that EatChaFood can provide, we explore ways to overcome manual data entry as a barrier to use. Our user study has to recognise the limitations of the prototype app while evaluating the interaction design built into the app to promote behaviour change. Near-future innovations such as automatic barcode scanning of food items or photo recognition will close the gap between perceived prototype usability and usefulness.

Relevance: 90.00%

Abstract:

An investigation of the construction data management needs of the Florida Department of Transportation (FDOT) with regard to XML standards, including the development of a data dictionary and data mapping. The review of existing XML schemas indicated the need to develop specific XML schemas, which were then created for all FDOT construction data management processes. Additionally, data entry, approval, and data retrieval applications were developed for payroll compliance reporting and pile quantity payment development.
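
As a hedged illustration of schema-governed data entry, the snippet below defines a toy XML Schema for a pile-quantity payment record and validates one entry against it with lxml; the element and field names are invented and do not reflect FDOT’s schemas:

```python
from lxml import etree

# Illustrative sketch only: a toy schema for a pile-quantity payment record.
# Element and field names are invented and do not reflect FDOT's schemas.
xsd = etree.XMLSchema(etree.XML("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="PilePayment">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="ProjectId" type="xs:string"/>
        <xs:element name="PileId" type="xs:string"/>
        <xs:element name="LengthFt" type="xs:decimal"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""))

entry = etree.XML("""
<PilePayment>
  <ProjectId>FDOT-0001</ProjectId>
  <PileId>P-17</PileId>
  <LengthFt>42.5</LengthFt>
</PilePayment>
""")

print(xsd.validate(entry))   # True: the entry conforms to the toy schema
```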

Relevance: 90.00%

Abstract:

Several researchers have looked into various issues related to automatic parallelization of sequential programs for multicomputers, but there is a need for a coherent framework which encompasses all these issues. In this paper we present such a framework which takes best advantage of the multicomputer architecture. We resort to a tiling transformation for iteration space partitioning and propose a scheme of automatic data partitioning and dynamic data distribution. We have tried a simple implementation of our scheme on a transputer-based multicomputer [1] and the results are encouraging.
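
As a rough illustration of the iteration-space partitioning involved, the sketch below tiles a simple two-level loop nest; the tile size and loop body are arbitrary assumptions, not the paper’s scheme:

```python
# Illustrative sketch of tiling a 2D iteration space: the outer tile loops
# enumerate blocks that could be assigned to different processors, while the
# inner loops traverse one tile. Tile size and the loop body are arbitrary.
import numpy as np

N, TILE = 8, 4
a = np.zeros((N, N))

for it in range(0, N, TILE):          # tile loops: candidate units for
    for jt in range(0, N, TILE):      # distribution across processors
        for i in range(it, min(it + TILE, N)):
            for j in range(jt, min(jt + TILE, N)):
                a[i, j] = i + j       # placeholder computation

assert np.array_equal(a, np.add.outer(np.arange(N), np.arange(N)))
```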

Relevance: 90.00%

Abstract:

Programming for parallel architectures that do not have a shared address space is extremely difficult due to the need for explicit communication between memories of different compute devices. A heterogeneous system with CPUs and multiple GPUs, or a distributed-memory cluster are examples of such systems. Past works that try to automate data movement for distributed-memory architectures can lead to excessive redundant communication. In this paper, we propose an automatic data movement scheme that minimizes the volume of communication between compute devices in heterogeneous and distributed-memory systems. We show that by partitioning data dependences in a particular non-trivial way, one can generate data movement code that results in the minimum volume for a vast majority of cases. The techniques are applicable to any sequence of affine loop nests and work on top of any choice of loop transformations, parallelization, and computation placement. The data movement code generated minimizes the volume of communication for a particular configuration of these. We combine powerful static analyses relying on the polyhedral compiler framework with lightweight runtime routines they generate to build a source-to-source transformation tool that automatically generates communication code. We demonstrate that the tool is scalable and leads to substantial gains in efficiency. On a heterogeneous system, the communication volume is reduced by a factor of 11X to 83X over the state of the art, translating into a mean execution time speedup of 1.53X. On a distributed-memory cluster, our scheme reduces the communication volume by a factor of 1.4X to 63.5X over the state of the art, resulting in a mean speedup of 1.55X. In addition, our scheme yields a mean speedup of 2.19X over hand-optimized UPC codes.
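
To make the contrast between redundant and minimal communication concrete, the sketch below compares whole-block exchange with sending only the boundary values a neighbour actually reads, for a 1-D three-point stencil under block distribution; the distribution and stencil are illustrative assumptions, not the paper’s dependence analysis:

```python
# Illustrative sketch: for u_new[i] = f(u[i-1], u[i], u[i+1]) with the array
# block-distributed over devices, only the boundary element of each block is
# read by a neighbour. Sending just that element is the minimal volume; a
# naive scheme that ships whole blocks communicates far more.
N, DEVICES = 1024, 4
block = N // DEVICES

naive_volume = 2 * (DEVICES - 1) * block   # whole blocks swapped at each interior boundary
minimal_volume = 2 * (DEVICES - 1) * 1     # one halo element per direction per boundary

print(f"naive: {naive_volume} elements, minimal: {minimal_volume} elements, "
      f"reduction: {naive_volume // minimal_volume}x")
```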

Relevance: 90.00%

Abstract:

It is widely acknowledged that a company's ability to acquire market share, and hence its profitability, is very closely linked to the speed with which it can produce a new design. Indeed, a study by the U.K. Department of Trade and Industry has shown that the critical factor which determines profitability is the timely delivery of the new product. Late entry to market or high production costs dramatically reduce profits, whilst an overrun on development cost has little significant effect. This paper describes a method which aims to assist the designer in producing higher-performance turbomachinery designs more quickly by accelerating the design process. The adopted approach combines an enhanced version of the 'Signposting' design process management methodology with industry-standard analysis codes and Computational Fluid Dynamics (CFD). It has been specifically configured to enable process-wide iteration, near instantaneous generation of guidance data for the designer and fully automatic data handling. A successful laboratory experiment based on the design of a large High Pressure Steam Turbine is described, and this leads on to current work which extends the proven concept to a full industrial application for the design of Aeroengine Compressors with Rolls-Royce plc.

Relevance: 90.00%

Abstract:

Radio Frequency Identification (RFID) technology allows automatic data capture from tagged objects moving in a supply chain. This data can be very useful for answering traceability queries; however, it is distributed across many different repositories owned by different companies. © 2012 IEEE.
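
A minimal sketch of answering a traceability query across several repositories by gathering the events each one holds for a tag and merging them into a time-ordered trace; the repository interface and event fields are assumptions for illustration, not an EPCIS or vendor API:

```python
# Illustrative sketch: each company keeps only the read events it observed;
# a traceability query for one tag gathers events from every repository and
# orders them by timestamp. Field names and the query interface are assumed.
from datetime import datetime

repositories = {
    "manufacturer": [{"tag": "TAG-42", "location": "factory", "time": "2024-01-02T08:00"}],
    "distributor":  [{"tag": "TAG-42", "location": "warehouse", "time": "2024-01-05T14:30"}],
    "retailer":     [{"tag": "TAG-42", "location": "store", "time": "2024-01-09T10:15"}],
}

def trace(tag: str) -> list:
    events = [dict(e, owner=owner)
              for owner, repo in repositories.items()
              for e in repo if e["tag"] == tag]
    return sorted(events, key=lambda e: datetime.fromisoformat(e["time"]))

for event in trace("TAG-42"):
    print(event["time"], event["owner"], event["location"])
```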

Relevance: 90.00%

Abstract:

Segmentation of anatomical and pathological structures in ophthalmic images is crucial for the diagnosis and study of ocular diseases. However, manual segmentation is often a time-consuming and subjective process. This paper presents an automatic approach for segmenting retinal layers in Spectral Domain Optical Coherence Tomography images using graph theory and dynamic programming. Results show that this method segments eight retinal layer boundaries in normal adult eyes, matching an expert grader more closely than a second expert grader does.
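
A toy version of the graph and dynamic-programming idea: treat each pixel as a node, let low cost mark strong vertical gradients, and take the minimum-cost left-to-right path as a layer boundary; the cost function and connectivity are simplified assumptions, not the paper’s formulation:

```python
import numpy as np

# Illustrative sketch: find one layer boundary as the minimum-cost path from
# the left edge to the right edge of the image, where low cost marks strong
# dark-to-bright vertical gradients. Moves are limited to one row up or down
# per column step; the cost definition is a simplification.
def boundary_path(image: np.ndarray) -> np.ndarray:
    grad = np.gradient(image.astype(float), axis=0)
    cost = 1.0 - (grad - grad.min()) / (np.ptp(grad) + 1e-9)  # low cost = strong gradient
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros_like(cost, dtype=int)
    for j in range(1, cols):
        for i in range(rows):
            lo, hi = max(0, i - 1), min(rows, i + 2)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for j in range(cols - 1, 0, -1):
        path[j - 1] = back[path[j], j]
    return path  # row index of the boundary in each column

image = np.vstack([np.zeros((10, 50)), np.ones((10, 50))])  # synthetic two-layer image
print(boundary_path(image))  # hovers around the row where intensity jumps
```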

Relevance: 90.00%

Abstract:

BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. CONCLUSIONS: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
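
For reference, the quoted figure is simply errors found divided by fields audited, scaled to 10,000 fields; the audit counts in the sketch below are invented purely to illustrate the arithmetic:

```python
# Illustrative arithmetic only: the audit counts are invented, not the
# study's data; the rate is errors per 10,000 fields inspected.
def error_rate_per_10k(errors_found: int, fields_audited: int) -> float:
    return errors_found / fields_audited * 10_000

print(round(error_rate_per_10k(93, 65_000), 1))  # ~14.3 errors per 10,000 fields
```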

Relevance: 90.00%

Abstract:

The detection of dense harmful algal blooms (HABs) by satellite remote sensing is usually based on analysis of chlorophyll-a as a proxy. However, this approach provides no information about the potential harm of a bloom, nor can it identify the dominant species. The developed HAB risk classification method employs a fully automatic data-driven approach to identify key characteristics of water-leaving radiances and derived quantities, and to classify pixels into “harmful”, “non-harmful” and “no bloom” categories using Linear Discriminant Analysis (LDA). Discrimination accuracy is increased through the use of spectral ratios of water-leaving radiances, absorption and backscattering. To reduce the false alarm rate, data that cannot be reliably classified are automatically labelled as “unknown”. This method can be trained on different HAB species or extended to new sensors and then applied to generate independent HAB risk maps; these can be fused with other sensors to fill gaps or improve spatial or temporal resolution. The HAB discrimination technique has obtained accurate results on MODIS and MERIS data, correctly identifying 89% of Phaeocystis globosa HABs in the southern North Sea and 88% of Karenia mikimotoi blooms in the Western English Channel. A linear transformation of the ocean colour discriminants is used to estimate harmful cell counts, demonstrating greater accuracy than estimates based on chlorophyll-a; this will facilitate its integration into a HAB early warning system operating in the southern North Sea.
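
A rough sketch of the classification step: discriminate “harmful”, “non-harmful” and “no bloom” classes from spectral-ratio features with LDA and label low-confidence pixels “unknown”; the feature values, training data, and confidence threshold are invented placeholders, not the trained model:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Illustrative sketch: train LDA on spectral-ratio features (columns stand in
# for invented band ratios), then label pixels whose maximum posterior is too
# low as "unknown" to limit false alarms. All numbers are placeholders.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0.0, 1.5, 3.0)])
y_train = np.repeat(["no bloom", "non-harmful", "harmful"], 50)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)

def classify(pixels: np.ndarray, min_confidence: float = 0.8) -> np.ndarray:
    proba = lda.predict_proba(pixels)
    labels = lda.classes_[np.argmax(proba, axis=1)]
    labels[np.max(proba, axis=1) < min_confidence] = "unknown"
    return labels

print(classify(np.array([[0.1, 0.0], [1.4, 1.6], [2.2, 2.3], [3.1, 2.9]])))
```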

Relevance: 90.00%

Abstract:

Project work submitted to obtain the degree of Master in Civil Engineering.

Relevance: 90.00%

Abstract:

Video of how to set up an SPSS document and enter data.

Relevance: 90.00%

Abstract:

Video on how to enter data into Excel. This includes setting up column headings, changing sheet names, using lists for categorical items, applying validation to columns of data, and checking that validation is operating correctly.
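
For readers who prefer scripting the same setup, a small sketch using openpyxl to add column headings, rename a sheet, and attach a list-based validation rule; the sheet, headings, and ranges are arbitrary examples, not taken from the video:

```python
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

# Illustrative sketch of the same steps done programmatically: headings,
# sheet name, and list validation on a column. Names and ranges are arbitrary.
wb = Workbook()
ws = wb.active
ws.title = "Survey"               # rename the sheet
ws["A1"] = "Participant"          # column headings
ws["B1"] = "Group"

dv = DataValidation(type="list", formula1='"Control,Treatment"', allow_blank=True)
ws.add_data_validation(dv)
dv.add("B2:B101")                 # restrict column B entries to the list

wb.save("survey.xlsx")
```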