909 results for Data accuracy
Abstract:
This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad hoc network topologies. The correlated data are encoded independently at the sensors, and network coding is employed at the intermediate nodes in order to improve the data delivery performance. In such settings, we focus on the problem of reconstructing the sources at the decoder when perfect decoding is not possible due to losses or bandwidth variations. We show that the source data similarity can be used at the decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters, and in particular the size of the finite coding fields, on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between the information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the performance of the approximate decoding improves when the accuracy of the source model increases, even with simple approximate decoding techniques. We provide illustrative examples showing how the proposed algorithm can be deployed in sensor networks and distributed imaging applications.
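The abstract does not give an implementation, but the idea of approximate decoding can be sketched in a few lines. The toy example below (field size, source values, and coefficients are all illustrative assumptions, not the paper's algorithm) shows a decoder that receives a single network-coded packet over a prime field and closes the under-determined system with the similarity assumption x2 ≈ x1; the residual error then depends on the chosen field size, which is exactly the trade-off analyzed above.

```python
import random

p = 257                       # illustrative coding field size; the paper optimizes this choice
x1, x2 = 118, 121             # correlated source samples (similar but not identical)

# One random linear combination reaching the decoder (losses removed the rest).
while True:
    a1, a2 = random.randrange(1, p), random.randrange(1, p)
    if (a1 + a2) % p != 0:    # keep the substituted system invertible
        break
y = (a1 * x1 + a2 * x2) % p

# Approximate decoding: substitute x2 = x1 and solve the single remaining equation.
x1_hat = (y * pow((a1 + a2) % p, -1, p)) % p
print("true x1:", x1, "estimate:", x1_hat)
# The estimate is exact when x1 == x2; otherwise the finite-field arithmetic
# introduces an error whose size depends on p, the trade-off studied in the paper.
```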
Abstract:
People often use tools to search for information. In order to improve the quality of an information search, it is important to understand how internal information, which is stored in the user's mind, and external information, represented by the interface of tools, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task on relational data. Based on the model and taxonomy, I have also developed interface prototypes for the search tasks of relational data. These prototypes were used for experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and performed one-dimensional nominal, ordinal, interval, and ratio search tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory has been adopted as a theoretical framework for analyzing and predicting the search performance for relational data. It has been shown that the representation dimensions and data scales, as well as the search task types, are the main factors determining search efficiency and effectiveness. In particular, the more external representations are used, the better the search task performance, and the results suggest that the ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
Abstract:
Intensity modulated radiation therapy (IMRT) is a technique that delivers a highly conformal dose distribution to a target volume while attempting to maximally spare the surrounding normal tissues. IMRT is a common treatment modality used for treating head and neck (H&N) cancers, and the presence of many critical structures in this region requires accurate treatment delivery. The Radiological Physics Center (RPC) acts as both a remote and on-site quality assurance agency that credentials institutions participating in clinical trials. To date, about 30% of all IMRT participants have failed the RPC's remote audit using the IMRT H&N phantom. The purpose of this project is to evaluate possible causes of the H&N IMRT delivery errors observed by the RPC, specifically IMRT treatment plan complexity and the use of improper dosimetry data from machines that were thought to be matched but in reality were not. Eight H&N IMRT plans with a range of complexity defined by total MU (1460-3466), number of segments (54-225), and modulation complexity scores (MCS) (0.181-0.609) were created in Pinnacle v.8m. These plans were delivered to the RPC's H&N phantom on a single Varian Clinac. One of the IMRT plans (1851 MU, 88 segments, and MCS=0.469) was equivalent to the median H&N plan from 130 previous RPC H&N phantom irradiations. This average IMRT plan was also delivered on four matched Varian Clinac machines, and the dose distribution was calculated using a different 6 MV beam model. Radiochromic film and TLD within the phantom were used to analyze the dose profiles and absolute doses, respectively. The measured and calculated dose distributions were compared to evaluate the dosimetric accuracy. All deliveries met the RPC acceptance criteria of ±7% absolute dose difference and 4 mm distance-to-agreement (DTA). Additionally, gamma index analysis was performed for all deliveries using ±7%/4 mm and ±5%/3 mm criteria. Increasing the treatment plan complexity by varying the MU, number of segments, or MCS resulted in no clear trend toward increased dosimetric error as determined by the absolute dose difference, DTA, or gamma index. Varying the delivery machines as well as the beam model (use of a Clinac 6EX 6 MV beam model vs. a Clinac 21EX 6 MV model) also did not show any clear trend toward increased dosimetric error using the same criteria indicated above.
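The ±7%/4 mm and ±5%/3 mm figures refer to gamma-index analysis. As a rough, self-contained illustration only (the 1-D profiles, normalization dose, and tolerances below are made-up values, not RPC data), a global gamma evaluation can be sketched as follows:

```python
import math

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd_tol, dta_tol, d_norm):
    """1-D global gamma index: for each evaluated point, take the minimum
    combined dose-difference / distance-to-agreement penalty over the
    reference profile (a dense reference grid is assumed)."""
    gammas = []
    for xe, de in zip(x_eval, d_eval):
        g2 = min(((xe - xr) / dta_tol) ** 2 + ((de - dr) / (dd_tol * d_norm)) ** 2
                 for xr, dr in zip(x_ref, d_ref))
        gammas.append(math.sqrt(g2))
    return gammas

# Toy calculated vs. measured profiles (positions in mm, dose in Gy), 7%/4 mm criterion
x = [0.0, 2.0, 4.0, 6.0, 8.0]
calculated = [1.80, 1.95, 2.00, 1.90, 1.70]
measured = [1.78, 1.90, 2.05, 1.95, 1.72]
g = gamma_1d(x, calculated, x, measured, dd_tol=0.07, dta_tol=4.0, d_norm=2.00)
print([round(gi, 2) for gi in g], "pass rate:", sum(gi <= 1.0 for gi in g) / len(g))
```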
Abstract:
The prognosis for lung cancer patients remains poor. Five-year survival rates have been reported to be 15%. Studies have shown that dose escalation to the tumor can lead to better local control and subsequently better overall survival. However, the dose to the lung tumor is limited by normal tissue toxicity. The most prevalent thoracic toxicity is radiation pneumonitis. In order to determine a safe dose that can be delivered to the healthy lung, researchers have turned to mathematical models predicting the rate of radiation pneumonitis. However, these models rely on simple metrics based on the dose-volume histogram and are not yet accurate enough to be used for dose escalation trials. The purpose of this work was to improve the fit of predictive risk models for radiation pneumonitis and to show the dosimetric benefit of using the models to guide patient treatment planning. The study was divided into three specific aims. The first two specific aims focused on improving the fit of the predictive model. In Specific Aim 1 we incorporated information about the spatial location of the lung dose distribution into a predictive model. In Specific Aim 2 we incorporated ventilation-based functional information into a predictive pneumonitis model. In the third specific aim, a proof-of-principle virtual simulation was performed in which a model-determined limit was used to scale the prescription dose. The data showed that for our patient cohort, the fit of the model to the data was not improved by incorporating spatial information. Although we were not able to achieve a significant improvement in model fit using pre-treatment ventilation, we show some promising results indicating that ventilation imaging can provide useful information about lung function in lung cancer patients. The virtual simulation trial demonstrated that using a personalized lung dose limit derived from a predictive model results in a different prescription than what was achieved with the clinically used plan, thus demonstrating the utility of a normal tissue toxicity model in personalizing the prescription dose.
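The dose-volume histogram metrics referred to above are simple summaries of the lung dose distribution. A minimal sketch of the general approach is shown below; the logistic coefficients are illustrative placeholders, not the parameters fitted in this work:

```python
import math

def mean_lung_dose(voxel_doses_gy):
    """Mean lung dose (MLD), a standard DVH summary metric."""
    return sum(voxel_doses_gy) / len(voxel_doses_gy)

def v20(voxel_doses_gy):
    """V20: fraction of lung volume receiving at least 20 Gy."""
    return sum(d >= 20.0 for d in voxel_doses_gy) / len(voxel_doses_gy)

def pneumonitis_risk(mld_gy, b0=-3.87, b1=0.126):
    """Logistic risk model of radiation pneumonitis as a function of MLD;
    b0 and b1 are placeholder values for illustration only."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * mld_gy)))

doses = [2.0, 5.0, 12.0, 18.0, 22.0, 30.0, 8.0, 15.0]   # toy lung voxel doses in Gy
mld = mean_lung_dose(doses)
print(f"MLD = {mld:.1f} Gy, V20 = {v20(doses):.2f}, predicted risk = {pneumonitis_risk(mld):.2f}")
```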
Abstract:
AIMS: We conducted a meta-analysis to evaluate the accuracy of quantitative stress myocardial contrast echocardiography (MCE) in coronary artery disease (CAD). METHODS AND RESULTS: A database search was performed through January 2008. We included studies evaluating the accuracy of quantitative stress MCE for the detection of CAD compared with coronary angiography or single-photon emission computed tomography (SPECT) and measuring the reserve parameters A, beta, and Abeta. Data from the studies were verified and supplemented by the authors of each study. Using random-effects meta-analysis, we estimated weighted mean differences (WMD), likelihood ratios (LRs), diagnostic odds ratios (DORs), and summary area under the curve (AUC), all with 95% confidence intervals (CI). Of 1443 studies, 13 including 627 patients (age range, 38-75 years) and comparing MCE with angiography (n = 10), SPECT (n = 1), or both (n = 2) were eligible. WMDs (95% CI) were significantly lower in the CAD group than in the no-CAD group: 0.12 (0.06-0.18) (P < 0.001), 1.38 (1.28-1.52) (P < 0.001), and 1.47 (1.18-1.76) (P < 0.001) for A, beta, and Abeta reserves, respectively. Pooled LRs for a positive test were 1.33 (1.13-1.57), 3.76 (2.43-5.80), and 3.64 (2.87-4.78), and LRs for a negative test were 0.68 (0.55-0.83), 0.30 (0.24-0.38), and 0.27 (0.22-0.34) for A, beta, and Abeta reserves, respectively. Pooled DORs were 2.09 (1.42-3.07), 15.11 (7.90-28.91), and 14.73 (9.61-22.57), and AUCs were 0.637 (0.594-0.677), 0.851 (0.828-0.872), and 0.859 (0.842-0.750) for A, beta, and Abeta reserves, respectively. CONCLUSION: Evidence supports the use of quantitative MCE as a non-invasive test for the detection of CAD. Standardizing MCE quantification analysis and adherence to reporting standards for diagnostic tests could enhance the quality of evidence in this field.
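The pooled weighted mean differences reported above come from random-effects meta-analysis. A compact sketch of the standard DerSimonian-Laird pooling step is given below; the study-level effects and variances are hypothetical numbers, not data from the included studies:

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling with a 95% confidence interval."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study mean differences in beta reserve and their variances
print(dersimonian_laird([1.30, 1.45, 1.38, 1.52], [0.010, 0.020, 0.015, 0.025]))
```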
Abstract:
BACKGROUND AND AIMS: Internet-based surveys provide a potentially important tool for Inflammatory Bowel Disease (IBD) research. The advantages include low cost, large numbers of participants, rapid study completion, and a less extensive infrastructure than traditional methods. The aim was to determine the accuracy of patient self-reporting in internet-based IBD research and to identify predictors of greater reliability. METHODS: 197 patients from a tertiary care center answered an online survey concerning their personal medical history together with an evaluation of disease-specific knowledge. Self-reported medical details were compared with data abstracted from the medical records. Agreement was assessed by kappa (κ) statistics. RESULTS: Participants responded correctly with excellent agreement (κ=0.96-0.97) on the subtype of IBD and history of surgery. The agreement was also excellent for colectomy (κ=0.88) and small bowel resection (κ=0.91), moderate for abscesses and fistulas (κ=0.60 and 0.63), but poor for partial colectomy (κ=0.39). Time since last colonoscopy was self-reported with better agreement (κ=0.84) than disease activity. For disease location/extent, moderate agreement (κ=0.69 and 0.64) was observed for patients with Crohn's disease and ulcerative colitis, respectively. Subjects who scored higher than average in the IBD knowledge assessment were significantly more accurate about disease location than the remaining subjects (74% vs. 59%, p=0.02). CONCLUSION: This study demonstrates that IBD patients accurately report their medical history regarding type of disease and surgical procedures. More detailed medical information is reported less reliably. A disease knowledge assessment may help to identify the most accurate individuals and could therefore serve as a validity criterion. Internet-based surveys are feasible, with high reliability for basic disease features only. However, the participants in this study were recruited at a tertiary center, which may introduce bias and limit generalization to an unselected patient group.
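Agreement here is quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch with hypothetical yes/no answers (self-report vs. chart review) is:

```python
def cohens_kappa(pairs):
    """Cohen's kappa for two binary raters; `pairs` holds (self_report, chart) tuples."""
    n = len(pairs)
    p_o = sum(a == b for a, b in pairs) / n                        # observed agreement
    p_e = sum(                                                     # chance agreement
        (sum(a == c for a, _ in pairs) / n) * (sum(b == c for _, b in pairs) / n)
        for c in (0, 1)
    )
    return (p_o - p_e) / (1.0 - p_e)

# Toy data: 1 = "had surgery", 0 = "no surgery"
pairs = [(1, 1), (0, 0), (1, 1), (0, 0), (1, 0), (0, 0), (1, 1), (0, 0)]
print(round(cohens_kappa(pairs), 2))   # 0.75 for this toy sample
```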
Abstract:
In spring 2012, CERN provided two weeks of a short-bunch proton beam dedicated to the neutrino velocity measurement over a distance of 730 km. The OPERA neutrino experiment at the underground Gran Sasso Laboratory used an upgraded setup compared to the 2011 measurements, improving the timing accuracy of the measurement. An independent timing system based on the Resistive Plate Chambers was exploited, providing a time accuracy of ∼1 ns. Neutrino and anti-neutrino contributions were separated using the information provided by the OPERA magnetic spectrometers. The new analysis profited from the precision geodesy measurements of the neutrino baseline and of the CNGS/LNGS clock synchronization. The neutrino arrival time with respect to the one computed assuming the speed of light in vacuum is found to be δt_ν ≡ TOF_c − TOF_ν = (0.6 ± 0.4 (stat.) ± 3.0 (syst.)) ns and δt_ν̄ ≡ TOF_c − TOF_ν̄ = (1.7 ± 1.4 (stat.) ± 3.1 (syst.)) ns for ν_μ and ν̄_μ, respectively. This corresponds to a limit on the muon neutrino velocity with respect to the speed of light of −1.8 × 10⁻⁶ < (v_ν − c)/c < 2.3 × 10⁻⁶ at 90% C.L. This new measurement confirms with higher accuracy the revised OPERA result.
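The quoted limits follow from converting the measured time-of-flight difference into a fractional velocity deviation over the CERN-LNGS baseline. A back-of-the-envelope version of that conversion (not the collaboration's analysis, and using a rounded 730 km baseline) is:

```python
C = 299_792_458.0          # speed of light in vacuum, m/s
L = 730_000.0              # approximate baseline in metres; geodesy gives the exact value

tof_c = L / C              # light travel time, about 2.44 ms
delta_t = 0.6e-9           # central value of delta_t_nu in seconds

# delta_t = TOF_c - TOF_nu, hence (v_nu - c)/c = delta_t / (TOF_c - delta_t)
fractional_deviation = delta_t / (tof_c - delta_t)
print(f"TOF_c = {tof_c * 1e3:.3f} ms, (v - c)/c = {fractional_deviation:.1e}")
# ~2.5e-7 for the central value; the quoted 90% C.L. interval is dominated by
# the statistical and systematic uncertainties, not by this central value.
```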
Abstract:
BACKGROUND The nine equivalents of nursing manpower use score (NEMS) is used to evaluate critical care nursing workload and occasionally to define hospital reimbursements. Little is known about caregivers' accuracy in scoring, the factors affecting this accuracy, and how the validity of scoring is assured. METHODS Accuracy in NEMS scoring by Swiss critical care nurses was assessed using case vignettes. An online survey was performed to assess training and quality control of NEMS scoring and to collect structural and organizational data of the participating intensive care units (ICUs). Aggregated structural and procedural data from the Swiss ICU Minimal Data Set were used for matching. RESULTS Nursing staff from 64 (82%) of the 78 certified adult ICUs participated in this survey. Training and quality control of scoring show large variability between ICUs. A total of 1378 nurses each scored one out of 20 case vignettes: accuracy ranged from 63.7% (intravenous medications) to 99.1% (basic monitoring). Erroneous scoring (8.7% of all items) was more frequent than omitted scoring (3.2%). The mean NEMS per case was 28.0 ± 11.8 points (reference score: 25.7 ± 14.2 points). The mean bias was 2.8 points (95% confidence interval: 1.0-4.7); scores below 37.1 points were generally overestimated. Data from units with a larger nursing management staff showed a higher bias. CONCLUSION Overall, nurses assess the NEMS score within a clinically acceptable range. Lower scores are generally overestimated. Inaccurate assessment was associated with a larger nursing management staff. Swiss head nurses consider themselves motivated to assure appropriate scoring and its validation.
Abstract:
Satellite remote sensing provides a powerful instrument for mapping and monitoring traces of historical settlements and infrastructure, including in remote areas and crisis regions. It helps archaeologists to embed their findings from field surveys into the broader context of the landscape. With the start of the TanDEM-X mission, spatially explicit 3D information is available to researchers worldwide at an unprecedented resolution. We examined different experimental TanDEM-X digital elevation models (DEMs) that were processed from two different imaging modes (Stripmap/High Resolution Spotlight) using the operational alternating bistatic acquisition mode. The quality and accuracy of the experimental DEM products were compared to other available DEM products and to a high-precision archaeological field survey. The results indicate the potential of TanDEM-X Stripmap (SM) data for mapping surface elements at the regional scale. For the alluvial plain of Cilicia, a suspected palaeochannel could be reconstructed. At the local scale, DEM products from the TanDEM-X High Resolution Spotlight (HS) mode were processed at 2 m spatial resolution using a merge of two monostatic/bistatic interferograms. The absolute and relative vertical accuracies of the outcome meet the specifications of the high resolution elevation data (HRE) standards of the National System for Geospatial Intelligence (NSG) at the HRE20 level.
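The HRE specifications are expressed as vertical-accuracy statistics of the DEM against independent reference heights. A generic sketch of such a check (toy heights; LE90 computed under a normal-error assumption, which may differ in detail from the NSG definition) is:

```python
import math

def vertical_accuracy(dem_heights, survey_heights):
    """Bias, RMSE and LE90 of DEM heights against surveyed ground points;
    LE90 is taken as 1.6449 x the error standard deviation (normal errors assumed)."""
    diffs = [d - s for d, s in zip(dem_heights, survey_heights)]
    n = len(diffs)
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(e ** 2 for e in diffs) / n)
    std = math.sqrt(sum((e - bias) ** 2 for e in diffs) / n)
    return bias, rmse, 1.6449 * std

dem = [101.2, 98.7, 104.1, 99.5, 102.8]       # toy DEM heights in metres
survey = [101.0, 98.9, 103.6, 99.2, 103.1]    # toy survey heights in metres
print(vertical_accuracy(dem, survey))
```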
Abstract:
Lake water temperature (LWT) is an important driver of lake ecosystems and has been identified as an indicator of climate change. Consequently, the Global Climate Observing System (GCOS) lists LWT as an essential climate variable. Although long in situ time series of LWT do exist for some European lakes, many lakes are either not observed at all or only on an irregular basis, making these observations insufficient for climate monitoring. Satellite data can provide the information needed. However, only a few satellite sensors offer the possibility to analyse time series covering 25 years or more. The Advanced Very High Resolution Radiometer (AVHRR) is among these and has been flown as a heritage instrument for almost 35 years. It will continue to be flown for at least ten more years, offering a unique opportunity for satellite-based climate studies. Herein we present a satellite-based lake surface water temperature (LSWT) data set for European water bodies in or near the Alps, based on the extensive AVHRR 1 km data record (1989–2013) of the Remote Sensing Research Group at the University of Bern. It has been compiled from AVHRR/2 (NOAA-07, -09, -11, -14) and AVHRR/3 (NOAA-16, -17, -18, -19 and MetOp-A) data. The high accuracy needed for climate-related studies requires careful pre-processing and consideration of the atmospheric state. The LSWT retrieval is based on a simulation-based scheme making use of the Radiative Transfer for TOVS (RTTOV) Version 10 together with ERA-Interim reanalysis data from the European Centre for Medium-range Weather Forecasts. The resulting LSWTs were extensively compared with in situ measurements from lakes of various sizes between 14 and 580 km², and the resulting biases and RMSEs were found to be within the ranges of −0.5 to 0.6 K and 1.0 to 1.6 K, respectively. The upper limits of the reported errors can rather be attributed to uncertainties in the comparison between in situ and satellite observations than to inaccuracies of the satellite retrieval. An inter-comparison with the standard Moderate-resolution Imaging Spectroradiometer (MODIS) Land Surface Temperature product exhibits RMSEs and biases in the ranges of 0.6 to 0.9 K and −0.5 to 0.2 K, respectively. The cross-platform consistency of the retrieval was found to be within ~0.3 K. For one lake, the satellite-derived trend was compared with the trend of in situ measurements and both were found to be similar. Thus, orbital drift is not causing artificial temperature trends in the data set. A comparison with LSWT derived through global sea surface temperature (SST) algorithms shows lower RMSEs and biases for the simulation-based approach. An ongoing project will apply the developed method to retrieve LSWT for all of Europe to derive the climate signal of the last 30 years. The data are available at doi:10.1594/PANGAEA.831007.
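The reported biases and RMSEs are simple match-up statistics between satellite and in situ temperatures; a minimal sketch of that validation step (toy values in kelvin) is:

```python
import math

def validate_lswt(satellite_k, in_situ_k):
    """Bias and RMSE of satellite lake surface water temperature match-ups."""
    diffs = [s - i for s, i in zip(satellite_k, in_situ_k)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
    return bias, rmse

satellite = [284.2, 289.6, 293.1, 281.0, 287.4]   # toy satellite LSWT in K
in_situ = [284.8, 288.9, 292.2, 281.9, 287.0]     # toy in situ LWT in K
bias, rmse = validate_lswt(satellite, in_situ)
print(f"bias = {bias:+.2f} K, RMSE = {rmse:.2f} K")
```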
Abstract:
OBJECTIVE To provide guidance on standards for reporting studies of diagnostic test accuracy for dementia disorders. METHODS An international consensus process on reporting standards in dementia and cognitive impairment (STARDdem) was established, focusing on studies presenting data from which sensitivity and specificity were reported or could be derived. A working group led the initiative through 4 rounds of consensus work, using a modified Delphi process and culminating in a face-to-face consensus meeting in October 2012. The aim of this process was to agree on how best to supplement the generic standards of the STARD statement to enhance their utility and encourage their use in dementia research. RESULTS More than 200 comments were received during the wider consultation rounds. The areas at most risk of inadequate reporting were identified and a set of dementia-specific recommendations to supplement the STARD guidance were developed, including better reporting of patient selection, the reference standard used, avoidance of circularity, and reporting of test-retest reliability. CONCLUSION STARDdem is an implementation of the STARD statement in which the original checklist is elaborated and supplemented with guidance pertinent to studies of cognitive disorders. Its adoption is expected to increase transparency, enable more effective evaluation of diagnostic tests in Alzheimer disease and dementia, contribute to greater adherence to methodologic standards, and advance the development of Alzheimer biomarkers.
Abstract:
BACKGROUND The accuracy of CT pulmonary angiography (CTPA) in detecting or excluding pulmonary embolism has not yet been assessed in patients with high body weight (BW). METHODS This retrospective study involved CTPAs of 114 patients weighing 75-99 kg and of 123 consecutive patients weighing 100-150 kg. Three independent blinded radiologists analyzed all examinations in randomized order. Readers' data on pulmonary emboli were compared with a composite reference standard comprising clinical probability, the reference CTPA result, additional imaging when performed, and 90-day follow-up. Results in the two BW groups and in two body mass index (BMI) groups (BMI < 30 kg/m² and BMI ≥ 30 kg/m², i.e., non-obese and obese patients) were compared. RESULTS The prevalence of pulmonary embolism was not significantly different between the BW groups (P=1.0). The reference CTPA result was positive in 23 of 114 patients in the 75-99 kg group and in 25 of 123 patients in the ≥ 100 kg group (odds ratio, 0.991; 95% confidence interval, 0.501 to 1.957; P=1.0). No pulmonary embolism-related death or venous thromboembolism occurred during follow-up. The mean accuracy of the three readers was 91.5% in the 75-99 kg group and 89.9% in the ≥ 100 kg group (odds ratio, 1.207; 95% confidence interval, 0.451 to 3.255; P=0.495), and 89.9% in non-obese patients and 91.2% in obese patients (odds ratio, 0.853; 95% confidence interval, 0.317 to 2.319; P=0.816). CONCLUSION The diagnostic accuracy of CTPA in patients weighing 75-99 kg or 100-150 kg proved not to be significantly different.
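The reader accuracies and odds ratios above are standard 2x2-table statistics. A generic sketch (toy counts, Woolf logit confidence interval; not the study's exact analysis) looks like this:

```python
import math

def accuracy_and_odds_ratio(tp, fp, fn, tn):
    """Accuracy and odds ratio with a Woolf-method 95% CI from a 2x2 table."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    odds_ratio = (tp * tn) / (fp * fn)
    se_log_or = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    ci = (math.exp(math.log(odds_ratio) - 1.96 * se_log_or),
          math.exp(math.log(odds_ratio) + 1.96 * se_log_or))
    return accuracy, odds_ratio, ci

# Toy counts for one reader against the composite reference standard
print(accuracy_and_odds_ratio(tp=21, fp=5, fn=2, tn=86))
```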
Abstract:
This paper proposes an automated 3D lumbar intervertebral disc (IVD) segmentation strategy from MRI data. Starting from two user-supplied landmarks, the geometrical parameters of all lumbar vertebral bodies and intervertebral discs are automatically extracted from a mid-sagittal slice using a graphical-model-based approach. After that, a three-dimensional (3D) variable-radius soft tube model of the lumbar spine column is built to guide the 3D disc segmentation. The disc segmentation is achieved as a multi-kernel diffeomorphic registration between a 3D template of the disc and the observed MRI data. Experiments on 15 patient data sets demonstrated the robustness and accuracy of the proposed algorithm.
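The abstract does not state which overlap metric was used to quantify accuracy; a common choice for evaluating such segmentations is the Dice similarity coefficient, sketched here on toy binary masks:

```python
def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentations
    (flattened voxel label lists with values 0/1)."""
    intersection = sum(a and b for a, b in zip(seg_a, seg_b))
    return 2.0 * intersection / (sum(seg_a) + sum(seg_b))

automatic = [1, 1, 1, 0, 0, 1, 0, 1]   # toy automatic disc mask
manual = [1, 1, 0, 0, 0, 1, 1, 1]      # toy manual reference mask
print(round(dice_coefficient(automatic, manual), 2))   # 0.8 for this toy example
```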
Abstract:
Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed which combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction - cARX, and a recurrent neural network - RNN). Data fusion techniques based on i) Dempster-Shafer Evidential Theory (DST), ii) Genetic Algorithms (GA), and iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, a median daily false alarm (DFA) rate of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before the occurrence of events were 13.0 and 12.1 min for hypo- and hyperglycemic events, respectively. Compared to the cARX and RNN models, and to a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
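Root mean square error and time lag are the two headline metrics for the fused predictor. A simple stand-in for how they can be computed from a prediction and the reference CGM trace (5-minute samples; toy glucose values, not patient data) is:

```python
import math

def rmse(pred, ref):
    """Root mean square error between prediction and reference."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))

def time_lag(pred, ref, step_min=5, max_shift=6):
    """Lag estimate: the forward shift of the prediction (in samples) that
    minimizes the RMSE against the reference, converted to minutes."""
    best = min(range(max_shift + 1), key=lambda s: rmse(pred[s:], ref[:len(ref) - s]))
    return best * step_min

reference = [110, 118, 127, 135, 142, 148, 151, 149, 144, 138]   # toy CGM trace, mg/dL
prediction = [108, 111, 119, 128, 136, 143, 149, 152, 150, 145]  # prediction trailing by ~1 sample
print(f"RMSE = {rmse(prediction, reference):.1f} mg/dL, lag ≈ {time_lag(prediction, reference)} min")
```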
Abstract:
Many of the interesting physics processes to be measured at the LHC have a signature involving one or more isolated electrons. The electron reconstruction and identification efficiencies of the ATLAS detector at the LHC have been evaluated using proton–proton collision data collected in 2011 at √s = 7 TeV and corresponding to an integrated luminosity of 4.7 fb−1. Tag-and-probe methods using events with leptonic decays of W and Z bosons and J/ψ mesons are employed to benchmark these performance parameters. The combination of all measurements results in identification efficiencies determined with an accuracy at the few per mil level for electron transverse energy greater than 30 GeV.
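Once the tag-and-probe selection has produced a sample of unbiased probe electrons, the identification efficiency is, at its core, a pass fraction with a binomial uncertainty; the background subtraction and combination across W, Z and J/ψ channels performed in the ATLAS measurement are omitted from this toy sketch:

```python
import math

def tag_and_probe_efficiency(n_pass, n_total):
    """Pass fraction of probe electrons with a simple binomial uncertainty."""
    eff = n_pass / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

# Toy probe counts, e.g. from Z -> ee events in one transverse-energy bin
eff, err = tag_and_probe_efficiency(n_pass=9_420, n_total=10_000)
print(f"efficiency = {eff:.3f} +/- {err:.3f}")
```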