9 results for DETECTION PROBABILITY
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
We investigate the problem of distributed detection of sensor failures in networks with a small number of defective sensors, whose measurements differ significantly from those of their neighbors. We build on the sparse nature of the binary sensor failure signals to propose a novel distributed detection algorithm based on gossip mechanisms and on Group Testing (GT), which has so far been used in centralized detection problems. The new distributed GT algorithm estimates the set of scattered defective sensors with a low-complexity distance decoder from a small number of linearly independent binary messages exchanged by the sensors. We first consider networks with one defective sensor and determine the minimal number of linearly independent messages needed to detect it with high probability. We then extend our study to the detection of multiple defective sensors by appropriately modifying the message exchange protocol and the decoding procedure. We show that, for small and medium-sized networks, the number of messages required for successful detection is actually smaller than the theoretical minimum. Finally, simulations demonstrate that the proposed method outperforms methods based on random walks in terms of both detection performance and convergence rate.
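As a minimal illustration of the distance-decoder idea for the single-defective case (not the paper's actual protocol: the pooling matrix here is deterministic rather than built from gossip-exchanged messages, and the network size is invented), the defective sensor is the one whose test signature is closest in Hamming distance to the vector of pooled outcomes:

```python
import numpy as np

n_sensors, n_messages = 20, 5       # 2**5 = 32 > 20 distinct pooling patterns
defective = 7                       # hypothetical single defective sensor

# Deterministic pooling matrix: column j holds the binary digits of j, so every
# sensor has a unique test signature (a stand-in for the paper's linearly
# independent binary messages exchanged via gossip).
W = np.array([[(j >> i) & 1 for j in range(n_sensors)]
              for i in range(n_messages)])

f = np.zeros(n_sensors, dtype=int)  # sparse binary failure signal
f[defective] = 1
y = (W @ f > 0).astype(int)         # pooled test outcomes

# Low-complexity distance decoder: pick the sensor whose signature has the
# smallest Hamming distance to the outcome vector (zero for the defective one).
dists = (W != y[:, None]).sum(axis=0)
estimate = int(np.argmin(dists))
print(estimate)  # -> 7
```

With multiple defective sensors, the outcome vector becomes a bitwise OR of several signatures and the decoding step must be adapted, as the abstract notes.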
Abstract:
The derivation of probability estimates complementary to geophysical data sets has gained special attention in recent years. Information about the confidence level of the provided physical quantities is required to construct an error budget for higher-level products and to correctly interpret the final results of a particular analysis. For the generation of products based on satellite data, a common input is a cloud mask, which allows discrimination between surface and cloud signals. The surface information is further divided into snow and snow-free components. At any step of this discrimination process, a misclassification in the cloud/snow mask propagates to higher-level products and may reduce their usability. Within this scope, a novel probabilistic cloud mask (PCM) algorithm suited to the 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data is proposed, which provides three types of probability estimates: between cloudy/clear-sky, cloudy/snow and clear-sky/snow conditions. As opposed to the majority of available techniques, which are usually based on a decision-tree approach, the PCM algorithm uses all spectral, angular and ancillary information in a single step to retrieve probability estimates from precomputed look-up tables (LUTs). Moreover, the issue of deriving a single threshold value for a spectral test was overcome by the concept of a multidimensional information space, which is divided into small bins by an extensive set of intervals. The discrimination between snow and ice clouds and the detection of broken, thin clouds were enhanced by means of an invariant coordinate system (ICS) transformation. The study area covers a wide range of environmental conditions, spanning from Iceland through central Europe to the northern parts of Africa, which exhibit diverse difficulties for cloud/snow masking algorithms.
The retrieved PCM cloud classification was compared to the Polar Platform System (PPS) version 2012 and Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 cloud masks, SYNOP (surface synoptic observations) weather reports, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) vertical feature mask version 3, and the MODIS collection 5 snow mask. The outcomes of the conducted analyses demonstrated the good detection skill of the PCM method, with results comparable to or better than those of the reference PPS algorithm.
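The bin-based LUT retrieval can be sketched as follows; the two features, the bin edges and the LUT contents are invented stand-ins for the PCM's multidimensional information space and its precomputed probabilities:

```python
import numpy as np

# Toy 2-D feature space (e.g. a reflectance and a brightness-temperature
# difference); names, bin edges and bin counts are illustrative only.
edges_r = np.linspace(0.0, 1.0, 11)      # 10 bins on the reflectance axis
edges_bt = np.linspace(-10.0, 10.0, 21)  # 20 bins on the BT-difference axis

# Precomputed LUT: P(cloudy) per bin, here a synthetic smooth field instead
# of probabilities estimated from training data.
lut = np.fromfunction(lambda i, j: (i + j) / (9 + 19), (10, 20))

def cloud_probability(refl, bt_diff):
    """Look up P(cloudy) for one pixel by locating its bin in each dimension."""
    i = np.clip(np.searchsorted(edges_r, refl) - 1, 0, 9)
    j = np.clip(np.searchsorted(edges_bt, bt_diff) - 1, 0, 19)
    return lut[i, j]

print(round(float(cloud_probability(0.85, 4.0)), 3))  # -> 0.75
```

The single-step lookup replaces a chain of per-test thresholds: all features select one bin jointly, so no individual spectral threshold ever has to be fixed.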
Abstract:
The near-real-time retrieval of low stratiform cloud (LSC) coverage is of vital interest for disciplines such as meteorology, transport safety, economy and air quality. Within this scope, a novel methodology is proposed which provides LSC occurrence probability estimates for a satellite scene. The algorithm is suited to the 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data and was trained and validated against collocated SYNOP observations. Utilisation of these two combined data sources requires the formulation of constraints in order to discriminate cases where the LSC is overlaid by higher clouds. The LSC classification process is based on six features, which are first converted to integer form by step functions and then combined by means of bitwise operations. Consequently, a set of values reflecting a unique combination of those features is derived, which is further employed to extract the LSC occurrence probability estimates from precomputed look-up vectors (LUV). Although the validation analyses confirmed the good performance of the algorithm, some inevitable misclassifications with other optically thick clouds were reported. Moreover, the comparison against the Polar Platform System (PPS) cloud-type product revealed superior classification accuracy. From the temporal perspective, the acquired results revealed the presence of diurnal and annual LSC probability cycles over Europe.
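The step-function discretisation and bitwise packing described above can be sketched as follows; the feature values, threshold sets and the LUV entry are invented for illustration, only the mechanism (integer codes combined into one key that indexes a precomputed vector) mirrors the abstract:

```python
# Each feature is first discretised by a step function, then the integer
# codes are packed into one key with bitwise shifts; the key indexes the
# precomputed look-up vector (LUV) of occurrence probabilities.
def discretise(value, thresholds):
    """Step function: count how many thresholds the value exceeds."""
    return sum(value > t for t in thresholds)

def luv_key(features, thresholds_per_feature, bits_per_feature=3):
    key = 0
    for value, thresholds in zip(features, thresholds_per_feature):
        key = (key << bits_per_feature) | discretise(value, thresholds)
    return key

# Six features, each with its own (made-up) threshold set.
thresholds = [[0.2, 0.5], [270, 280, 290], [0.1], [1, 3], [0.0], [10, 20]]
features = [0.6, 285, 0.05, 2, -0.3, 25]
key = luv_key(features, thresholds)

luv = {key: 0.82}  # synthetic LUV entry: P(LSC) for this feature combination
print(key, luv[key])
```

Packing the codes into a single integer makes every unique feature combination addressable in one lookup, which is what enables the near-real-time retrieval.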
Abstract:
PURPOSE We prospectively assessed the diagnostic accuracy of diffusion-weighted magnetic resonance imaging for detecting significant prostate cancer. MATERIALS AND METHODS We performed a prospective study of 111 consecutive men with prostate and/or bladder cancer who underwent 3 Tesla diffusion-weighted magnetic resonance imaging of the pelvis without an endorectal coil before radical prostatectomy (78) or cystoprostatectomy (33). Three independent readers blinded to clinical and pathological data assigned a prostate cancer suspicion grade based on qualitative imaging analysis. Final pathology results of prostates with and without cancer served as the reference standard. Primary outcomes were the sensitivity and specificity of diffusion-weighted magnetic resonance imaging for detecting significant prostate cancer, with significance defined as a largest diameter of the index lesion of 1 cm or greater, extraprostatic extension, or Gleason score 7 or greater on final pathology assessment. Secondary outcomes were interreader agreement assessed by the Fleiss κ coefficient and image reading time. RESULTS Of the 111 patients, 93 had prostate cancer, which was significant in 80 and insignificant in 13, and 18 had no prostate cancer on final pathology results. The sensitivity and specificity of diffusion-weighted magnetic resonance imaging for detecting significant prostate cancer were 89% to 91% and 77% to 81%, respectively, for the 3 readers. Interreader agreement was good (Fleiss κ 0.65 to 0.74). Median reading time was between 13 and 18 minutes. CONCLUSIONS Diffusion-weighted magnetic resonance imaging (3 Tesla) is a noninvasive technique that allows for the detection of significant prostate cancer with high probability without contrast medium or an endorectal coil, and with good interreader agreement and a short reading time. This technique should be further evaluated as a tool to stratify patients with prostate cancer for individualized treatment options.
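The primary outcomes reduce to a standard confusion-table computation. The sketch below uses the study's denominators (80 significant cancers, 31 patients without significant cancer out of 111), but the individual cell counts are assumed for one hypothetical reader, chosen only to land inside the reported 89%-91% / 77%-81% ranges:

```python
# Confusion-table cells for one hypothetical reader (assumed values).
true_positives = 72   # significant cancers graded suspicious
false_negatives = 8   # significant cancers missed (80 total)
true_negatives = 24   # of the 31 patients without significant cancer
false_positives = 7

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```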
Abstract:
OBJECTIVE In contrast to conventional breast imaging techniques, one major diagnostic benefit of breast magnetic resonance imaging (MRI) is the simultaneous acquisition of morphologic and dynamic enhancement characteristics, which are based on angiogenesis and therefore provide insights into tumor pathophysiology. The aim of this investigation was to intraindividually compare 2 macrocyclic MRI contrast agents, with low risk for nephrogenic systemic fibrosis, in the morphologic and dynamic characterization of histologically verified mass breast lesions, analyzed by blinded human evaluation and a fully automatic computer-assisted diagnosis (CAD) technique. MATERIALS AND METHODS Institutional review board approval and patient informed consent were obtained. In this prospective, single-center study, 45 women with 51 histopathologically verified (41 malignant, 10 benign) mass lesions underwent 2 identical examinations at 1.5 T (mean time interval, 2.1 days) with 0.1 mmol/kg doses of gadoteric acid and gadobutrol. All magnetic resonance images were visually evaluated by 2 experienced, blinded breast radiologists in consensus and by an automatic CAD system, with the morphologic and dynamic characterization as well as the final human classification of lesions performed based on the categories of the Breast Imaging Reporting and Data System MRI atlas. Lesions were also classified by the CAD system according to their probability of malignancy (morpho-dynamic index; 0%-100%). Imaging results were correlated with histopathology as the gold standard. RESULTS The CAD system coded 49 of 51 lesions with gadoteric acid and gadobutrol (detection rate, 96.1%); the initial signal increase was significantly higher for gadobutrol than for gadoteric acid for all lesions and for the malignant coded lesions (P < 0.05). Gadoteric acid resulted in more postinitial washout curves and fewer continuous increases for all lesions and the malignant lesions compared with gadobutrol (CAD hot spot regions, P < 0.05).
Morphologically, the margins of the malignancies differed between the 2 agents, with gadobutrol demonstrating more spiculated and fewer smooth margins (P < 0.05). Lesion classifications by the human observers and by the morpho-dynamic index compared with the histopathologic results did not differ significantly between gadoteric acid and gadobutrol. CONCLUSIONS Macrocyclic contrast media can be reliably used for breast dynamic contrast-enhanced MRI. However, gadoteric acid and gadobutrol differed in some dynamic and morphologic characteristics of histologically verified breast lesions in an intraindividual comparison. Besides the standardization of technical parameters and imaging evaluation in breast MRI, standardization of the applied contrast medium seems to be important to obtain optimally comparable MRI interpretations.
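The washout-versus-continuous-increase distinction mentioned above is the usual kinetic curve typing in breast DCE-MRI. A minimal sketch, noting that the ±10% cut-off is a common convention and not a value taken from this study:

```python
# Kinetic curve typing for DCE-MRI (sketch): the post-initial course is
# "washout", "plateau" or "persistent" (continuous increase) depending on
# the late-phase signal change relative to the initial peak.
def curve_type(initial_peak, late_signal, tolerance=0.10):
    change = (late_signal - initial_peak) / initial_peak
    if change <= -tolerance:
        return "washout"       # signal drops after the initial enhancement
    if change >= tolerance:
        return "persistent"    # signal keeps rising
    return "plateau"

print(curve_type(100, 85))   # -> washout
print(curve_type(100, 120))  # -> persistent
```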
Abstract:
BACKGROUND The accuracy of CT pulmonary angiography (CTPA) in detecting or excluding pulmonary embolism has not yet been assessed in patients with high body weight (BW). METHODS This retrospective study involved CTPAs of 114 patients weighing 75-99 kg and of 123 consecutive patients weighing 100-150 kg. Three independent blinded radiologists analyzed all examinations in randomized order. The readers' findings on pulmonary emboli were compared with a composite reference standard comprising clinical probability, the reference CTPA result, additional imaging when performed, and 90-day follow-up. Results in the two BW groups and in two body mass index (BMI) groups (BMI < 30 kg/m² and BMI ≥ 30 kg/m², i.e., non-obese and obese patients) were compared. RESULTS The prevalence of pulmonary embolism was not significantly different between the BW groups (P = 1.0). The reference CTPA result was positive in 23 of 114 patients in the 75-99 kg group and in 25 of 123 patients in the ≥ 100 kg group (odds ratio, 0.991; 95% confidence interval, 0.501 to 1.957; P = 1.0). No pulmonary embolism-related death or venous thromboembolism occurred during follow-up. The mean accuracy of the three readers was 91.5% in the 75-99 kg group and 89.9% in the ≥ 100 kg group (odds ratio, 1.207; 95% confidence interval, 0.451 to 3.255; P = 0.495), and 89.9% in non-obese and 91.2% in obese patients (odds ratio, 0.853; 95% confidence interval, 0.317 to 2.319; P = 0.816). CONCLUSION The diagnostic accuracy of CTPA in patients weighing 75-99 kg and in those weighing 100-150 kg was not significantly different.
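The first odds ratio can be reproduced from the counts given in the abstract (23/114 vs 25/123). The Woolf logit confidence interval below is an approximation and will not exactly match the reported 0.501-1.957 interval, which was presumably computed by a different (likely exact) method:

```python
import math

# 2x2 table for a positive reference CTPA in the two weight groups.
a, b = 23, 114 - 23   # 75-99 kg group: positive / negative
c, d = 25, 123 - 25   # >=100 kg group: positive / negative

odds_ratio = (a / b) / (c / d)

# Woolf approximation: 95% CI on the log-odds scale.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR {odds_ratio:.3f}, approx 95% CI {lo:.3f}-{hi:.3f}")
```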
Abstract:
Any image-processing object detection algorithm somehow tries to integrate the object light (Recognition Step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (Decision Step). There are various ways in which these two basic steps can be realized, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computational effort. In reality, a gain in sensitivity is usually only possible at the cost of decision accuracy and higher computational effort. Automatic detection of faint streaks therefore remains a challenge. This paper presents a detection algorithm using spatial filters that simulate the geometrical form of possible streaks on a CCD image, realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution answers are accepted or rejected according to an overall threshold given by the background statistics. As a first result, this approach yields a huge number of accepted answers due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics, and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show very promising sensitivity, reliability and running speed for this detection method. Since all method parameters are based on statistics, the true-alarm as well as the false-alarm probability are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware or on the image acquisition process.
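A minimal sketch of the core step, assuming a synthetic image, a single horizontal filter orientation and a made-up (but typical) 5-sigma background threshold; the full method varies filter length and orientation and adds the extra acceptance criteria described above:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, size=(64, 64))  # background noise
img[30, 10:40] += 3.0                      # faint horizontal streak

# One spatial filter of the bank: a unit-norm horizontal line segment of
# length L. Convolving with it integrates the streak light along its
# direction, boosting the streak's signal-to-noise ratio.
L = 15
kernel = np.ones(L) / np.sqrt(L)
resp = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="valid"), 1, img)

# Accept convolution answers above a threshold derived from the image
# statistics (mean + 5 sigma of the filtered responses).
thresh = resp.mean() + 5.0 * resp.std()
hits = np.argwhere(resp > thresh)
print(hits[:, 0])  # detected positions all lie on the streak row
```

Because the kernel has unit norm, the filtered background stays close to the original noise level, so the threshold translates directly into a controllable false-alarm probability.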
Abstract:
The aim of this study was to test a newly developed LED-based fluorescence device for approximal caries detection in vitro. We assembled 120 extracted molars without frank cavitations or fillings pairwise in order to create contact areas. The teeth were independently assessed by two examiners using visual caries detection (International Caries Detection and Assessment System, ICDAS), bitewing radiography (BW), laser fluorescence (LFpen), and LED fluorescence (Midwest Caries I.D., MW). The measurements were repeated at least 1 week later. The diagnostic performance was calculated with Bayesian analyses. Post-test probabilities were calculated in order to judge the diagnostic performance of combined methods. Reliability analyses were performed using kappa statistics for nominal data and the intraclass correlation coefficient (ICC) for absolute data. Histology served as the gold standard. Sensitivities/specificities at the enamel threshold were 0.33/0.84 for ICDAS, 0.23/0.86 for BW, 0.47/0.78 for LFpen, and 0.32/0.87 for MW. Sensitivities/specificities at the dentine threshold were 0.04/0.89 for ICDAS, 0.27/0.94 for BW, 0.39/0.84 for LFpen, and 0.07/0.96 for MW. Reliability data were fair to moderate for MW and good for BW and LFpen. The combination of ICDAS and radiography yielded the best diagnostic performance (post-test probability of 0.73 at the dentine threshold). The newly developed LED device cannot be recommended for approximal caries detection. There might be too much signal loss during signal transduction from the occlusal aspect to the proximal lesion site and back.
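A post-test probability chains Bayes' rule through likelihood ratios. The sketch below uses the dentine-threshold sensitivities/specificities from the abstract for ICDAS and BW, but assumes a 0.5 pretest probability and independence of the two tests, so it will not reproduce the reported 0.73 exactly:

```python
# Post-test probability after a positive test, via the positive
# likelihood ratio LR+ = sensitivity / (1 - specificity).
def post_test(pretest, sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr_pos
    return post_odds / (1 + post_odds)

# Dentine-threshold figures from the abstract; 0.5 pretest is assumed.
p = post_test(0.5, 0.04, 0.89)  # positive ICDAS
p = post_test(p, 0.27, 0.94)    # then positive bitewing
print(round(p, 2))  # -> 0.62
```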
Abstract:
Is Benford's law a good instrument to detect fraud in reports of statistical and scientific data? For a valid test, the probabilities of "false positives" and "false negatives" have to be low. However, it is very doubtful whether the Benford distribution is an appropriate tool to discriminate between manipulated and non-manipulated estimates. Further research should focus more on the validity of the test, and test results should be interpreted more carefully.
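A typical Benford fraud screen compares observed leading digits with the Benford distribution via a chi-squared statistic. The sketch below uses synthetic data; as the abstract cautions, a large statistic by itself does not establish manipulation, since many legitimate data sets are simply not Benford-distributed:

```python
import math
from collections import Counter

def first_digit(x):
    """Leading decimal digit of a nonzero number."""
    s = f"{abs(x):.15g}".lstrip("0.")
    return int(s[0])

def benford_chi2(values):
    """Chi-squared distance of the first-digit counts from Benford's law,
    where P(d) = log10(1 + 1/d) for d = 1..9."""
    n = len(values)
    counts = Counter(first_digit(v) for v in values)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2

# Exponential growth data follow Benford closely; uniform data do not --
# yet neither set is fraudulent, illustrating the false-positive risk.
benford_like = [1.05 ** k for k in range(500)]
uniform_like = [100 + k for k in range(500)]
print(benford_chi2(benford_like) < benford_chi2(uniform_like))  # True
```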