985 results for DETECTION PROBABILITY


Relevance: 30.00%

Abstract:

Background: Smear-negative pulmonary tuberculosis (SNPTB) accounts for 30% of pulmonary tuberculosis (PTB) cases reported annually in developing nations. Polymerase chain reaction (PCR) may provide an alternative for the rapid detection of Mycobacterium tuberculosis (MTB); however, little data are available regarding the clinical utility of PCR in SNPTB in a setting with a high burden of TB/HIV co-infection. Methods: To evaluate the performance of the PCR dot-blot in parallel with pretest probability (clinical suspicion) in patients suspected of having SNPTB, a prospective study of 213 individuals with clinical and radiological suspicion of SNPTB was carried out from May 2003 to May 2004 in a TB/HIV reference hospital. Respiratory specialists classified the pretest probability of active disease as high, intermediate, or low. Expectorated sputum was examined by direct microscopy (Ziehl-Neelsen staining), culture (Lowenstein-Jensen) and PCR dot-blot. The gold standard was based on culture positivity combined with the clinical definition of PTB. Results: In smear-negative and HIV-infected subjects, active PTB was diagnosed in 28.4% (43/151) and 42.2% (19/45), respectively. In the high, intermediate and low pretest probability categories, active PTB was diagnosed in 67.4% (31/46), 24% (6/25) and 7.5% (6/80), respectively. PCR had a sensitivity of 65% (95% CI: 50%-78%) and a specificity of 83% (95% CI: 75%-89%). There was no difference in the sensitivity of PCR in relation to HIV status. PCR sensitivity and specificity among patients not previously treated for TB and those treated in the past were, respectively, 69% and 43% (sensitivity) and 85% and 80% (specificity). The high pretest probability, when used as a diagnostic test, had a sensitivity of 72% (95% CI: 57%-84%) and a specificity of 86% (95% CI: 78%-92%). Using the PCR dot-blot in parallel with high pretest probability as a diagnostic test, the sensitivity, specificity, positive and negative predictive values were 90%, 71%, 75% and 88%, respectively.
Among patients not previously treated for TB and among HIV-infected subjects, this approach had sensitivity, specificity, positive and negative predictive values of 91%, 79%, 81% and 90%, and 90%, 65%, 72% and 88%, respectively. Conclusion: The PCR dot-blot associated with a high clinical suspicion may provide an important contribution to the diagnosis of SNPTB, mainly in patients without previous TB treatment attended at a TB/HIV reference hospital.
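
The parallel-combination figures above follow from the standard rule for two tests read in parallel (positive if either test is positive), under the assumption that the tests err independently given disease status. A minimal sketch, not the study's own computation:

```python
def combine_parallel(se1, sp1, se2, sp2):
    """Sensitivity/specificity of two tests read in parallel (positive
    if either test is positive), assuming the tests err independently
    given disease status."""
    se = 1 - (1 - se1) * (1 - se2)   # a case is missed only if both tests miss
    sp = sp1 * sp2                   # a non-case is negative only on both tests
    return se, sp

# PCR dot-blot (Se 65%, Sp 83%) combined with high pretest probability
# treated as a test (Se 72%, Sp 86%), as in the abstract:
se, sp = combine_parallel(0.65, 0.83, 0.72, 0.86)
print(round(se, 2), round(sp, 2))
```

The result is close to the reported 90% sensitivity and 71% specificity.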

Relevance: 30.00%

Abstract:

Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network named wireless sensor networks (WSNs). The main features of such networks are: the nodes can be positioned randomly over a given field with a high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information to the data-retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical condition of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and for each of them suggests a solution to the related problems that accounts for this trade-off. The first scenario considers a network with a high number of nodes deployed in a given geographical area without detailed planning that have to transmit data toward a coordinator node, named the sink, which we assume to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a far receiver. It is assumed that each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle), and that the communication channels between the local nodes and the receiver are subject to fading and noise.
The receiver onboard the UAV must be able to fuse the weak and noisy signals in a coherent way to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread-spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of simultaneous transmission of the common message among the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the need for simple nodes (to this aim the computational complexity is moved to the receiver onboard the UAV), and the importance of guaranteeing high levels of energy efficiency of the network, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are the theoretical limit on the maximum amount of data that can be collected by the receiver, as well as the error probability with a given modulation scheme. Since we deal with a WSN, both of these performance metrics are evaluated taking into consideration the energy efficiency of the network. The second scenario considers the use of a chain network for the detection of fires, using nodes that have the double function of sensors and routers. The first function is the monitoring of a temperature parameter that allows a local binary decision on target (fire) absent/present. The second function is that each node receives the decision made by the previous node of the chain, compares it with its own observation of the phenomenon, and transmits the final result to the next node. The chain ends at the sink node, which transmits the received decision to the user.
In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observation and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was realized and tested in a six-month on-field experiment.
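
The chain trade-off can be illustrated with a deliberately simple fusion rule, an OR of each node's local decision with the incoming bit; the actual fusion rules studied in the thesis are more elaborate, and the independence of the local decisions here is an assumption:

```python
def chain_or_fusion(p_d, p_fa, n_nodes):
    """Final detection / false-alarm probability of a serial chain in
    which every node ORs its own binary decision with the bit received
    from the previous node and forwards the result. Local decisions are
    assumed independent, with per-node detection probability p_d and
    false-alarm rate p_fa (an illustrative rule, not the thesis's)."""
    pd_chain, pfa_chain = p_d, p_fa
    for _ in range(n_nodes - 1):
        pd_chain = 1 - (1 - pd_chain) * (1 - p_d)      # missed only if all miss
        pfa_chain = 1 - (1 - pfa_chain) * (1 - p_fa)   # any false trigger sticks
    return pd_chain, pfa_chain

# Five-node chain: detection improves along the chain, but false
# alarms accumulate, which is exactly the trade-off to be managed.
pd, pfa = chain_or_fusion(p_d=0.7, p_fa=0.05, n_nodes=5)
print(round(pd, 3), round(pfa, 3))
```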

Relevance: 30.00%

Abstract:

Precipitation retrieval over high latitudes, particularly snowfall retrieval over ice and snow using satellite-based passive microwave radiometers, is currently an unsolved problem. The challenge results from the large variability of microwave emissivity spectra for snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work focuses on the investigation of a new snowfall detection algorithm specific to high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units A and B (on NOAA-16) and the infrared spectrometer MODIS (on AQUA) have been co-located for 365 days, from October 1st, 2006 to September 30th, 2007. CloudSat products have been used as the truth to calibrate and validate all the proposed algorithms. The methodological approach followed can be summarised in two steps. In the first step, an empirical search for a threshold aimed at discriminating the no-snow case was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce appropriate results, a more statistically sound approach was attempted. Two different techniques, which allow computing the probability above and below a brightness temperature (BT) threshold, have been applied to the available data. The first technique is based upon a logistic distribution to represent the probability of snow given the predictors. The second technique, termed the Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique not requiring any hypothesis on the shape of the probabilistic model (such as, for instance, the logistic one), and only requires the estimation of the BT thresholds.
The results obtained show that both proposed methods are able to discriminate snowing and non-snowing conditions over the polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
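
The first technique can be sketched as a logistic model mapping brightness temperatures to a snow probability; the coefficients below are hypothetical placeholders, not the fitted values from the study:

```python
import math

def snow_probability(bt, coeffs, intercept):
    """Logistic model P(snow | brightness temperatures): the linear
    predictor is passed through the logistic function. Coefficients
    here are hypothetical; in the study they are fitted against the
    CloudSat 'truth'."""
    z = intercept + sum(c * t for c, t in zip(coeffs, bt))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical two-channel case: colder brightness temperatures push
# the predictor up, i.e. towards a higher snowfall probability.
p = snow_probability(bt=[245.0, 230.0], coeffs=[-0.05, -0.04], intercept=22.0)
print(round(p, 3))
```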

Relevance: 30.00%

Abstract:

This thesis tackles the problem of the automated detection of the atmospheric boundary layer (BL) height, h, from aerosol lidar/ceilometer observations. A new method, the Bayesian Selective Method (BSM), is presented. It implements a Bayesian statistical inference procedure which combines different sources of information in a statistically optimal way. First, atmospheric stratification boundaries are located from discontinuities in the ceilometer backscattered signal. The BSM then identifies the discontinuity edge that has the highest probability of effectively marking the BL height. Information from contemporaneous physical boundary layer model simulations and a climatological dataset of BL height evolution are combined in the assimilation framework to assist this choice. The BSM algorithm has been tested on four months of continuous ceilometer measurements collected during the BASE:ALFA project and is shown to realistically diagnose the BL depth evolution in many different weather conditions. The BASE:ALFA dataset is then used to investigate the boundary layer structure in stable conditions. Functions from the Obukhov similarity theory are used as regression curves to fit observed velocity and temperature profiles in the lower half of the stable boundary layer. Surface fluxes of heat and momentum are the best-fitting parameters in this exercise and are compared with those measured by a sonic anemometer. The comparison shows remarkable discrepancies, more evident in cases for which the bulk Richardson number turns out to be quite large. This analysis supports earlier results indicating that surface turbulent fluxes are not the appropriate scaling parameters for profiles of mean quantities in very stable conditions. One practical consequence is that boundary layer height diagnostic formulations which rely mainly on surface fluxes disagree with estimates obtained by inspecting co-located radiosounding profiles.
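
The core idea of the BSM, weighting candidate backscatter discontinuities by a prior built from model and climatological BL heights, can be sketched as follows; the Gaussian prior and the use of gradient strength as a likelihood proxy are illustrative assumptions, not the thesis's exact formulation:

```python
import math

def select_bl_height(candidates, gradients, prior):
    """Pick the candidate BL height maximising posterior = likelihood
    * prior: candidate edges come from backscatter discontinuities,
    their gradient strength serves as a likelihood proxy, and a
    Gaussian prior encodes the model/climatological expectation."""
    mu, sigma = prior   # expected BL height and spread, in metres
    def posterior(h, g):
        return g * math.exp(-0.5 * ((h - mu) / sigma) ** 2)
    return max(zip(candidates, gradients), key=lambda hg: posterior(*hg))[0]

# The strongest gradient (1200 m) lies far from the prior (500 +/- 200 m),
# so the weaker but climatologically plausible 450 m edge is selected:
h = select_bl_height([450, 800, 1200], [0.8, 0.5, 1.0], prior=(500, 200))
print(h)
```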

Relevance: 30.00%

Abstract:

The acoustic emission (AE) technique, one of the non-intrusive and nondestructive evaluation techniques, acquires and analyzes the signals emitted by the deformation or fracture of materials/structures under service loading. The AE technique has been successfully applied to damage detection in various materials such as metals, alloys, concrete, polymers and other composite materials. In this study, the AE technique was used for detecting crack behavior within concrete specimens under mechanical and environmental frost loadings. The instrumentation of the AE system used in this study includes a low-frequency AE sensor, a computer-based data acquisition device and a preamplifier linking the AE sensor and the data acquisition device. The AE system purchased from Mistras Group was used in this study. The AE technique was applied to detect damage in the following laboratory tests: the pencil-lead test, the mechanical three-point single-edge notched beam bending (SEB) test, and the freeze-thaw damage test. Firstly, the pencil-lead test was conducted to verify the attenuation of AE signals through concrete materials, and the value of the attenuation was quantified. The obtained signals also indicated that the AE system was properly set up to detect damage in concrete. Secondly, the SEB test with a lab-prepared concrete beam was conducted by employing a Mechanical Testing System (MTS) and the AE system. The cumulative AE events and the measured loading curves, both using the crack-tip opening displacement (CTOD) as the horizontal coordinate, were plotted. It was found that the detected AE events were qualitatively correlated with the global force-displacement behavior of the specimen. The Weibull distribution was proposed to quantitatively describe the rupture probability density function.
A linear regression analysis was conducted to calibrate the Weibull distribution parameters with the detected AE signals and to predict the rupture probability as a function of CTOD for the specimen. Finally, controlled concrete freeze-thaw cyclic tests were designed, and the AE technique was planned to be used to investigate the internal frost damage process of concrete specimens.
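
The calibration step can be sketched as an ordinary least-squares fit of the linearised two-parameter Weibull law; the data below are synthetic, generated from a known distribution to show that the regression recovers its parameters:

```python
import math

def fit_weibull_linear(ctod, fail_prob):
    """Calibrate the two-parameter Weibull rupture law
    F(x) = 1 - exp(-(x/lam)**k) by ordinary least squares on its
    linearised form ln(-ln(1-F)) = k*ln(x) - k*ln(lam)."""
    xs = [math.log(x) for x in ctod]
    ys = [math.log(-math.log(1.0 - f)) for f in fail_prob]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    k = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    lam = math.exp(mx - my / k)   # from the intercept b = -k*ln(lam)
    return k, lam

# Synthetic rupture probabilities from a known Weibull (k=2, lam=0.1):
ctod = [0.05, 0.10, 0.15, 0.20]
probs = [1.0 - math.exp(-(x / 0.1) ** 2) for x in ctod]
k, lam = fit_weibull_linear(ctod, probs)
print(round(k, 3), round(lam, 3))
```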

Relevance: 30.00%

Abstract:

We investigate the problem of distributed sensor-failure detection in networks with a small number of defective sensors, whose measurements differ significantly from neighboring measurements. We build on the sparse nature of the binary sensor-failure signals to propose a novel distributed detection algorithm based on gossip mechanisms and on group testing (GT), where the latter has so far been used in centralized detection problems. The new distributed GT algorithm estimates the set of scattered defective sensors with a low-complexity distance decoder from a small number of linearly independent binary messages exchanged by the sensors. We first consider networks with one defective sensor and determine the minimal number of linearly independent messages needed for its detection with high probability. We then extend our study to the detection of multiple defective sensors by modifying the message exchange protocol and the decoding procedure appropriately. We show that, for small and medium-sized networks, the number of messages required for successful detection is actually smaller than the minimal number computed theoretically. Finally, simulations demonstrate that the proposed method outperforms methods based on random walks in terms of both detection performance and convergence rate.
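
For the single-defective case, the distance decoder reduces to picking the sensor whose column of the binary test matrix is closest in Hamming distance to the observed outcomes. The sketch below is centralised and noiseless; in the paper the messages are disseminated by gossip:

```python
def gt_decode_single(W, y):
    """Distance decoder for one defective sensor: given the binary test
    matrix W (row = which sensors contributed to a message) and the OR
    outcomes y, return the sensor whose column is closest to y in
    Hamming distance. Centralised, noiseless sketch of the idea."""
    n = len(W[0])
    def dist(j):
        return sum(row[j] != yi for row, yi in zip(W, y))
    return min(range(n), key=dist)

# Deterministic test matrix: column j holds the 4-bit binary code of
# sensor index j, so all 10 columns are distinct and decoding is unique.
n_sensors = 10
W = [[(j >> b) & 1 for j in range(n_sensors)] for b in range(4)]
y = [row[3] for row in W]   # with a single defective sensor (index 3),
                            # the OR outcomes equal its column exactly
found = gt_decode_single(W, y)
print(found)
```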

Relevance: 30.00%

Abstract:

The derivation of probability estimates complementary to geophysical data sets has gained special attention in recent years. Information about the confidence level of provided physical quantities is required to construct an error budget of higher-level products and to correctly interpret the final results of a particular analysis. Regarding the generation of products based on satellite data, a common input consists of a cloud mask which allows discrimination between surface and cloud signals; the surface information is further divided between snow and snow-free components. At any step of this discrimination process, a misclassification in a cloud/snow mask propagates to higher-level products and may alter their usability. Within this scope, a novel probabilistic cloud mask (PCM) algorithm suited for the 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data is proposed, which provides three types of probability estimates: between cloudy and clear-sky, between cloudy and snow, and between clear-sky and snow conditions. As opposed to the majority of available techniques, which are usually based on a decision-tree approach, in the PCM algorithm all spectral, angular and ancillary information is used in a single step to retrieve probability estimates from precomputed look-up tables (LUTs). Moreover, the issue of deriving a single threshold value for a spectral test was overcome by the concept of a multidimensional information space, which is divided into small bins by an extensive set of intervals. The discrimination between snow and ice clouds and the detection of broken, thin clouds were enhanced by means of an invariant coordinate system (ICS) transformation. The study area covers a wide range of environmental conditions, spanning from Iceland through central Europe to the northern parts of Africa, which exhibit diverse difficulties for cloud/snow masking algorithms.
The retrieved PCM cloud classification was compared to the Polar Platform System (PPS) version 2012 and Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 cloud masks, SYNOP (surface synoptic observations) weather reports, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) vertical feature mask version 3, and the MODIS collection 5 snow mask. The outcomes of the conducted analyses demonstrated the good detection skill of the PCM method, with results comparable to or better than the reference PPS algorithm.

Relevance: 30.00%

Abstract:

The near-real-time retrieval of low stratiform cloud (LSC) coverage is of vital interest for disciplines such as meteorology, transport safety, economy and air quality. Within this scope, a novel methodology is proposed which provides LSC occurrence probability estimates for a satellite scene. The algorithm is suited for the 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data and was trained and validated against collocated SYNOP observations. Utilisation of these two combined data sources requires the formulation of constraints in order to discriminate cases where the LSC is overlaid by higher clouds. The LSC classification process is based on six features which are first converted to integer form by step functions and then combined by means of bitwise operations. Consequently, a set of values reflecting a unique combination of those features is derived, which is further employed to extract the LSC occurrence probability estimates from precomputed look-up vectors (LUV). Although the validation analyses confirmed the good performance of the algorithm, some inevitable misclassifications with other optically thick clouds were reported. Moreover, the comparison against the Polar Platform System (PPS) cloud-type product revealed superior classification accuracy. From the temporal perspective, the acquired results revealed the presence of diurnal and annual LSC probability cycles over Europe.
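
The step-function and bitwise combination stage can be sketched as follows; the bin edges and bit widths are illustrative, not the algorithm's actual configuration:

```python
def encode_features(values, thresholds):
    """Convert continuous features to integers via step functions and
    pack them into one key with bitwise shifts; the key then indexes a
    precomputed look-up vector (LUV) of occurrence probabilities.
    Bin edges and bit widths here are illustrative, not the algorithm's."""
    key, shift = 0, 0
    for v, edges in zip(values, thresholds):
        level = sum(v > e for e in edges)          # step function -> integer
        bits = max(1, len(edges).bit_length())     # room for all levels
        key |= level << shift                      # bitwise combination
        shift += bits
    return key

# Two hypothetical features, each discretised into three levels:
key = encode_features([0.42, 271.0], [(0.2, 0.6), (265.0, 275.0)])
print(key)   # both features land in the middle bin
```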

Relevance: 30.00%

Abstract:

PURPOSE: We prospectively assessed the diagnostic accuracy of diffusion-weighted magnetic resonance imaging for detecting significant prostate cancer. MATERIALS AND METHODS: We performed a prospective study of 111 consecutive men with prostate and/or bladder cancer who underwent 3 Tesla diffusion-weighted magnetic resonance imaging of the pelvis without an endorectal coil before radical prostatectomy (78) or cystoprostatectomy (33). Three independent readers blinded to clinical and pathological data assigned a prostate cancer suspicion grade based on qualitative imaging analysis. Final pathology results of prostates with and without cancer served as the reference standard. Primary outcomes were the sensitivity and specificity of diffusion-weighted magnetic resonance imaging for detecting significant prostate cancer, with significance defined as a largest diameter of the index lesion of 1 cm or greater, extraprostatic extension, or Gleason score 7 or greater on final pathology assessment. Secondary outcomes were interreader agreement, assessed by the Fleiss κ coefficient, and image reading time. RESULTS: Of the 111 patients, 93 had prostate cancer, which was significant in 80 and insignificant in 13, and 18 had no prostate cancer on final pathology results. The sensitivity and specificity of diffusion-weighted magnetic resonance imaging for detecting significant prostate cancer were 89% to 91% and 77% to 81%, respectively, for the 3 readers. Interreader agreement was good (Fleiss κ 0.65 to 0.74). Median reading time was between 13 and 18 minutes. CONCLUSIONS: Diffusion-weighted magnetic resonance imaging (3 Tesla) is a noninvasive technique that allows for the detection of significant prostate cancer with high probability, without contrast medium or an endorectal coil, and with good interreader agreement and a short reading time. This technique should be further evaluated as a tool to stratify patients with prostate cancer for individualized treatment options.
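
The interreader agreement statistic quoted above is Fleiss' kappa, which compares the observed pairwise agreement with the agreement expected by chance; the toy ratings below are illustrative, not the study's data:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa: chance-corrected agreement for a fixed number of
    raters. `ratings` lists per-subject category counts; each row sums
    to the number of raters (3 readers here). Toy data, not the study's."""
    N = len(ratings)
    m = sum(ratings[0])                 # raters per subject
    k = len(ratings[0])                 # number of categories
    p_j = [sum(row[j] for row in ratings) / (N * m) for j in range(k)]
    P_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in ratings]
    P_bar = sum(P_i) / N                # mean observed agreement
    P_e = sum(p * p for p in p_j)       # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Four subjects rated 'suspicious' / 'not suspicious' by 3 readers:
kappa = fleiss_kappa([[3, 0], [2, 1], [0, 3], [3, 0]])
print(round(kappa, 3))
```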

Relevance: 30.00%

Abstract:

OBJECTIVE: In contrast to conventional breast imaging techniques, one major diagnostic benefit of breast magnetic resonance imaging (MRI) is the simultaneous acquisition of morphologic and dynamic enhancement characteristics, which are based on angiogenesis and therefore provide insights into tumor pathophysiology. The aim of this investigation was to intraindividually compare 2 macrocyclic MRI contrast agents, both with a low risk for nephrogenic systemic fibrosis, in the morphologic and dynamic characterization of histologically verified mass breast lesions, analyzed by blinded human evaluation and a fully automatic computer-assisted diagnosis (CAD) technique. MATERIALS AND METHODS: Institutional review board approval and patient informed consent were obtained. In this prospective, single-center study, 45 women with 51 histopathologically verified (41 malignant, 10 benign) mass lesions underwent 2 identical examinations at 1.5 T (mean time interval, 2.1 days) with 0.1 mmol/kg doses of gadoteric acid and gadobutrol. All magnetic resonance images were visually evaluated by 2 experienced, blinded breast radiologists in consensus and by an automatic CAD system, whereas the morphologic and dynamic characterization as well as the final human classification of lesions were performed based on the categories of the Breast Imaging Reporting and Data System MRI atlas. Lesions were also classified by defining their probability of malignancy (morpho-dynamic index; 0%-100%) by the CAD system. Imaging results were correlated with histopathology as the gold standard. RESULTS: The CAD system coded 49 of 51 lesions with both gadoteric acid and gadobutrol (detection rate, 96.1%); the initial signal increase was significantly higher for gadobutrol than for gadoteric acid for all lesions and for the malignant coded lesions (P < 0.05). Gadoteric acid resulted in more postinitial washout curves and fewer continuous increases for all and for the malignant lesions compared with gadobutrol (CAD hot spot regions, P < 0.05).
Morphologically, the margins of the malignancies differed between the 2 agents, with gadobutrol demonstrating more spiculated and fewer smooth margins (P < 0.05). Lesion classifications by the human observers and by the morpho-dynamic index, compared with the histopathologic results, did not significantly differ between gadoteric acid and gadobutrol. CONCLUSIONS: Macrocyclic contrast media can be reliably used for dynamic contrast-enhanced breast MRI. However, gadoteric acid and gadobutrol differed in some dynamic and morphologic characterizations of histologically verified breast lesions in an intraindividual comparison. Besides the standardization of technical parameters and imaging evaluation of breast MRI, the standardization of the applied contrast medium seems important to obtain optimally comparable MRI interpretations.

Relevance: 30.00%

Abstract:

BACKGROUND: The accuracy of CT pulmonary angiography (CTPA) in detecting or excluding pulmonary embolism has not yet been assessed in patients with high body weight (BW). METHODS: This retrospective study involved CTPAs of 114 patients weighing 75-99 kg and those of 123 consecutive patients weighing 100-150 kg. Three independent blinded radiologists analyzed all examinations in randomized order. The readers' data on pulmonary emboli were compared with a composite reference standard comprising clinical probability, the reference CTPA result, additional imaging when performed, and 90-day follow-up. Results in both BW groups and in two body mass index (BMI) groups (BMI < 30 kg/m² and BMI ≥ 30 kg/m², i.e., non-obese and obese patients) were compared. RESULTS: The prevalence of pulmonary embolism was not significantly different between the BW groups (P = 1.0). The reference CTPA result was positive in 23 of 114 patients in the 75-99 kg group and in 25 of 123 patients in the ≥ 100 kg group (odds ratio, 0.991; 95% confidence interval, 0.501 to 1.957; P = 1.0). No pulmonary embolism-related death or venous thromboembolism occurred during follow-up. The mean accuracy of the three readers was 91.5% in the 75-99 kg group and 89.9% in the ≥ 100 kg group (odds ratio, 1.207; 95% confidence interval, 0.451 to 3.255; P = 0.495), and 89.9% in non-obese patients and 91.2% in obese patients (odds ratio, 0.853; 95% confidence interval, 0.317 to 2.319; P = 0.816). CONCLUSION: The diagnostic accuracy of CTPA in patients weighing 75-99 kg or 100-150 kg proved not to be significantly different.
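
The first odds ratio quoted above can be reproduced from the 2x2 counts; the Wald-type interval below is a common construction and may differ slightly from the CI method used in the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table (a,b = events/non-events in group 1;
    c,d = events/non-events in group 2) with a Wald-type confidence
    interval computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Positive reference CTPA: 23/114 (75-99 kg) vs 25/123 (>= 100 kg):
or_, lo, hi = odds_ratio_ci(23, 114 - 23, 25, 123 - 25)
print(round(or_, 3), round(lo, 3), round(hi, 3))
```

The point estimate matches the reported 0.991; the Wald limits come out close to, though not identical with, the published 0.501 to 1.957.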

Relevance: 30.00%

Abstract:

Any image-processing object detection algorithm somehow tries to integrate the object light (recognition step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (decision step). There are various possibilities for how these two basic steps can be realized, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computation effort. In reality, a gain in sensitivity is usually only possible with a loss in decision accuracy and with a higher computational effort, so the automatic detection of faint streaks is still a challenge. This paper presents a detection algorithm using spatial filters simulating the geometrical form of possible streaks on a CCD image, realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution answers are accepted or rejected according to an overall threshold given by the background statistics. As a first result, this approach yields a huge number of accepted answers due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics, and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show a very promising sensitivity, reliability and running speed for this detection method. Since all method parameters are based on statistics, the true-alarm as well as the false-alarm probability are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware or on the image acquisition process.
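
The matched-filter idea, convolving the image with kernels shaped like streaks of varying length and orientation, can be sketched in a few lines; the toy image and kernel sizes below are illustrative:

```python
def line_kernel(length, horizontal=True):
    """Matched filter for a straight streak: a normalised line of
    weights inside a square kernel. Only two orientations here; the
    method varies both the length and the angle of the line."""
    k = [[0.0] * length for _ in range(length)]
    mid = length // 2
    for i in range(length):
        if horizontal:
            k[mid][i] = 1.0 / length
        else:
            k[i][mid] = 1.0 / length
    return k

def convolve_valid(img, ker):
    """Plain 'valid' 2-D correlation, the image-convolution step of
    the streak detector."""
    kh, kw = len(ker), len(ker[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(ker[i][j] * img[r + i][c + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# Toy 5x5 frame: uniform background with a faint horizontal streak on
# row 2. The filter response peaks where kernel and streak align; a
# threshold from the background statistics would then accept the peak.
img = [[1.0] * 5 for _ in range(5)]
for c in range(5):
    img[2][c] += 4.0
resp = convolve_valid(img, line_kernel(3, horizontal=True))
peak = max(max(row) for row in resp)
print(round(peak, 2))
```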

Relevance: 30.00%

Abstract:

The aim of this study was to test a newly developed LED-based fluorescence device for approximal caries detection in vitro. We assembled 120 extracted molars without frank cavitations or fillings pairwise in order to create contact areas. The teeth were independently assessed by two examiners using visual caries detection (International Caries Detection and Assessment System, ICDAS), bitewing radiography (BW), laser fluorescence (LFpen), and LED fluorescence (Midwest Caries I.D., MW). The measurements were repeated at least 1 week later. The diagnostic performance was calculated with Bayesian analyses, and post-test probabilities were calculated in order to judge the diagnostic performance of combined methods. Reliability analyses were performed using kappa statistics for nominal data and intraclass correlation (ICC) for absolute data. Histology served as the gold standard. Sensitivities/specificities at the enamel threshold were 0.33/0.84 for ICDAS, 0.23/0.86 for BW, 0.47/0.78 for LFpen, and 0.32/0.87 for MW. Sensitivities/specificities at the dentine threshold were 0.04/0.89 for ICDAS, 0.27/0.94 for BW, 0.39/0.84 for LFpen, and 0.07/0.96 for MW. Reliability data were fair to moderate for MW and good for BW and LFpen. The combination of ICDAS and radiography yielded the best diagnostic performance (post-test probability of 0.73 at the dentine threshold). The newly developed LED device cannot be recommended for approximal caries detection; there may be too much signal loss during signal transduction from the occlusal aspect to the proximal lesion site and back.
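
The post-test probabilities mentioned above follow Bayes' rule on the odds scale via the positive likelihood ratio; the pretest probability below is a hypothetical value, so the result is illustrative rather than the study's 0.73:

```python
def post_test_probability(pretest, sensitivity, specificity):
    """Post-test probability after a positive result: convert the
    pretest probability to odds, multiply by the positive likelihood
    ratio LR+ = Se / (1 - Sp), and convert back to a probability."""
    lr_pos = sensitivity / (1.0 - specificity)
    pre_odds = pretest / (1.0 - pretest)
    post_odds = pre_odds * lr_pos
    return post_odds / (1.0 + post_odds)

# Hypothetical pretest probability of 30%, followed by a positive
# bitewing radiograph at the dentine threshold (Se 0.27, Sp 0.94):
p = post_test_probability(0.30, 0.27, 0.94)
print(round(p, 2))
```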

Relevance: 30.00%

Abstract:

The cause of infection in about a third of all travelers' diarrhea patients studied is not identified. Stools of these diarrhea patients tested for known enteric pathogens are negative and are classified as pathogen-negative stools. We proposed that this third of diarrhea patients might involve not only presently unknown pathogens, but also known pathogens that go undetected. Conventionally, a probability sample of five E. coli colonies is used to detect enterotoxigenic E. coli (ETEC) and other diarrhea-producing E. coli in stool cultures. We compared this conventional method of testing five E. coli colonies with testing up to twenty E. coli colonies. Testing up to fifteen E. coli colonies detected about twice as many ETEC cases as testing five colonies: when the number of E. coli colonies tested was increased from 5 to 15, the detection of ETEC increased from 19.0% to 38.8%. The sensitivity of the assay with 5 E. coli colonies was statistically significantly different from the sensitivity of the assay with 10 E. coli colonies, suggesting that at least 10 E. coli colonies should be tested for the detection of ETEC.
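
The benefit of testing more colonies follows directly from the at-least-one-success probability, assuming colonies are picked independently and a fixed fraction of them are ETEC; the 10% fraction below is a hypothetical value for illustration:

```python
def detection_probability(etec_fraction, n_colonies):
    """Probability that at least one of n randomly picked colonies is
    ETEC, when a fraction `etec_fraction` of the stool's E. coli
    colonies carries the toxin genes (independent-draw approximation)."""
    return 1.0 - (1.0 - etec_fraction) ** n_colonies

# With a hypothetical 10% ETEC fraction, five colonies miss the
# pathogen more often than not, while fifteen usually catch it:
for n in (5, 10, 15, 20):
    print(n, round(detection_probability(0.10, n), 2))
```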