953 results for false positive


Relevance: 60.00%
Publisher:
Abstract:

Many zeranol immunoassay test kits cross-react with toxins formed by naturally occurring Fusarium spp. fungi, leading to false-positive screening results. This paper describes the evaluation and application of recently published, dry reagent time-resolved fluoroimmunoassays (TR-FIA) for zeranol and the toxin alpha-zearalenol. A ring test of bovine urine fortified with zeranol and/or alpha-zearalenol in four European Union National Reference Laboratories demonstrated that the TR-FIA tests were accurate and robust. The alpha-zearalenol TR-FIA satisfactorily quantified alpha-zearalenol in urine fortified at 10-30 ng ml⁻¹. The specificity-enhanced zeranol TR-FIA accurately quantified zeranol in the range 2-5 ng ml⁻¹ and gave no false-positive results in blank urine, even in the presence of 30 ng ml⁻¹ alpha-zearalenol. Zeranol TR-FIA specificity was demonstrated further by analysing incurred zeranol-free urine samples containing natural Fusarium spp. toxins. The TR-FIA yielded no false-positive results in the presence of up to 22 ng ml⁻¹ toxins. The performance of four commercially available zeranol immunoassay test kits was more variable. Three kits produced many false-positive results. One kit produced only one potential false-positive using a protocol that was longer than that of the TR-FIA. These TR-FIAs will be valuable tools to develop inspection criteria to distinguish illegal zeranol abuse from contamination arising from in vivo metabolism of Fusarium spp. toxins.

Relevance: 60.00%
Publisher:
Abstract:

The aim of this study was to compare time-domain waveform analysis of second-trimester uterine artery Doppler using the resistance index (RI) with waveform analysis using a mathematical tool known as the wavelet transform for the prediction of pre-eclampsia (PE). This was a retrospective, nested case-cohort study of 336 women, 37 of whom subsequently developed PE. Uterine artery Doppler waveforms were analysed using both the RI and wavelet analysis. The utility of these indices in screening for PE was then evaluated using receiver operating characteristic curves. There were significant differences in uterine artery RI between the PE women and those with normal pregnancy outcome. After wavelet analysis, a significant difference in the mean amplitude of wavelet frequency band 4 was noted between the two groups. The sensitivity of both Doppler RI and frequency band 4 for the detection of PE at a 10% false-positive rate was 45%. This small study demonstrates the application of wavelet transform analysis of uterine artery Doppler waveforms in screening for PE. Further prospective studies are needed to define clearly whether this analytical approach has the potential to improve the detection of PE by uterine artery Doppler screening.
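
A minimal sketch of the kind of analysis described is given below: decompose a Doppler-like waveform with a discrete wavelet transform, take the mean absolute amplitude of one detail band, and read the sensitivity at a fixed 10% false-positive rate from an ROC curve. The wavelet family, decomposition depth, band indexing and synthetic waveforms are assumptions, not details from the paper.

```python
# Hedged sketch: wavelet-band amplitude from a Doppler-like waveform and
# sensitivity at a fixed 10% false-positive rate. The wavelet ('db4'),
# decomposition depth, band indexing and synthetic signals are assumptions.
import numpy as np
import pywt
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

def band_mean_amplitude(waveform, wavelet="db4", level=5, band=4):
    """Decompose the waveform and return the mean absolute amplitude of one
    detail band (an illustrative stand-in for 'wavelet frequency band 4')."""
    coeffs = pywt.wavedec(waveform, wavelet, level=level)  # [cA_n, cD_n, ..., cD_1]
    return np.mean(np.abs(coeffs[band]))

# Synthetic stand-ins: 299 "normal" and 37 "PE" waveforms of 512 samples each.
t = np.linspace(0, 2 * np.pi, 512)
normal = [np.sin(4 * t) + 0.2 * rng.standard_normal(512) for _ in range(299)]
pe = [np.sin(4 * t) + 0.5 * np.sin(32 * t) + 0.2 * rng.standard_normal(512)
      for _ in range(37)]

scores = np.array([band_mean_amplitude(w) for w in normal + pe])
labels = np.array([0] * len(normal) + [1] * len(pe))

fpr, tpr, _ = roc_curve(labels, scores)
print(f"Sensitivity at 10% false-positive rate: {np.interp(0.10, fpr, tpr):.2f}")
```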

Relevance: 60.00%
Publisher:
Abstract:

Conflicting results have been reported on the detection of paramyxovirus transcripts in Paget's disease, and a possible explanation is differences in the sensitivity of RT-PCR methods for detecting virus. In a blinded study, we found no evidence to suggest that laboratories that failed to detect viral transcripts had less sensitive RT-PCR assays, and we did not detect measles or distemper transcripts in Paget's samples using the most sensitive assays evaluated.

Introduction: There is conflicting evidence on the possible role of persistent paramyxovirus infection in Paget's disease of bone (PDB). Some workers have detected measles virus (MV) or canine distemper virus (CDV) transcripts in cells and tissues from patients with PDB, but others have failed to confirm this finding. A possible explanation might be differences in the sensitivity of RT-PCR methods for detecting virus. Here we performed a blinded comparison of the sensitivity of different RT-PCR-based techniques for MV and CDV detection in different laboratories and used the most sensitive assays to screen for evidence of viral transcripts in bone and blood samples derived from patients with PDB.

Materials and Methods: Participating laboratories analyzed samples spiked with known amounts of MV and CDV transcripts and control samples that did not contain viral nucleic acids. All analyses were performed on a blinded basis.

Results: The limit of detection for CDV was 1000 viral transcripts in three laboratories (Aberdeen, Belfast, and Liverpool) and 10,000 transcripts in another laboratory (Manchester). The limit of detection for MV was 16 transcripts in one laboratory (NIBSC), 1000 transcripts in two laboratories (Aberdeen and Belfast), and 10,000 transcripts in two laboratories (Liverpool and Manchester). An assay previously used by a U.S.-based group to detect MV transcripts in PDB had a sensitivity of 1000 transcripts. One laboratory (Manchester) detected CDV transcripts in a negative control and in two samples that had been spiked with MV. None of the other laboratories had false-positive results for MV or CDV, and no evidence of viral transcripts was found on analysis of 12 PDB samples using the most sensitive RT-PCR assays for MV and CDV.

Conclusions: We found that RT-PCR assays used by different laboratories differed in their sensitivity to detect CDV and MV transcripts but found no evidence to suggest that laboratories that previously failed to detect viral transcripts had less sensitive RT-PCR assays than those that detected viral transcripts. False-positive results were observed with one laboratory, and we failed to detect paramyxovirus transcripts in PDB samples using the most sensitive assays evaluated. Our results show that failure of some laboratories to detect viral transcripts is unlikely to be caused by problems with assay sensitivity and highlight the fact that contamination can be an issue when searching for pathogens by sensitive RT-PCR-based techniques.

Relevance: 60.00%
Publisher:
Abstract:

Objectives: To assess whether open angle glaucoma (OAG) screening meets the UK National Screening Committee criteria, to compare screening strategies with case finding, to estimate test parameters, to model estimates of cost and cost-effectiveness, and to identify areas for future research.

Data sources: Major electronic databases were searched up to December 2005.

Review methods: Screening strategies were developed by wide consultation. Markov submodels were developed to represent screening strategies. Parameter estimates were determined by systematic reviews of epidemiology, economic evaluations of screening, and effectiveness (test accuracy, screening and treatment). Tailored, highly sensitive electronic searches were undertaken.

Results: Most potential screening tests reviewed had an estimated specificity of 85% or higher. No test was clearly the most accurate, with only a few, heterogeneous studies for each test. No randomised controlled trials (RCTs) of screening were identified. Based on two treatment RCTs, early treatment reduces the risk of progression. Extrapolating from this, and assuming accelerated progression with advancing disease severity, the mean time to blindness in at least one eye without treatment was approximately 23 years, compared with 35 years with treatment. Prevalence would have to be about 3-4% in 40-year-olds, with a screening interval of 10 years, to approach cost-effectiveness. It is predicted that screening might be cost-effective in a 50-year-old cohort at a prevalence of 4% with a 10-year screening interval. General population screening at any age therefore appears not to be cost-effective. Selective screening of groups with higher prevalence (family history, black ethnicity) might be worthwhile, although this would only cover 6% of the population. Extension to include other at-risk cohorts (e.g. myopia and diabetes) would include 37% of the general population, but the prevalence is then too low for screening to be considered cost-effective. Screening using a test with initial automated classification followed, for test positives, by assessment by a specialised optometrist was more cost-effective than initial specialised optometric assessment. The cost-effectiveness of the screening programme was highly sensitive to the perspective on costs (NHS or societal). In the base-case model, the NHS costs of visual impairment were estimated as £669. If annual societal costs were £8,800, then screening might be considered cost-effective for a 40-year-old cohort with 1% OAG prevalence, assuming a willingness to pay of £30,000 per quality-adjusted life-year. Of lesser importance were changes to estimates of attendance for sight tests, incidence of OAG, rate of progression and utility values for each stage of OAG severity. Cost-effectiveness was not particularly sensitive to the accuracy of screening tests within the ranges observed; however, a highly specific test is required to reduce large numbers of false-positive referrals. The finding that population screening is unlikely to be cost-effective is based on an economic model whose parameter estimates carry considerable uncertainty; in particular, if the rate of progression and/or the costs of visual impairment are higher than estimated, then screening could be cost-effective.

Conclusions: While population screening is not cost-effective, the targeted screening of high-risk groups may be. Procedures for identifying those at risk, quality assurance of the programme, and adequate service provision for those screened positive would all be needed. Glaucoma detection can be improved by increasing attendance for eye examination and by improving the performance of current testing, either by refining practice or by adding a technology-based first assessment, the latter being the more cost-effective option. This has implications for any future organisational changes in community eye-care services. Further research should aim to develop and provide quality data to populate the economic model, by conducting a feasibility study of interventions to improve detection, by obtaining further data on costs of blindness, risk of progression and health outcomes, and by conducting an RCT of interventions to improve the uptake of glaucoma testing. © Queen's Printer and Controller of HMSO 2007. All rights reserved.
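
The review's cost-effectiveness estimates come from Markov submodels of disease progression under each screening strategy. A minimal cohort-model sketch is shown below; the health states, transition probabilities, discount rate, costs and utilities are illustrative placeholders (only the £669 visual-impairment cost and the £30,000 per QALY threshold echo figures in the abstract), not the report's estimates.

```python
# Hedged sketch of a Markov cohort cost-effectiveness model of the kind the
# review describes. States, transition probabilities, costs and utilities are
# illustrative placeholders, NOT the parameter estimates from the report.
import numpy as np

STATES = ["no_OAG", "mild", "moderate", "severe", "visually_impaired"]
UTILITY = np.array([0.95, 0.90, 0.80, 0.70, 0.50])        # QALY weight per state (assumed)
ANNUAL_COST = np.array([0.0, 150.0, 300.0, 500.0, 669.0])  # £ per year (assumed; 669 borrowed as a placeholder)

def transition_matrix(progression):
    """Simple unidirectional progression model with a per-year progression
    probability; everything else stays put."""
    p = progression
    return np.array([
        [1 - 0.002, 0.002, 0,     0,     0],   # assumed annual OAG incidence
        [0,         1 - p, p,     0,     0],
        [0,         0,     1 - p, p,     0],
        [0,         0,     0,     1 - p, p],
        [0,         0,     0,     0,     1],
    ])

def run_cohort(progression, years=35, discount=0.035):
    state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # cohort starts disease-free
    T = transition_matrix(progression)
    qalys = costs = 0.0
    for year in range(years):
        d = (1 + discount) ** -year
        qalys += d * state @ UTILITY
        costs += d * state @ ANNUAL_COST
        state = state @ T
    return costs, qalys

# "No screening" progresses faster; "screening + early treatment" slows
# progression but adds a per-person programme cost (both figures assumed).
cost_ns, qaly_ns = run_cohort(progression=0.08)
cost_sc, qaly_sc = run_cohort(progression=0.05)
cost_sc += 40.0   # assumed screening programme cost per person

icer = (cost_sc - cost_ns) / (qaly_sc - qaly_ns)
print(f"ICER: £{icer:,.0f} per QALY (willingness-to-pay threshold: £30,000)")
```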

Relevance: 60.00%
Publisher:
Abstract:

Purpose: To assess the quality of referrals from community optometrists in the northeast of Scotland to the hospital glaucoma service before and after the implementation of the new General Ophthalmic Services (GOS) contract in Scotland.

Methods: Retrospective study encompassing two 6-month periods, one before the implementation of the new GOS (Scotland) contract in April 2006 (June to November 2005) and the other after (June to November 2006). The community optometrist referral forms and hospital glaucoma service notes were reviewed. Comparisons were performed using the t-test and χ²-test.

Results: In all, 183 referrals were made during the first 6-month period (June to November 2005) and 120 referrals during the second 6-month period (June to November 2006). After the introduction of the new GOS contract, there was a statistically significant increase in true-positive referrals (from 18.0 to 31.7%; P=0.006), a decrease in false-positive referrals (from 36.6 to 31.7%; P=0.006), and an increase in the number of referrals with information on applanation tonometry (from 11.8 to 50.0%; P=0.000), dilated fundal examination (from 2.2 to 24.2%; P=0.000) and repeat visual fields (from 14.8 to 28.3%; P=0.004) when compared with the first 6-month period. However, only 41.7% of referrals fulfilled the new GOS contract requirements, with information on applanation tonometry the most commonly missing item.

Conclusions: After the implementation of the new GOS (Scotland) contract in April 2006, there has been an improvement in the quality of glaucoma referrals from community optometrists in the northeast of Scotland, with a corresponding reduction in false-positive referrals. Despite the relatively positive effect so far, there is still scope for further improvement. © 2009 Macmillan Publishers Limited. All rights reserved.
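
The before-and-after comparison of referral proportions can be reproduced with a chi-squared test on a 2 × 2 contingency table. In the sketch below, the counts are reconstructed only approximately from the reported percentages (about 18.0% of 183 and 31.7% of 120 true-positive referrals) purely for illustration; it is not a re-analysis of the study data.

```python
# Hedged sketch: chi-squared test of the change in true-positive referral
# proportions before vs after the new GOS contract. Counts are reconstructed
# approximately from the reported percentages for illustration only.
from scipy.stats import chi2_contingency

# Period 1 (Jun-Nov 2005): 183 referrals, ~18.0% true positive -> ~33
# Period 2 (Jun-Nov 2006): 120 referrals, ~31.7% true positive -> ~38
table = [
    [33, 183 - 33],   # [true-positive, other] before the contract
    [38, 120 - 38],   # [true-positive, other] after the contract
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```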

Relevance: 60.00%
Publisher:
Abstract:

Base rate neglect on the mammography problem can be overcome by explicitly presenting a causal basis for the typically vague false-positive statistic. One account of this causal facilitation effect is that people make probabilistic judgements over intuitive causal models parameterized with the evidence in the problem. Poorly defined or difficult-to-map evidence interferes with this process, leading to errors in statistical reasoning. To assess whether the construction of parameterized causal representations is an intuitive or deliberative process, in Experiment 1 we combined a secondary load paradigm with manipulations of the presence or absence of an alternative cause in typical statistical reasoning problems. We found limited effects of a secondary load, no evidence that information about an alternative cause improves statistical reasoning, but some evidence that it reduces base rate neglect errors. In Experiments 2 and 3 where we did not impose a load, we observed causal facilitation effects. The amount of Bayesian responding in the causal conditions was impervious to the presence of a load (Experiment 1) and to the precise statistical information that was presented (Experiment 3). However, we found less Bayesian responding in the causal condition than previously reported. We conclude with a discussion of the implications of our findings and the suggestion that there may be population effects in the accuracy of statistical reasoning.
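
For readers unfamiliar with the mammography problem, the normative Bayesian calculation is short; the sketch below uses the commonly cited illustrative figures (1% base rate, 80% sensitivity, 9.6% false-positive rate), which are assumptions rather than values taken from this paper.

```python
# Hedged worked example of the Bayesian calculation behind the mammography
# problem. The figures (1% base rate, 80% sensitivity, 9.6% false-positive
# rate) are the commonly used illustrative ones, not values from this paper.
base_rate = 0.01        # P(cancer)
sensitivity = 0.80      # P(positive | cancer)
false_positive = 0.096  # P(positive | no cancer)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive

print(f"P(cancer | positive mammogram) = {posterior:.3f}")  # ≈ 0.078
```

Base rate neglect corresponds to answering close to the 80% sensitivity figure, whereas the normative answer is below 8%; giving the 9.6% false-positive statistic an explicit causal source (for example, benign cysts that also trigger positive results) is the kind of manipulation these experiments examine.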

Relevance: 60.00%
Publisher:
Abstract:

Purpose: To compare the effectiveness of fine needle aspiration cytology (FNAC) with core biopsy (CB) in the pre-operative diagnosis of radial scar (RS) of the breast.

Patients and methods: A retrospective analysis was made of all radial scars diagnosed on surgical histology over an 8-year period. Comparison was made between the results of different preoperative needle biopsy techniques and surgical histology findings.

Results: Forty of 47 patients with a preoperative radiological diagnosis of radial scar were included in this analysis. Thirty-eight patients had impalpable lesions diagnosed on mammography and two presented with a palpable lump. FNAC (n=17) was inadequate in 47% of patients, missed two co-existing carcinomas found in this group, and gave a false-positive or suspicious result for malignancy in four patients. CB (n=23) suggested an RS in 15 patients, but diagnosed only four of the seven co-existing carcinomas found in this group.

Conclusion: CB is more accurate than FNAC in the diagnosis of RS. However, these data demonstrate that CB may offer little to assist in the management of patients with RS. In summary, this paper advocates the use of CB in any lesion with a radiological suspicion of carcinoma and diagnostic excision of all lesions thought to be typical of RS on mammography.

Relevance: 60.00%
Publisher:
Abstract:

European Regulation 1169/2011 requires producers of foods that contain refined vegetable oils to label the oil types. A novel, rapid and staged methodology has been developed for the first time to identify common oil species in oil blends. The qualitative method combines Fourier transform infrared (FTIR) spectroscopy, to profile the oils, with fatty acid chromatographic analysis to confirm the composition of the oils when required. Calibration models and specific classification criteria were developed and all data were fused into a simple decision-making system. Single-laboratory validation of the method demonstrated very good performance (96% correct classification, 100% specificity, 4% false positive rate). Only a small fraction of the samples needed to be confirmed, with the majority of oils identified rapidly using only the spectroscopic procedure. The results demonstrate the huge potential of the methodology for a wide range of oil authenticity work.
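
A minimal sketch of the staged decision logic described (rapid spectroscopic classification first, chromatographic confirmation only for low-confidence calls) is given below. The classifier, confidence threshold, oil classes and synthetic spectra are placeholder assumptions, not the published calibration models.

```python
# Hedged sketch of the staged decision logic described: a rapid FTIR-based
# classification first, with fatty-acid (GC) confirmation only when the
# spectroscopic call is low-confidence. Model, threshold and class names are
# illustrative placeholders, not the published calibration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

OIL_CLASSES = ["sunflower", "rapeseed", "palm", "olive"]  # assumed label set

def classify_oil(ftir_spectrum, model, confidence_threshold=0.90):
    """Stage 1: FTIR classification. Stage 2 (confirmation) is triggered only
    if the posterior probability of the top class is below the threshold."""
    proba = model.predict_proba(ftir_spectrum.reshape(1, -1))[0]
    top = int(np.argmax(proba))
    if proba[top] >= confidence_threshold:
        return OIL_CLASSES[top], "FTIR only"
    return OIL_CLASSES[top], "refer to fatty-acid GC confirmation"

# Toy calibration data standing in for preprocessed FTIR spectra (60 points).
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 60)) + np.repeat(np.arange(4), 30)[:, None] * 0.5
y = np.repeat(np.arange(4), 30)

model = LinearDiscriminantAnalysis().fit(X, y)
label, route = classify_oil(X[0], model)
print(label, "->", route)
```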

Relevance: 60.00%
Publisher:
Abstract:

While virtualisation can provide many benefits to a network's infrastructure, securing the virtualised environment is a major challenge. The security of a fully virtualised solution depends on the security of each of its underlying components, such as the hypervisor, guest operating systems and storage.

This paper presents a single security service running on the hypervisor that could potentially provide security to all virtual machines (VMs) on the system: a hypervisor-hosted framework that performs specialised security tasks for all underlying virtual machines, protecting against malicious attacks by passively analysing the VMs' network traffic. The framework has been implemented on XenServer and evaluated by detecting a Zeus server setup and infected clients distributed over a number of virtual machines. It detected and identified all infected VMs with no false-positive or false-negative detections.
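
As an illustration of the passive-monitoring idea (not the paper's framework), the sketch below sniffs traffic on a hypervisor bridge interface and flags guest VMs whose flows contact known command-and-control addresses. The bridge name xenbr0, the indicator list and the use of scapy are assumptions.

```python
# Hedged sketch of the passive-monitoring idea: sniff traffic on the
# hypervisor's virtual bridge and flag guest VMs that contact known
# command-and-control indicators. The bridge name ('xenbr0'), the indicator
# list and the use of scapy are assumptions, not the paper's implementation.
from scapy.all import sniff, IP

C2_INDICATORS = {"203.0.113.10", "198.51.100.25"}   # placeholder C&C IPs
flagged_vms = set()

def inspect(packet):
    """Flag the source VM of any flow whose destination is a known indicator."""
    if IP in packet and packet[IP].dst in C2_INDICATORS:
        flagged_vms.add(packet[IP].src)
        print(f"possible infection: VM {packet[IP].src} -> {packet[IP].dst}")

# Requires root on the Xen dom0; monitors all guest traffic crossing the bridge.
sniff(iface="xenbr0", prn=inspect, store=False, count=1000)
print("flagged VMs:", flagged_vms)
```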

Relevance: 60.00%
Publisher:
Abstract:

BACKGROUND: Despite vaccines and improved medical intensive care, clinicians must continue to be vigilant of possible Meningococcal Disease in children. The objective was to establish whether the procalcitonin test was a cost-effective adjunct for prodromal Meningococcal Disease in children presenting at the emergency department with fever without source.

METHODS AND FINDINGS: Data to evaluate the procalcitonin, C-reactive protein and white cell count tests as indicators of Meningococcal Disease were collected from six independent studies identified through a systematic literature search applying PRISMA guidelines. The data included 881 children with fever without source in developed countries. The optimal cut-off value for the procalcitonin, C-reactive protein and white cell count tests, each as an indicator of Meningococcal Disease, was determined. Summary receiver operating characteristic (SROC) curve analysis determined the overall diagnostic performance of each test with 95% confidence intervals. A decision analytic model was designed to reflect realistic clinical pathways for a child presenting with fever without source by comparing two diagnostic strategies: standard testing using combined C-reactive protein and white cell count tests, compared with standard testing plus the procalcitonin test. The costs of each of the four diagnosis groups (true positive, false negative, true negative and false positive) were assessed from a National Health Service payer perspective. The procalcitonin test was more accurate (sensitivity=0.89, 95% CI=0.76-0.96; specificity=0.74, 95% CI=0.4-0.92) for early Meningococcal Disease than standard testing alone (sensitivity=0.47, 95% CI=0.32-0.62; specificity=0.8, 95% CI=0.64-0.9). Decision analytic model outcomes indicated that the incremental cost-effectiveness ratio for the base case was −£8,137.25 (−US$13,371.94) per correctly treated patient.

CONCLUSIONS: Procalcitonin plus the standard recommended tests improved the discriminatory ability for fatal Meningococcal Disease and was more cost-effective; it was also a superior biomarker in infants. Further research is recommended into point-of-care procalcitonin testing and Markov modelling to incorporate cost per QALY with a lifetime model.
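
To illustrate how an incremental cost-effectiveness ratio of this kind falls out of a decision-analytic model, the sketch below computes the expected cost per child tested and the expected proportion of Meningococcal Disease cases correctly treated under each strategy, then takes the ratio of the differences. The prevalence and outcome costs are placeholder assumptions; only the pooled sensitivities and specificities echo those reported above.

```python
# Hedged illustration of how an ICER is derived from a decision tree:
# expected cost per child tested and expected proportion of Meningococcal
# Disease cases correctly treated under each strategy, then
# delta-cost / delta-effect. Prevalence and outcome costs are placeholder
# assumptions; the sensitivities/specificities echo the abstract.
def strategy(sensitivity, specificity, prevalence, test_cost,
             cost_tp, cost_fn, cost_tn, cost_fp):
    tp = prevalence * sensitivity
    fn = prevalence * (1 - sensitivity)
    tn = (1 - prevalence) * specificity
    fp = (1 - prevalence) * (1 - specificity)
    expected_cost = (test_cost + tp * cost_tp + fn * cost_fn
                     + tn * cost_tn + fp * cost_fp)
    return expected_cost, tp   # effect: MD cases correctly treated per child tested

ASSUMED = dict(prevalence=0.02, cost_tp=3_000.0, cost_fn=20_000.0,
               cost_tn=50.0, cost_fp=1_500.0)

cost_std, eff_std = strategy(0.47, 0.80, test_cost=10.0, **ASSUMED)   # CRP + WCC
cost_pct, eff_pct = strategy(0.89, 0.74, test_cost=25.0, **ASSUMED)   # + procalcitonin

icer = (cost_pct - cost_std) / (eff_pct - eff_std)
print(f"ICER: £{icer:,.0f} per additional correctly treated case")
```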

Relevance: 60.00%
Publisher:
Abstract:

Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images that are produced by most major ground-based time-domain surveys with large format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next generation all-sky surveys and significant effort is now being invested to solve the problem computationally. In this paper, we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of ~32 000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20 × 20 pixel stamp around the centre of the candidates. This differs from previous work in that it works directly on the pixels rather than catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25 per cent of the training data. We find the best results from the random forest classifier and demonstrate that by accepting a false positive rate of 1 per cent, the classifier initially suggests a missed detection rate of around 10 per cent. However, we also find that a combination of bright star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6 per cent.
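
A minimal sketch of the pipeline described: 20 × 20 pixel stamps flattened into 400-element feature vectors, a random forest trained on 75 per cent of the data, and the missed detection rate read off at a 1 per cent false-positive operating point. Random arrays stand in for the difference-image stamps, and the hyperparameters are assumptions.

```python
# Hedged sketch of the real-bogus pipeline described: 20x20 pixel stamps
# flattened into 400-element feature vectors, a random forest trained on a
# 75/25 split, and the missed detection rate read off at a 1% false-positive
# operating point. Random arrays stand in for the difference-image stamps.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)
n = 2000
stamps = rng.standard_normal((n, 20, 20))
labels = rng.integers(0, 2, n)                     # 1 = real transient, 0 = bogus
stamps[labels == 1, 8:12, 8:12] += 3.0             # toy point source in real stamps

X = stamps.reshape(n, -1)                          # 400 pixel intensities per candidate
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels)

clf = RandomForestClassifier(n_estimators=300, random_state=0, n_jobs=-1)
clf.fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, scores)
idx = np.searchsorted(fpr, 0.01, side="right") - 1   # last threshold with FPR <= 1%
print(f"Missed detection rate at 1% FPR: {1 - tpr[idx]:.3f}")
```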

Relevance: 60.00%
Publisher:
Abstract:

With over 50 billion downloads and more than 1.3 million apps in Google’s official market, Android has continued to gain popularity amongst smartphone users worldwide. At the same time there has been a rise in malware targeting the platform, with more recent strains employing highly sophisticated detection avoidance techniques. As traditional signature-based methods become less potent in detecting unknown malware, alternatives are needed for timely zero-day discovery. This paper therefore proposes an approach that utilizes ensemble learning for Android malware detection. It combines the advantages of static analysis with the efficiency and performance of ensemble machine learning to improve Android malware detection accuracy. The machine learning models are built using a large repository of malware samples and benign apps from a leading antivirus vendor. The experimental results and analysis presented show that the proposed method, which uses a large feature space to leverage the power of ensemble learning, achieves 97.3% to 99% detection accuracy with very low false positive rates.
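
A minimal sketch of ensemble learning over static-analysis features is given below, assuming binary indicators (e.g. requested permissions and API calls) and a soft-voting combination of base learners; the feature set and data are placeholders, not the vendor repository used in the paper.

```python
# Hedged sketch of ensemble learning over static-analysis features: binary
# indicators (e.g. requested permissions, API calls) fed to several base
# learners combined by soft voting. Feature set and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(7)
n_apps, n_features = 3000, 175                 # e.g. permission/API-call indicators
X = rng.integers(0, 2, (n_apps, n_features))
y = (X[:, :10].sum(axis=1) + rng.normal(0, 1, n_apps) > 6).astype(int)  # toy label rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", BernoulliNB()),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)

pred = ensemble.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"accuracy = {accuracy_score(y_te, pred):.3f}, "
      f"false positive rate = {fp / (fp + tn):.3f}")
```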

Relevance: 60.00%
Publisher:
Abstract:

The battle to mitigate Android malware has become more critical with the emergence of new strains incorporating increasingly sophisticated evasion techniques, in turn necessitating more advanced detection capabilities. Hence, in this paper we propose and evaluate a machine learning approach based on eigenspace analysis for Android malware detection, using features derived from static analysis characterization of Android applications. Empirical evaluation with a dataset of real malware and benign samples shows that a detection rate of over 96%, with a very low false positive rate, is achievable using the proposed method.
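
One common form of eigenspace analysis is to learn the principal subspace of benign samples' feature vectors and flag applications whose reconstruction error from that subspace is large. The sketch below illustrates that reading; the paper's exact formulation, features and data may differ.

```python
# Hedged sketch of one common form of eigenspace analysis: learn the principal
# subspace of benign apps' static-analysis feature vectors and flag samples
# whose reconstruction error from that subspace is large. An illustrative
# reading of "eigenspace analysis", not necessarily the paper's exact method.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_features = 120
benign = rng.integers(0, 2, (1500, n_features)).astype(float)
malware = benign[:300].copy()
malware[:, -15:] = 1.0                 # toy shift: malware sets a block of features

pca = PCA(n_components=30).fit(benign)  # eigenspace of the benign class

def reconstruction_error(X):
    projected = pca.inverse_transform(pca.transform(X))
    return np.linalg.norm(X - projected, axis=1)

threshold = np.percentile(reconstruction_error(benign), 99)  # ~1% benign FPR
flagged = reconstruction_error(malware) > threshold
print(f"detection rate: {flagged.mean():.2%} at ~1% false positive rate")
```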

Relevance: 60.00%
Publisher:
Abstract:

This paper proposes a novel method of detecting packed executable files using steganalysis, primarily targeting the detection of malware obfuscated through packing. Considering that over 80% of malware in the wild is packed, detection accuracy and low false negative rates are important properties of malware detection methods. Experimental results outlined in this paper reveal that the proposed approach achieves an overall detection accuracy of greater than 99%, a false negative rate of 1% and a false positive rate of 0%.
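
The abstract does not detail the steganalysis-derived features, so the sketch below shows a related, commonly used signal instead: per-section byte entropy, which rises sharply in packed or encrypted executables. It is a substitute heuristic for illustration, not the paper's method.

```python
# The abstract does not detail the steganalysis features used, so this sketch
# shows a related, commonly used signal instead: per-section byte entropy,
# which rises sharply in packed or encrypted executables. A substitute
# heuristic for illustration, not the paper's method.
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0-8)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_packed(section_bytes: bytes, threshold: float = 7.2) -> bool:
    """Heuristic: near-random byte distributions suggest packing/encryption."""
    return byte_entropy(section_bytes) > threshold

# Usage: read a section's raw bytes (e.g. via the 'pefile' library) and test it.
print(looks_packed(os.urandom(4096)))     # random data -> True
print(looks_packed(b"\x90" * 4096))       # NOP sled -> False
```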

Relevance: 60.00%
Publisher:
Abstract:

PURPOSE: To clarify the risk parameters measured by anterior segment optical coherence tomography (AS-OCT) for elevated intraocular pressure (IOP) provoked by the darkroom test and to provide recommendations for its clinical usage.

METHODS: Subjects aged >40 years whose peripheral anterior chambers were ≤1/4 corneal thickness were recruited. The anterior segment of the eye was imaged in the sitting position under both light and dark conditions, and biometry was performed using anterior segment optical coherence tomography. The analyzed parameters were: (1) central anterior chamber depth (ACD); (2) anterior chamber width; (3) pupil diameter; (4) iris curvature; (5) lens thickness; and (6) number of meridians with closed angles (NCA). The darkroom test was then performed, and a positive provocative test result was defined as a rise in IOP ≥8 mm Hg after the test. Statistical analyses included: (1) the difference in parameters between positive and negative eyes; (2) the association between posttest IOP and the parameters; and (3) the difference in parameters between the two eyes in subjects with asymmetric results.

RESULTS: A total of 70 subjects were recruited. ACD (P=0.022), NCA in light (P<0.001) and NCA in dark (P<0.001) differed significantly between eyes with positive and negative results. There was a strong association between NCA in dark (r=0.755, P<0.001) and the posttest IOP. Among subjects with asymmetric results between the two eyes, the ACD was shallower and the lens thickness larger in the positive eye.

CONCLUSIONS: The posttest IOP is determined by the extent of functionally closed angles in the dark. The test may be useful in the early diagnosis of primary angle closure. At the same time, angle configuration should be evaluated to exclude false-positive results.