988 results for Automated Test


Relevance:

30.00%

Publisher:

Abstract:

L. Blot, A. Davis, M. Holubinka, R. Marti and R. Zwiggelaar, 'Automated quality assurance applied to mammographic imaging', EURASIP Journal on Applied Signal Processing 2002 (7), 736-745 (2002)

Relevance:

30.00%

Publisher:

Abstract:

Porcine urine enzyme immunoassays for sulfamethazine and sulfadiazine have previously been employed as screening tests to predict the concentrations of the drugs in the corresponding tissues (kidneys). If a urine was found positive (> 800 ng ml(-1)), the corresponding kidney was then analysed by an enzyme immunoassay and, if found positive, a confirmatory analysis by HPLC was performed. Urine was chosen as the screening matrix since sulfonamides are mainly eliminated through this body fluid. However, after obtaining a number of false positive predictions, an investigation was carried out to assess the possibility of using an alternative body fluid which would act as a superior indicator of the presence of sulfonamides in porcine kidney. An initial study indicated that serum, plasma and bile could all be used as screening matrices. From these, bile was chosen as the preferred sample matrix and an extensive study followed to compare the efficiencies of sulfonamide-positive bile and urine at predicting sulfonamide-positive kidneys. Bile was found to be 17 times more efficient than urine at predicting a sulfamethazine-positive kidney and 11 times more efficient at predicting a sulfadiazine-positive kidney. With this enhanced performance of the initial screening test, the need for the costly and time-consuming kidney enzyme immunoassay, prior to HPLC analysis, was eliminated.

Relevance:

30.00%

Publisher:

Abstract:

A novel multiplexed immunoassay for the analysis of phycotoxins in shellfish samples has been developed. To this end, a regenerable chemiluminescence (CL) microarray was established that can automatically analyze three different phycotoxins (domoic acid (DA), okadaic acid (OA) and saxitoxin (STX)) in parallel on the MCR3 analysis platform. An indirect competitive immunoassay was used as the test format. The phycotoxins were directly immobilized on an epoxy-activated PEG chip surface, and parallel analysis was enabled by the simultaneous addition of all analytes and specific antibodies to one microarray chip. After the competitive reaction, the CL signal was recorded by a CCD camera. Because the toxin microarray can be regenerated, internal calibrations of the phycotoxins were performed in parallel on the same microarray chip, which was suitable for 25 consecutive measurements. Multi-analyte calibration curves were generated for the three target phycotoxins. In extracted shellfish matrix, the determined LODs for DA, OA and STX, at 0.5±0.3 µg L(-1), 1.0±0.6 µg L(-1) and 0.4±0.2 µg L(-1), were slightly lower than in PBS buffer. For the determination of toxin recoveries, the signal loss observed during regeneration was corrected for. After applying these corrections, spiked shellfish samples were quantified within 20 min, with recoveries for DA, OA and STX of 86.2%, 102.5% and 61.6%, respectively. This is the first demonstration of an antibody-based phycotoxin microarray.
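
As an illustration of how multi-analyte calibration curves and LODs of this kind can be derived, the sketch below fits a four-parameter logistic model to competitive-immunoassay CL signals with SciPy. The concentrations, signals and blank statistics are hypothetical, and the paper's exact calibration and LOD procedure may differ.

```python
# Minimal sketch: fitting a four-parameter logistic (4PL) calibration curve to
# chemiluminescence signals from an indirect competitive immunoassay.
# Concentrations, signals and blank statistics are illustrative, not study data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, a, b, ic50, d):
    """4PL model: signal decreases with analyte concentration in a competitive format."""
    return d + (a - d) / (1.0 + (c / ic50) ** b)

conc = np.array([0.1, 0.5, 1, 5, 10, 50, 100])          # µg/L, hypothetical standards
signal = np.array([980, 930, 850, 560, 380, 150, 90])   # CL counts, hypothetical

params, _ = curve_fit(four_pl, conc, signal, p0=[1000, 1.0, 5.0, 50])
a, b, ic50, d = params
print(f"IC50 ≈ {ic50:.2f} µg/L")

# One common LOD estimate: the concentration at which the signal falls below
# (blank mean - 3 * blank SD), obtained by inverting the fitted curve.
blank_mean, blank_sd = 1000.0, 15.0                     # hypothetical blank statistics
lod_signal = blank_mean - 3 * blank_sd
lod = ic50 * ((a - d) / (lod_signal - d) - 1.0) ** (1.0 / b)
print(f"Estimated LOD ≈ {lod:.2f} µg/L")
```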

Relevance:

30.00%

Publisher:

Abstract:

Coxiella burnetii and members of the genus Rickettsia are obligate intracellular bacteria. Since cultivation of these organisms requires dedicated techniques, their diagnosis usually relies on serological or molecular biology methods. Immunofluorescence is considered the gold standard for detecting antibody reactivity towards these organisms. Here, we assessed the performance of a new automated epifluorescence immunoassay (InoDiag) for detecting IgM and IgG against C. burnetii, Rickettsia typhi and Rickettsia conorii. A total of 213 sera were tested with the InoDiag assay: 63 samples from Q fever, 20 from spotted fever rickettsiosis, 6 from murine typhus and 124 from controls. InoDiag results were compared to micro-immunofluorescence. For acute Q fever, the sensitivity of phase 2 IgG was only 30% with a cutoff of 1 arbitrary unit (AU). In patients with acute Q fever and positive IF IgM, sensitivity reached 83% with the same cutoff. Sensitivity for chronic Q fever was 100%, whereas sensitivity for past Q fever was 65%. Sensitivities for Mediterranean spotted fever and murine typhus were 91% and 100%, respectively. Both assays exhibited good specificity in the control groups, ranging from 79% in sera from patients with unrelated diseases or EBV positivity to 100% in sera from healthy patients. In conclusion, the InoDiag assay exhibits excellent performance for the diagnosis of chronic Q fever but very low IgG sensitivity for acute Q fever, likely due to the low reactivity of the phase 2 antigens present on the glass slide. This defect is partially compensated by the detection of IgM. Because it exhibits a good negative predictive value, the InoDiag assay is valuable for ruling out chronic Q fever. For the diagnosis of rickettsial diseases, the sensitivity of the InoDiag method is similar to that of conventional immunofluorescence.
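
For illustration, the sketch below shows how sensitivity, specificity and predictive values of this kind are derived from a 2x2 table of assay results against the reference method. The counts are hypothetical, not the study's data.

```python
# Minimal sketch: deriving sensitivity, specificity and predictive values from a
# 2x2 table of index-test results against a reference standard.
# The counts below are hypothetical and only mirror the study's group sizes.
def diagnostic_performance(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical example: 63 reference-positive and 124 reference-negative sera
sens, spec, ppv, npv = diagnostic_performance(tp=52, fp=12, fn=11, tn=112)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```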

Relevance:

30.00%

Publisher:

Abstract:

Objective To assess the impact of a closed-loop electronic prescribing and automated dispensing system on the time spent providing a ward pharmacy service and the activities carried out. Setting Surgical ward, London teaching hospital. Method All data were collected two months pre- and one year post-intervention. First, the ward pharmacist recorded the time taken each day for four weeks. Second, an observational study was conducted over 10 weekdays, using two-dimensional work sampling, to identify the ward pharmacist's activities. Finally, medication orders were examined to identify pharmacists' endorsements that should have been, and were actually, made. Key findings Mean time to provide a weekday ward pharmacy service increased from 1 h 8 min to 1 h 38 min per day (P = 0.001; unpaired t-test). There were significant increases in time spent prescription monitoring, recommending changes in therapy/monitoring, giving advice or information, and non-productive time. There were decreases for supply, looking for charts and checking patients' own drugs. There was an increase in the amount of time spent with medical and pharmacy staff, and with 'self'. Seventy-eight per cent of patients' medication records could be assessed for endorsements pre- and 100% post-intervention. Endorsements were required for 390 (50%) of 787 medication orders pre-intervention and 190 (21%) of 897 afterwards (P < 0.0001; chi-square test). Endorsements were made for 214 (55%) of endorsement opportunities pre-intervention and 57 (30%) afterwards (P < 0.0001; chi-square test). Conclusion The intervention increased the overall time required to provide a ward pharmacy service and changed the types of activity undertaken. Contact time with medical and pharmacy staff increased. There was no significant change in time spent with patients. Fewer pharmacy endorsements were required post-intervention, but a lower percentage were actually made. The findings have important implications for the design, introduction and use of similar systems.
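
For illustration, the reported endorsement counts (390 of 787 medication orders pre-intervention, 190 of 897 post-intervention) can be compared with a standard chi-square test, as in the sketch below; this reproduces the conventional 2x2 comparison and is not necessarily the authors' exact procedure.

```python
# Sketch: a chi-square test on the endorsement counts reported in the abstract.
# The standard 2x2 comparison is shown; the authors' analysis may have differed
# in details such as continuity correction.
from scipy.stats import chi2_contingency

table = [[390, 787 - 390],   # pre-intervention: endorsement required, not required
         [190, 897 - 190]]   # post-intervention: endorsement required, not required

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2e}")   # consistent with P < 0.0001
```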

Relevance:

30.00%

Publisher:

Abstract:

Constructing biodiversity richness maps from Environmental Niche Models (ENMs) of thousands of species is time-consuming. A separate species occurrence data pre-processing phase enables the experimenter to control the variance in test AUC scores due to species dataset size. Besides removing duplicate occurrences and points with missing environmental data, we discuss the need for coordinate precision, wide dispersion, temporal and synonymity filters. After species data filtering, the final task of a pre-processing phase should be the automatic generation of species occurrence datasets that can then be 'plugged in' directly to the ENM. A software application capable of carrying out all these tasks would be a valuable time-saver, particularly for large-scale biodiversity studies.
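
A minimal sketch of such a pre-processing step is shown below, assuming a pandas table with hypothetical column names ("species", "lat", "lon", "year" and environmental covariates "bio1", "bio12") and illustrative thresholds; the paper's actual filter set (dispersion, synonymity, etc.) is broader.

```python
# Minimal sketch of species-occurrence pre-processing of the kind described above.
# Column names and thresholds are assumptions for illustration only.
import pandas as pd

def preprocess_occurrences(df: pd.DataFrame,
                           min_coord_decimals: int = 2,
                           min_year: int = 1970) -> pd.DataFrame:
    # Remove exact duplicate occurrence records
    df = df.drop_duplicates(subset=["species", "lat", "lon"])
    # Drop points with missing environmental data
    df = df.dropna(subset=["bio1", "bio12"])
    # Coordinate-precision filter (approximate when coordinates are stored as floats)
    def decimals(x) -> int:
        s = str(x)
        return len(s.split(".")[1]) if "." in s else 0
    df = df[df["lat"].apply(decimals) >= min_coord_decimals]
    df = df[df["lon"].apply(decimals) >= min_coord_decimals]
    # Temporal filter: keep records from a defined period onwards
    df = df[df["year"] >= min_year]
    return df

# occurrences = pd.read_csv("occurrences.csv")
# ready_for_enm = preprocess_occurrences(occurrences)
```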

Relevance:

30.00%

Publisher:

Abstract:

Aim. The aim of this study was to investigate whether a single soccer-specific fitness test (SSFT) could differentiate between highly trained and recreationally active soccer players in selected test performance indicators. Methods. Subjects: 13 Academy Scholars (AS) from a professional soccer club and 10 Recreational Players (RP) agreed to participate in this study. Test 1: V̇O2max was estimated from a progressive shuttle run test to exhaustion. Test 2: the SSFT was controlled by an automated procedure and alternated between walking, sprinting, jogging and cruise running speeds. Three activity blocks (1A, 2A and 3A) were separated by 3 min rest periods in which blood lactate samples were drawn. The 3 blocks of activity (Part A) were followed by 10 min of exercise at speeds alternating between jogging and cruise running (Part B). Results. Estimated V̇O2max did not significantly differ between groups, although a trend for a higher aerobic capacity was evident in AS (p<0.09). Exercising heart rates did not differ between AS and RP; however, recovery heart rates taken from the 3 min rest periods were significantly lower in AS compared with RP following blocks 1A (124.65 ± 7.73 vs 133.98 ± 6.63 b·min(-1), p<0.05) and 3A (129.91 ± 10.21 vs 138.85 ± 8.70 b·min(-1), p<0.01). Blood lactate concentrations were significantly elevated in AS in comparison to RP following blocks 2A (6.91 ± 2.67 vs 4.74 ± 1.28 mmol·l(-1)) and 3A (7.18 ± 2.97 vs 4.88 ± 1.50 mmol·l(-1)) (p<0.05). AS sustained significantly faster average sprint times in block 3A compared with RP (3.18 ± 0.12 vs 3.31 ± 0.12 sec, p<0.05). Conclusion. The results of this study show that highly trained soccer players are able to sustain, and more quickly recover from, high-intensity intermittent exercise.
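
As a hedged illustration, the recovery heart-rate comparison after block 1A can be approximated from the reported summary statistics (AS: 124.65 ± 7.73, n = 13; RP: 133.98 ± 6.63, n = 10) with an unpaired t-test; whether the authors assumed equal variances is not stated, so this is only an approximation.

```python
# Sketch: unpaired t-test from the summary statistics reported in the abstract
# for recovery heart rate after block 1A. Equal variances are assumed here,
# which may not match the authors' exact analysis.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=124.65, std1=7.73, nobs1=13,
                            mean2=133.98, std2=6.63, nobs2=10,
                            equal_var=True)
print(f"t = {t:.2f}, p = {p:.3f}")   # p < 0.05, consistent with the reported result
```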

Relevance:

30.00%

Publisher:

Abstract:

Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible, partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process. The project thereby addresses the principal technical stumbling block to widespread adoption of site-specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields.

There are three main aspects to the project.

1) Machine vision hardware. Hardware component parts of the system are one or more cameras connected to a single-board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single-camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required.

2) Image capture and weed identification software. The machine vision system will be attached to toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages, as well as in different crop backgrounds. Having captured geo-referenced images in the field, image analysis software will be developed to identify weed species by Murray State and Reading Universities with advice from The Arable Group. A wide range of pattern recognition techniques, and in particular Bayesian networks, will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project as we intend to collect and correlate images collected at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms. Using such a list of features observable with our machine vision system, we will determine those that can be used to distinguish the weed species of interest (a classification sketch follows this entry).

3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that, by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced.

If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would, therefore, arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally-beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for the future, where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines.

Objectives: i. To develop a prototype machine vision system for automated image capture during agricultural field operations; ii. To prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; iii. To generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
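
As a hedged illustration of the classification step described in aspect 2 above, the sketch below feeds hypothetical per-plant colour/shape/texture features to a Gaussian naive Bayes classifier, a simple stand-in for the more general Bayesian networks named in the proposal; the feature names and values are invented for illustration.

```python
# Minimal sketch of the classification idea: per-plant colour/shape features
# feeding a Bayesian classifier. GaussianNB stands in for the Bayesian networks
# named in the proposal; all feature names and values are illustrative only.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical training features: [mean green fraction, leaf aspect ratio, texture energy]
X_train = np.array([
    [0.62, 8.5, 0.12],   # black-grass-like (long, narrow leaves)
    [0.58, 7.9, 0.15],
    [0.55, 2.1, 0.30],   # cleavers-like (broader leaves)
    [0.60, 2.4, 0.28],
])
y_train = np.array(["black-grass", "black-grass", "cleavers", "cleavers"])

model = GaussianNB().fit(X_train, y_train)

# Features extracted from one segmented plant in a new geo-referenced image
X_new = np.array([[0.61, 8.2, 0.13]])
print(model.predict(X_new))        # predicted weed species for the map
print(model.predict_proba(X_new))  # class probabilities usable in a weed map
```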

Relevance:

30.00%

Publisher:

Abstract:

The use of mobile applications has grown dramatically in recent years, and these applications interact with many systems. Higher demands are therefore placed on quality, and applications must be adapted to many different devices, operating systems and platforms. This has made testing of mobile applications more important and more extensive. This work was conducted as a comparative case study in the area of mobile application testing and test tools. One aim was to describe how mobile applications are tested today, which was done through literature studies and interviews with IT companies. Another aim was to evaluate four test tools, their advantages and disadvantages, how they can be used for testing mobile applications, and how they compare with manual testing without test tools. This was done by building first-hand experience from using the test tools. Throughout the work we used mobile applications provided by Triona, our project partner. Today there are many different test tools that can support testing, but few companies have adopted one, since doing so requires both time and competence, and choosing a test tool can be difficult. The test tools have different advantages and disadvantages, so their suitability depends on the type of project and application. Advantages of using test tools include the ability to automate, to test on several devices simultaneously, and to access devices via the cloud. The challenges are that a test tool can be difficult to install and learn, and that licences can be expensive. It is therefore important to know, even before implementation, which tests and applications the test tool will be used for and who will use it. Based on our study, we conclude that no test tool is fully comprehensive, but they can contribute different functions that make parts of mobile application testing more efficient.

Relevance:

30.00%

Publisher:

Abstract:

Objective To describe the diagnostic performance of SolarScan (Polartechnics Ltd, Sydney, Australia), an automated instrument for the diagnosis of primary melanoma.

Design Images from a data set of 2430 lesions (382 were melanomas; median Breslow thickness, 0.36 mm) were divided into a training set and an independent test set at a ratio of approximately 2:1. A diagnostic algorithm (absolute diagnosis of melanoma vs benign lesion and estimated probability of melanoma) was developed and its performance described on the test set. High-quality clinical and dermoscopy images with a detailed patient history for 78 lesions (13 of which were melanomas) from the test set were given to various clinicians to compare their diagnostic accuracy with that of SolarScan.

Setting Seven specialist referral centers and 2 general practice skin cancer clinics from 3 continents. Comparison between clinician diagnosis and SolarScan diagnosis was made by 3 dermoscopy experts, 4 dermatologists, 3 trainee dermatologists, and 3 general practitioners.

Patients Images of the melanocytic lesions were obtained from patients who required either excision or digital monitoring to exclude malignancy.

Main Outcome Measures Sensitivity, specificity, the area under the receiver operating characteristic curve, median probability for the diagnosis of melanoma, a direct comparison of SolarScan with diagnoses performed by humans, and interinstrument and intrainstrument reproducibility.

Results The melanocytic-only diagnostic model was highly reproducible in the test set and gave a sensitivity of 91% (95% confidence interval [CI], 86%-96%) and a specificity of 68% (95% CI, 64%-72%) for melanoma. SolarScan had comparable or superior sensitivity and specificity (85% vs 65%) compared with those of experts (90% vs 59%), dermatologists (81% vs 60%), trainees (85% vs 36%; P = .06), and general practitioners (62% vs 63%). The intraclass correlation coefficient of intrainstrument repeatability was 0.86 (95% CI, 0.83-0.88), indicating excellent repeatability. There was no significant interinstrument variation (P = .80).

Conclusions SolarScan is a robust diagnostic instrument for pigmented or partially pigmented melanocytic lesions of the skin. Preliminary data suggest that its performance is comparable or superior to that of a range of clinician groups. However, these findings should be confirmed in a formal clinical trial.
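
For illustration only, the sketch below shows how the listed outcome measures (sensitivity, specificity and the area under the ROC curve) can be computed from an instrument's estimated melanoma probabilities; the labels and probabilities are hypothetical, not SolarScan output.

```python
# Sketch: sensitivity, specificity and ROC area from estimated melanoma
# probabilities. Labels and probabilities below are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 0, 0, 1, 0, 0, 1, 0])                 # 1 = melanoma
p_mel = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.8, 0.3, 0.2, 0.6, 0.5])

auc = roc_auc_score(y_true, p_mel)

y_pred = (p_mel >= 0.5).astype(int)                               # threshold for a binary call
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```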

Relevance:

30.00%

Publisher:

Abstract:

A system that can automatically detect nodules within lung images may assist expert radiologists in interpreting abnormal patterns as nodules in 2D CT lung images. A system is presented that can automatically identify nodules of various sizes within lung images. A pattern classification method is employed to develop the proposed system. A random forest ensemble classifier is formed from many weak learners, each of which grows a decision tree; the forest selects the decision with the most votes. The developed system consists of two random forest classifiers connected in series. A subset of CT lung images from the LIDC database, consisting of 5721 images, is employed to train and test the system; 411 of these images contain nodules identified by expert radiologists. Training sets consisting of nodule, non-nodule and false-detection patterns are constructed, and a collection of test images is also built. The first classifier is developed to detect all nodules; the second classifier is developed to eliminate the false detections produced by the first. According to the experimental results, a true positive rate of 100% and a false positive rate of 1.4 per lung image are achieved.
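
A minimal sketch of the two-stage cascade described above is shown below, using scikit-learn random forests. Feature extraction from the CT images is omitted, so the arrays are random placeholders; in the study, the second stage is trained on false detections produced by the first.

```python
# Minimal sketch of a two-stage random forest cascade: stage 1 flags candidate
# nodules, stage 2 filters its false detections. X_*/y_* are placeholders for
# features extracted from CT images, not LIDC data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train1 = rng.normal(size=(200, 16)); y_train1 = rng.integers(0, 2, 200)  # nodule vs non-nodule
X_train2 = rng.normal(size=(120, 16)); y_train2 = rng.integers(0, 2, 120)  # true vs false detection

stage1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train1, y_train1)
stage2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train2, y_train2)

def detect_nodules(candidate_features: np.ndarray) -> np.ndarray:
    """Return features of candidates kept by both stages of the cascade."""
    kept = candidate_features[stage1.predict(candidate_features) == 1]
    if len(kept) == 0:
        return kept
    return kept[stage2.predict(kept) == 1]

detections = detect_nodules(rng.normal(size=(30, 16)))
print(f"{len(detections)} candidate nodules survive both stages")
```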

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE. To compare frequency-doubling technology (FDT) perimetry with standard automated perimetry (SAP) for detecting glaucomatous visual field progression in a longitudinal prospective study.

METHODS. One eye of patients with open-angle glaucoma was tested every 6 months with both FDT and SAP. A minimum of 6 examinations with each perimetric technique was required for inclusion. Visual field progression was determined by two methods: glaucoma change probability (GCP) analysis and linear regression analysis (LRA). For GCP, several criteria for progression were used. The number of progressing locations required to classify progression with FDT and SAP, respectively, was 1:2 (least conservative), 1:3, 2:3, 2:4, 2:6, 2:7, 3:6, 3:7, and 3:10 (most conservative). The number of consecutive examinations required to confirm progression was 2-of-3, 2-of-2, and 3-of-3. For LRA, the progression criterion was any significant decline in mean threshold sensitivity over time in each of the following three visual field subdivisions: (1) all test locations, (2) locations in the central 10° and the superior and inferior hemifields, and (3) locations in each quadrant. Using these criteria, the proportion of patients classified as showing progression with each perimetric technique was calculated and, in the case of progression with both, the differences in time to progression were determined.

RESULTS. Sixty-five patients were followed for a median of 3.5 years (median number of examinations, 9). For the least conservative GCP criterion, 32 (49%) patients were found to have progressing visual fields with FDT and 32 (49%) with SAP. Only 16 (25%) patients showed progression with both methods, and in most of those patients FDT identified progression before SAP (median, 12 months earlier). The majority of GCP progression criteria (15/27) classified more patients as showing progression with FDT than with SAP. Contrary to this, more patients showed progression with SAP than with FDT when analysed with LRA; e.g., using quadrant LRA, 20 (31%) patients showed progression with FDT, 23 (35%) with SAP, and only 10 (15%) with both.

CONCLUSIONS. FDT perimetry detected glaucomatous visual field progression. However, the proportion of patients who showed progression with both FDT and SAP was small, possibly indicating that the two techniques identify different subgroups of patients. Using GCP, more patients showed progression with FDT than with SAP, yet the opposite occurred using LRA. As there is no independent qualifier of progression, FDT and SAP progression rates vary depending on the method of analysis and the criterion used.
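
A minimal sketch of the LRA-style criterion described in the Methods above: regress mean threshold sensitivity on follow-up time and flag progression when the slope is significantly negative. The series is hypothetical, and a two-sided test is assumed, which may differ from the authors' implementation.

```python
# Sketch of a linear-regression (LRA) progression criterion: mean threshold
# sensitivity regressed on follow-up time. The series below is hypothetical.
import numpy as np
from scipy.stats import linregress

years = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])                     # exam dates
mean_sensitivity = np.array([28.1, 27.8, 27.9, 27.2, 26.9, 26.5, 26.4, 25.9])  # dB

fit = linregress(years, mean_sensitivity)
progressing = fit.slope < 0 and fit.pvalue < 0.05   # an assumed significance criterion
print(f"slope = {fit.slope:.2f} dB/year, p = {fit.pvalue:.3f}, progressing = {progressing}")
```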

Relevance:

30.00%

Publisher:

Abstract:

Requirements validation is a crucial process for determining whether client-stakeholders' needs and expectations of a product are sufficiently correct and complete. Various requirements validation techniques have been used to evaluate the correctness and quality of requirements, but most of these techniques are tedious, expensive and time consuming. Accordingly, most project members are reluctant to invest their time and effort in the requirements validation process. Moreover, automated tool support that promotes effective collaboration between the client-stakeholders and the engineers is still lacking. In this paper, we describe a novel approach that combines prototyping and test-based requirements techniques to improve the requirements validation process and promote better communication and collaboration between requirements engineers and client-stakeholders. To assess the potential of this prototype tool, we also present three types of evaluation conducted on it: a usability survey, a three-tool comparison analysis and expert reviews.
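
As a hedged illustration of the test-based requirements idea, the sketch below expresses a hypothetical client-visible requirement directly as an automated acceptance test that can be run against a prototype stub; the requirement, names and rule are invented for illustration and are not from the paper's tool.

```python
# Illustration of "test-based requirements": a requirement written as an
# executable acceptance test against a prototype stub. Everything here
# (requirement text, function name, discount rule) is hypothetical.
def applicable_discount(order_total: float) -> float:
    """Prototype stub: orders of 100 or more receive a 10% discount."""
    return 0.10 if order_total >= 100 else 0.0

def test_requirement_r12_bulk_discount():
    # R12 (hypothetical): "Orders of 100 or more shall receive a 10% discount."
    assert applicable_discount(100) == 0.10
    assert applicable_discount(99.99) == 0.0

if __name__ == "__main__":
    test_requirement_r12_bulk_discount()
    print("Requirement R12 validated against the prototype")
```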