14 results for Automatic Image Annotation

at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 100.00%

Abstract:

The mismatch between human capacity and the acquisition of Big Data such as Earth imagery undermines commitments to Convention on Biological Diversity (CBD) and Aichi targets. Artificial intelligence (AI) solutions to Big Data issues are urgently needed as these could prove to be faster, more accurate, and cheaper. Reducing costs of managing protected areas in remote deep waters and in the High Seas is of great importance, and this is a realm where autonomous technology will be transformative.

Relevance: 80.00%

Abstract:

Introduction: Fewer than 50% of adults and 40% of youth meet US CDC guidelines for physical activity (PA), with the built environment (BE) a culprit for limited PA. A challenge in evaluating policy and BE change is the forethought to capture a priori PA behaviors and the ability to eliminate bias in post-change environments. The present objective was to analyze existing public data feeds to quantify the effectiveness of BE interventions. The Archive of Many Outdoor Scenes (AMOS) has collected 135 million images of outdoor environments from 12,000 webcams since 2006. Many of these environments have experienced BE change.
Methods: One example of BE change is the addition of protected bike lanes and a bike share program in Washington, DC. We selected an AMOS webcam that captured this change. AMOS captures a photograph from each webcam every half hour. AMOS captured the 120 webcam photographs between 0700 and 1900 during the first work week of June 2009 and the 120 photographs from the same week in 2010. We used the Amazon Mechanical Turk (MTurk) website to crowd-source the image annotation. MTurk workers were paid US$0.01 to mark each pedestrian, cyclist and vehicle in a photograph. Each image was coded 5 unique times (n=1200). The data, counts of transportation mode, were downloaded to SPSS for analysis.
Results: The number of cyclists per scene increased four-fold between 2009 and 2010 (F=36.72, p=0.002). There was no significant increase in pedestrians between the two years; however, there was a significant increase in the number of vehicles per scene (F=16.81, p
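The comparison above rests on a one-way ANOVA over per-scene counts. A minimal sketch of that F statistic in pure Python, on synthetic counts (the numbers below are illustrative, not the AMOS data):

```python
# Illustrative sketch (not the study's code): one-way ANOVA F statistic
# comparing per-scene counts between two years.

def one_way_anova_f(group_a, group_b):
    """Return the F statistic for two groups of per-scene counts."""
    all_vals = group_a + group_b
    grand_mean = sum(all_vals) / len(all_vals)
    mean_a = sum(group_a) / len(group_a)
    mean_b = sum(group_b) / len(group_b)
    # Between-group sum of squares (k - 1 = 1 degree of freedom).
    ss_between = len(group_a) * (mean_a - grand_mean) ** 2 \
               + len(group_b) * (mean_b - grand_mean) ** 2
    # Within-group sum of squares (N - k degrees of freedom).
    ss_within = sum((x - mean_a) ** 2 for x in group_a) \
              + sum((x - mean_b) ** 2 for x in group_b)
    df_between = 1
    df_within = len(all_vals) - 2
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical cyclists-per-scene counts for five scenes in each year.
counts_2009 = [1, 0, 2, 1, 1]
counts_2010 = [4, 5, 3, 6, 4]
f = one_way_anova_f(counts_2009, counts_2010)
```

In practice a statistics package (SPSS, as in the study) would also report the p-value; the sketch only shows where the F value comes from.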

Relevance: 40.00%

Abstract:

Objective: Molecular pathology relies on identifying anomalies using PCR or analysis of DNA/RNA. This is important in solid tumours, where molecular stratification of patients defines targeted treatment. These molecular biomarkers rely on examination of tumour, annotation for possible macro-dissection/tumour cell enrichment, and the estimation of % tumour. Manually marking up tumour is error prone.
Method: We have developed a method for automated tumour mark-up and % cell calculations using image analysis, called TissueMark®, based on texture analysis for lung, colorectal and breast (cases = 245, 100 and 100 respectively). Pathologists marked slides for tumour and reviewed the automated analysis. A subset of slides was manually counted for tumour cells to provide a benchmark for automated image analysis.
Results: There was a strong concordance between pathological and automated mark-up (100% acceptance rate for macro-dissection). We also showed a strong concordance between manually and automatically drawn boundaries (median exclusion/inclusion error of 91.70%/89%). EGFR mutation analysis was precisely the same for manual and automated annotation-based macro-dissection. The annotation accuracy rates in breast and colorectal cancer were 83% and 80% respectively. Finally, region-based estimations of tumour percentage using image analysis showed significant correlation with actual cell counts.
Conclusion: Image analysis can be used for macro-dissection to (i) annotate tissue for tumour and (ii) estimate the % tumour cells, and represents an approach to standardising/improving molecular diagnostics.
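The two quantities the method reports, % tumour cells within an annotated region and the agreement between manual and automatic boundaries, can be sketched as below. This is an assumed simplification for illustration; TissueMark's actual texture-analysis algorithm is not reproduced here, and the agreement metric is a generic overlap measure, not necessarily the paper's exact definition of exclusion/inclusion error.

```python
# Hedged sketch: % tumour estimation and boundary agreement.

def tumour_percentage(tumour_cells, total_cells):
    """Percentage of tumour cells among all cells in the annotated region."""
    if total_cells == 0:
        raise ValueError("region contains no cells")
    return 100.0 * tumour_cells / total_cells

def boundary_agreement(manual_pixels, auto_pixels):
    """Overlap between manual and automatic tumour annotations.

    Returns (inclusion, precision): the fraction of manually marked
    pixels captured by the automatic boundary, and the fraction of
    automatically included pixels that were manually marked.
    """
    manual, auto = set(manual_pixels), set(auto_pixels)
    overlap = len(manual & auto)
    return overlap / len(manual), overlap / len(auto)
```

A threshold on the tumour percentage would then decide whether macro-dissection/enrichment is needed before molecular testing.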

Relevance: 30.00%

Abstract:

A novel, fast automatic motion segmentation approach is presented. It differs from conventional pixel- or edge-based motion segmentation approaches in that it uses labelled regions (facets) to segment video objects from the background. Facets are clustered into objects based on their motion and proximity using Bayesian logic. Because the number of facets is usually much lower than the number of edges and points, using facets can greatly reduce the computational complexity of motion segmentation. The proposed method can efficiently tackle the complexity of video object motion tracking, and offers potential for real-time content-based video annotation.
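The core idea, grouping facets into objects by motion and proximity, can be sketched with a union-find merge. This is an assumed simplification: a fixed motion-distance threshold stands in for the paper's Bayesian decision, and the facet IDs and adjacency structure are hypothetical.

```python
# Illustrative sketch: cluster labelled facets into objects by merging
# adjacent facets with similar motion vectors (threshold rule standing in
# for the paper's Bayesian logic).

def cluster_facets(motions, adjacency, motion_thresh=1.0):
    """motions: {facet_id: (dx, dy)}; adjacency: iterable of (a, b) pairs.

    Returns {facet_id: cluster_root}; facets sharing a root form one object.
    """
    parent = {f: f for f in motions}

    def find(f):
        while parent[f] != f:
            parent[f] = parent[parent[f]]  # path halving
            f = parent[f]
        return f

    for a, b in adjacency:
        (ax, ay), (bx, by) = motions[a], motions[b]
        # Merge neighbouring facets whose motion vectors are close.
        if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= motion_thresh:
            parent[find(a)] = find(b)

    return {f: find(f) for f in motions}
```

Because the merge runs over facets rather than pixels or edges, the work scales with the (much smaller) facet count, which is the complexity advantage the abstract claims.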

Relevance: 30.00%

Abstract:

A novel image segmentation method based on a constraint satisfaction neural network (CSNN) is presented. The new method uses CSNN-based relaxation, but with a modified scanning scheme of the image: pixels are visited at wider intervals and with wider neighborhoods in the first level of the algorithm, and the intervals between pixels and their neighborhoods are reduced in the following stages. This contributes to the rapid and consistent formation of more regular segments. A cluster validity index to determine the number of segments is also added, making the proposed method a fully automatic unsupervised segmentation scheme. The results are compared quantitatively by means of a novel segmentation evaluation criterion, and are promising.
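The coarse-to-fine scanning scheme can be sketched as a visiting schedule: wide strides and wide neighbourhoods first, then progressively finer ones. The specific strides and the stride-equals-radius rule below are assumptions for illustration, not the paper's parameters.

```python
# Hedged sketch of the modified scanning scheme: pixels are first visited
# at wide intervals with wide neighbourhoods, then at finer intervals in
# later levels.

def scanning_schedule(width, height, strides=(8, 4, 2, 1)):
    """Yield (level, x, y, neighbourhood_radius) visits, coarse to fine."""
    for level, stride in enumerate(strides):
        radius = stride  # wider neighbourhood at coarser levels (assumption)
        for y in range(0, height, stride):
            for x in range(0, width, stride):
                yield level, x, y, radius
```

Each visit would trigger one CSNN relaxation update for that pixel over its current neighbourhood; early coarse levels settle the large regions, and the fine levels refine the boundaries.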

Relevance: 30.00%

Abstract:

Color segmentation of images usually requires a manual selection and classification of samples to train the system. This paper presents an automatic system that performs these tasks without the need for lengthy training, providing a useful tool to detect and identify figures. In real situations, the training process must be repeated if light conditions change, or if, in the same scenario, the colors of the figures and the background have changed, so a fast training method is valuable. A direct application of this method is the detection and identification of football players.
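One way to make retraining fast and automatic is to cluster sample pixels into colour classes and classify new pixels by nearest centroid; retraining under new lighting is then just a re-run on fresh pixels. This is an assumed stand-in for the paper's training procedure, not its actual algorithm.

```python
# Illustrative sketch: automatic colour-class training via a tiny k-means,
# so the system can be retrained quickly when lighting changes.

def kmeans_colours(pixels, k, iters=20):
    """pixels: list of (r, g, b) tuples; returns k centroid colours."""
    centroids = pixels[:k]  # simple deterministic initialisation
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in pixels:
            i = min(range(k), key=lambda c: sum((p[d] - centroids[c][d]) ** 2
                                                for d in range(3)))
            buckets[i].append(p)
        centroids = [
            tuple(sum(p[d] for p in b) / len(b) for d in range(3)) if b
            else centroids[i]
            for i, b in enumerate(buckets)
        ]
    return centroids

def classify(pixel, centroids):
    """Index of the nearest colour class for a pixel."""
    return min(range(len(centroids)),
               key=lambda c: sum((pixel[d] - centroids[c][d]) ** 2
                                 for d in range(3)))
```

For the football application, one class would typically capture each team's kit and another the pitch background.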

Relevance: 30.00%

Abstract:

Realising high-performance image and signal processing applications on modern FPGAs presents a challenging implementation problem due to the large data frames streaming through these systems. Specifically, to meet the high bandwidth and data storage demands of these applications, complex hierarchical memory architectures must be manually specified at the Register Transfer Level (RTL). Automated approaches which convert high-level operation descriptions, for instance in the form of C programs, to an FPGA architecture are unable to automatically realise such architectures. This paper presents a solution to this problem: a compiler that automatically derives such memory architectures from a C program. By transforming the input C program to a unique dataflow modelling dialect, known as Valved Dataflow (VDF), a mapping and synthesis approach developed for this dialect can be exploited to automatically create high-performance image and video processing architectures. Memory-intensive C kernels for Motion Estimation (CIF frames at 30 fps), Matrix Multiplication (128x128 @ 500 iter/sec) and Sobel Edge Detection (720p @ 30 fps), which are unrealisable by current state-of-the-art C-based synthesis tools, are automatically derived from a C description of the algorithm.

Relevance: 30.00%

Abstract:

Background: Barrett's oesophagus (BO) is a well recognized precursor of the majority of cases of oesophageal adenocarcinoma (OAC). Endoscopic surveillance of BO patients is frequently undertaken in an attempt to detect early OAC, high grade dysplasia (HGD) or low grade dysplasia (LGD). However histological interpretation and grading of dysplasia is subjective and poorly reproducible. The alternative flow cytometry and cytology-preparation image cytometry techniques require large amounts of tissue and specialist expertise which are not widely available for frontline health care.
Methods: This study combined whole slide imaging with DNA image cytometry to provide a novel method for the detection and quantification of abnormal DNA content. 20 cases were evaluated, including 8 Barrett's specialised intestinal metaplasia (SIM), 6 LGD and 6 HGD. Feulgen-stained oesophageal sections (1µm thickness) were digitally scanned in their entirety and evaluated to select regions of interest and abnormalities. Barrett's mucosa was then interactively chosen for automatic nuclei segmentation, in which irrelevant cell types were ignored. The combined DNA content histogram for all selected image regions was then obtained. In addition, histogram measurements, including the 5c exceeding ratio (xER-5C), 2c deviation index (2cDI) and DNA grade of malignancy (DNA-MG), were computed.
Results: The histogram measurements xER-5C, 2cDI and DNA-MG were shown to be effective in differentiating SIM from HGD, SIM from LGD, and LGD from HGD. All three measurements discriminated SIM from HGD cases with statistical significance (p(xER-5C) = 0.0041, p(2cDI) = 0.0151 and p(DNA-MG) = 0.0057). Statistical significance was also achieved in differentiating SIM from LGD samples, with p(xER-5C) = 0.0019, p(2cDI) = 0.0023 and p(DNA-MG) = 0.0030. Furthermore, the differences between LGD and HGD cases were statistically significant (p(xER-5C) = 0.0289, p(2cDI) = 0.0486 and p(DNA-MG) = 0.0384).
Conclusion: Whole slide image cytometry is a novel and effective method for the detection and quantification of abnormal DNA content in BO. Compared with manual histological review, the proposed method is more objective and reproducible. Compared with flow cytometry and cytology-preparation image cytometry, it is low cost, simple to use and requires only a single 1µm tissue section. Whole slide image cytometry could assist the routine clinical diagnosis of dysplasia in BO, which is relevant to the risk of future progression to OAC.
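Two of the histogram measurements have compact standard definitions and can be sketched directly from a list of per-nucleus DNA contents in c units (DNA-MG is omitted here, since its coefficients are not given in the abstract):

```python
# Sketch of two histogram measurements from their standard definitions.
# Inputs are per-nucleus DNA contents in c units (diploid = 2c).

def xer_5c(dna_contents):
    """5c exceeding ratio: fraction of nuclei with DNA content above 5c."""
    return sum(1 for c in dna_contents if c > 5.0) / len(dna_contents)

def two_c_deviation_index(dna_contents):
    """2c deviation index: mean squared deviation from the diploid 2c value."""
    return sum((c - 2.0) ** 2 for c in dna_contents) / len(dna_contents)
```

A purely diploid population gives xER-5C = 0 and 2cDI = 0; aneuploid populations, as in dysplastic BO, push both measures up.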

Relevance: 30.00%

Abstract:

In this paper we propose a novel automated glaucoma detection framework for mass screening that operates on inexpensive retinal cameras. The proposed methodology is based on the assumption that discriminative features for glaucoma diagnosis can be extracted from the optic nerve head structures, such as the cup-to-disc ratio or the neuro-retinal rim variation. After automatically segmenting the cup and optic disc, these features are fed into a machine learning classifier. Experiments were performed using two different datasets, and the results obtained show that the proposed technique provides better performance than appearance-based approaches. A main advantage of our approach is that it requires only a few training samples to provide high accuracy over several different glaucoma stages.
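The cup-to-disc ratio feature named above can be sketched with a toy decision rule. The 0.6 cut-off is a common clinical rule of thumb used here only for illustration; the paper's actual system trains a classifier over several structural features rather than thresholding one.

```python
# Hedged sketch: cup-to-disc ratio (CDR) feature with a toy threshold rule.

def cup_to_disc_ratio(cup_diameter, disc_diameter):
    """CDR from the segmented cup and optic disc diameters (same units)."""
    if disc_diameter <= 0:
        raise ValueError("disc diameter must be positive")
    return cup_diameter / disc_diameter

def glaucoma_suspect(cdr, threshold=0.6):
    """Flag an eye as a glaucoma suspect when the CDR exceeds the threshold."""
    return cdr > threshold
```

In the full framework this single rule is replaced by a trained classifier that also sees features such as neuro-retinal rim variation.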

Relevance: 30.00%

Abstract:

We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics, to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic sources detected in difference images, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets.
We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
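The stochastic model used for AGN-like variability, the Ornstein-Uhlenbeck process, can be simulated with a simple Euler-Maruyama step; the sketch below shows its defining mean-reverting behaviour. Parameter values are arbitrary and for illustration only; the paper fits the process to difference-flux series rather than simulating it.

```python
# Illustrative sketch: Euler-Maruyama simulation of the Ornstein-Uhlenbeck
# process, dx = theta * (mu - x) dt + sigma dW, used to model stochastic
# (AGN-like) variability.
import random

def simulate_ou(n_steps, dt=0.1, mu=0.0, theta=1.0, sigma=0.5, x0=5.0,
                seed=42):
    """Return a mean-reverting flux series of length n_steps + 1."""
    rng = random.Random(seed)
    x = x0
    series = [x]
    for _ in range(n_steps):
        # Deterministic pull toward mu plus Gaussian driving noise.
        x += theta * (mu - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
        series.append(x)
    return series
```

In the classification pipeline, the fitted OU likelihood competes with the deterministic burst models; sources better described by this kind of mean-reverting noise fall on the SV side of the K-means clustering.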