300 results for annotated image database
Abstract:
Highly sensitive infrared cameras can produce high-resolution diagnostic images of the temperature and vascular changes of breasts. Wavelet-transform-based features are well suited to extracting the texture difference information of these images due to their scale-space decomposition. The objective of this study is to investigate the potential of extracted features in differentiating between breast lesions by comparing the two corresponding pectoral regions of two breast thermograms. The pectoral regions of the breasts are important because nearly 50% of all breast cancers are located in this region. In this study, the pectoral region of the left breast is selected, and then the corresponding pectoral region of the right breast is identified. Texture features based on first- and second-order statistics are extracted from wavelet-decomposed images of the pectoral regions of the two breast thermograms. Principal component analysis is used to reduce dimensionality, and an AdaBoost classifier is used to evaluate classification performance. A number of different wavelet features are compared, and it is shown that complex non-separable 2D discrete wavelet transform features perform better than their real separable counterparts.
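The feature pipeline described above (wavelet decomposition, texture statistics, PCA) can be sketched in Python. This is a minimal illustration using a plain separable Haar wavelet and first-order statistics only; the study's complex non-separable wavelets, second-order statistics and AdaBoost stage are not reproduced here, and all function names are ours.

```python
import numpy as np

def haar2d(img):
    """One level of a separable 2D Haar wavelet transform.
    Returns approximation (LL) and detail (LH, HL, HH) subbands.
    (A stand-in for the paper's complex non-separable transform,
    used here only to illustrate the feature pipeline.)"""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row differences
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

def texture_features(img, levels=2):
    """First-order statistics (mean, std, energy) of each detail
    subband across decomposition levels."""
    feats = []
    ll = img.astype(float)
    for _ in range(levels):
        ll, lh, hl, hh = haar2d(ll)
        for band in (lh, hl, hh):
            feats += [band.mean(), band.std(), np.mean(band ** 2)]
    return np.array(feats)

def pca_reduce(X, k):
    """Project feature vectors onto the top-k principal components
    (dimensionality reduction step before classification)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T
```

In use, one feature vector would be extracted per pectoral region, with the reduced vectors then fed to a classifier.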
Abstract:
Microvessel density (MVD) is a widely used surrogate measure of angiogenesis in pathological specimens and tumour models. MVD can be measured by several methods, and automating the counting aims to increase the speed, reliability and reproducibility of these techniques. The image analysis system described here enables MVD measurement to be carried out with minimal expense in any reasonably equipped pathology department or laboratory. It is demonstrated that, with minimal calibration, the system translates easily between suitably stained tumour types. The aim of this paper is to offer this technique to a wider field of researchers in angiogenesis.
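At its simplest, the counting step of an automated MVD system reduces to labelling connected stained regions in a thresholded image. A toy sketch (pure Python, 4-connectivity, with a size filter to suppress speckle noise; not the authors' actual system):

```python
def count_microvessels(mask, min_size=2):
    """Count connected stained regions (4-connectivity) in a binary
    mask, ignoring blobs smaller than min_size pixels."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # flood fill to measure this blob
                stack, size = [(i, j)], 0
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count
```

A real pipeline would precede this with colour deconvolution or thresholding of the immunostain, which is where most of the calibration effort lies.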
Abstract:
There are several methods for determining the proteoglycan content of cartilage in biomechanics experiments. Many of these are assay-based, using histochemistry or spectrophotometry protocols in which quantification is biochemically determined. More recently, a method has emerged that quantifies proteoglycan content by applying image processing algorithms (e.g., in ImageJ) to histological micrographs, with advantages including time savings and low cost. However, it is unknown whether this image analysis method produces results comparable to those obtained from the biochemical methodology. This paper compares the results of a well-established chemical method with those obtained using image analysis to determine the proteoglycan content of visually normal cartilage samples (n=33) and their progressively degraded counterparts. The results reveal a strong linear relationship with a regression coefficient (R2) of 0.9928, leading to the conclusion that the image analysis methodology is a viable alternative to spectrophotometry.
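The comparison of image-derived and biochemical measurements comes down to a least-squares line and its coefficient of determination. A small sketch of that computation (NumPy; the data in the test are invented, not the study's):

```python
import numpy as np

def linear_fit_r2(x, y):
    """Least-squares line y ~ a*x + b and coefficient of
    determination R^2, as used to compare image-analysis readings
    against biochemical (spectrophotometric) measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b = np.polyfit(x, y, 1)           # slope and intercept
    resid = y - (a * x + b)
    ss_res = np.sum(resid ** 2)          # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2) # total sum of squares
    return a, b, 1.0 - ss_res / ss_tot
```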
De Novo Transcriptome Sequence Assembly and Analysis of RNA Silencing Genes of Nicotiana benthamiana
Abstract:
Background: Nicotiana benthamiana has been widely used for transient gene expression assays and as a model plant in the study of plant-microbe interactions, lipid engineering and RNA silencing pathways. Assembling the sequence of its transcriptome provides information that, in conjunction with the genome sequence, will facilitate gaining insight into the plant's capacity for high-level transient transgene expression, generation of mobile gene silencing signals, and hyper-susceptibility to viral infection. Methodology/Results: RNA-seq libraries from 9 different tissues were deep sequenced and assembled, de novo, into a representation of the transcriptome. The assembly, of 16 GB of sequence, yielded 237,340 contigs, clustering into 119,014 transcripts (unigenes). Between 80 and 85% of reads from all tissues could be mapped back to the full transcriptome. Approximately 63% of the unigenes exhibited a match to the Solgenomics tomato predicted proteins database. Approximately 94% of the Solgenomics N. benthamiana unigene set (16,024 sequences) matched our unigene set (119,014 sequences). Using homology searches, we identified 31 homologues of genes involved in RNAi-associated pathways in Arabidopsis thaliana, and show that they possess the domains characteristic of these proteins. Of these genes, the RNA-dependent RNA polymerase gene, Rdr1, is transcribed but has a 72 nt insertion in exon 1 that would cause premature termination of translation. Dicer-like 3 (DCL3) appears to lack both the DEAD helicase motif and the second dsRNA binding motif, and DCL2 and AGO4b have unexpectedly high levels of transcription. Conclusions: The assembled and annotated representation of the transcriptome and the list of RNAi-associated sequences are accessible at www.benthgenome.com alongside a draft genome assembly. These genomic resources will be very useful for further study of the developmental, metabolic and defense pathways of N. benthamiana and in understanding the mechanisms behind the features which have made it such a well-used model plant. © 2013 Nakasugi et al.
Abstract:
Background Foot ulcers are a leading cause of avoidable hospital admissions and lower extremity amputations. However, large clinical studies describing foot ulcer presentations in the ambulatory setting are limited. The aim of this descriptive observational paper is to report the characteristics of ambulatory foot ulcer patients managed across 13 of 17 Queensland Health & Hospital Services. Methods Data on all foot ulcer patients registered with a Queensland High Risk Foot Form (QHRFF) were collected at their first consult in 2012. Data are automatically extracted from each QHRFF into a Queensland high risk foot database. Descriptive statistics display age, sex, ulcer types and co-morbidities. Statewide clinical indicators of foot ulcer management are also reported. Results Overall, 2,034 people presented with a foot ulcer in 2012. Mean age was 63 (±14) years and 67.8% were male. Co-morbidities included diabetes (85%), hypertension (49.7%), dyslipidaemia (39.2%), cardiovascular disease (25.6%), kidney disease (13.7%) and smoking (12.2%). Foot ulcer types included 51.6% neuropathic, 17.8% neuro-ischaemic, 7.2% ischaemic, 6.6% post-surgical and 16.8% other, whilst 31% were infected. Clinical indicator results revealed 98% had their wound categorised, 51% received non-removable offloading, median ulcer healing time was 6 weeks and 37% had ulcer recurrence. Conclusion This paper details the largest foot ulcer database reported in Australia. People presenting with foot ulcers appear predominantly older and male, with several co-morbidities. Encouragingly, it appears most patients are receiving best practice care. These results may be a factor in the significant reduction of Queensland diabetes foot-related hospitalisations and amputations recently reported.
Abstract:
Clustering identities in a broadcast video is a useful task to aid in video annotation and retrieval. Quality-based frame selection is a crucial task in video face clustering, both to improve the clustering performance and to reduce the computational cost. We present a framework that selects the highest-quality frames available in a video for face clustering. This frame selection technique is based on low-level and high-level features (face symmetry, sharpness, contrast and brightness) to select the highest-quality facial images available in a face sequence for clustering. We also consider the temporal distribution of the faces to ensure that selected faces are taken at times distributed throughout the sequence. Normalized feature scores are fused, and frames with high quality scores are used in a Local Gabor Binary Pattern Histogram Sequence based face clustering system. We present a news video database to evaluate the clustering system performance. Experiments on the newly created news database show that the proposed method selects the best quality face images in the video sequence, resulting in improved clustering performance.
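The quality-score fusion described above can be sketched as follows. This is an assumed simplification: it computes sharpness, contrast and brightness only (face symmetry and the temporal-spacing constraint are omitted), min-max normalises each score across the frames, and averages them.

```python
import numpy as np

def quality_scores(frames):
    """Per-frame quality measures: sharpness (variance of a Laplacian
    response), contrast (intensity std) and brightness closeness to
    mid-grey, for images with intensities in [0, 1]."""
    sharp, contrast, bright = [], [], []
    for f in frames:
        f = f.astype(float)
        # 4-neighbour Laplacian on the interior of the image
        lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
               + f[1:-1, :-2] + f[1:-1, 2:])
        sharp.append(lap.var())
        contrast.append(f.std())
        bright.append(1.0 - abs(f.mean() - 0.5) * 2.0)
    return np.array([sharp, contrast, bright]).T  # shape (n_frames, 3)

def select_frames(frames, k):
    """Min-max normalise each measure, fuse by averaging, and return
    the indices of the k highest-quality frames."""
    s = quality_scores(frames)
    span = s.max(axis=0) - s.min(axis=0)
    s = (s - s.min(axis=0)) / (span + 1e-12)
    fused = s.mean(axis=1)
    return np.argsort(fused)[::-1][:k]
```

The selected frames would then be passed to the face clustering stage.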
Abstract:
This thesis introduces improved techniques for automatically estimating the pose of humans from video. It examines a complete pose estimation workflow, from segmenting the raw video stream to extract silhouettes, to using the silhouettes to determine the relative orientation of parts of the human body. The proposed segmentation algorithms have improved performance and reduced complexity, while the pose estimation shows superior accuracy in difficult cases of self-occlusion.
Abstract:
In outdoor environments shadows are common. These typically strong visual features cause considerable change in the appearance of a place, and therefore confound vision-based localisation approaches. In this paper we describe how to convert a colour image of the scene to a greyscale invariant image where pixel values are a function of the underlying material properties, not the lighting. We summarise the theory of shadow invariant images and discuss the modelling and calibration issues which are important for non-ideal off-the-shelf colour cameras. We evaluate the technique with a commonly used robotic camera and an autonomous car operating in an outdoor environment, and show that it can outperform the use of ordinary greyscale images for the task of visual localisation.
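A common formulation of the shadow invariant image is a 1D projection of the 2D log-chromaticity of each pixel at a camera-specific invariant angle; a sketch under that assumption (in practice the angle must come from the calibration step the abstract mentions; the value in the test below is arbitrary):

```python
import numpy as np

def shadow_invariant(rgb, theta):
    """Greyscale illumination-invariant image via log-chromaticity
    projection. rgb: H x W x 3 array of linear intensities (> 0);
    theta: camera-specific invariant angle from calibration."""
    log_rg = np.log(rgb[..., 0] / rgb[..., 1])  # log(R/G)
    log_bg = np.log(rgb[..., 2] / rgb[..., 1])  # log(B/G)
    # project 2D log-chromaticity onto the invariant direction
    return np.cos(theta) * log_rg + np.sin(theta) * log_bg
```

Because chromaticity ratios cancel overall intensity, the result is unchanged by a uniform brightness change; with a correctly calibrated angle it is also insensitive to the illuminant colour-temperature shift that shadows introduce.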
Abstract:
The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks. Most of them, however, do not generalize well to large scale networks. To tackle this, we propose a statistical formulation of the problem together with a trans-dimensional simulated annealing algorithm to deal with it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that our approach offers similar performance on small scale problems. However, we also demonstrate the capability of our approach in dealing with large scale problems and show that our approach produces better results than two alternative heuristics designed to deal with the scalability issue of BIP. Lastly, we show the versatility of our approach using a number of specific scenarios.
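A trans-dimensional simulated annealing search can be sketched generically: proposal moves add, drop or relocate a camera, so the dimensionality of the solution (the number of cameras) is itself part of the search. The coverage objective below is a toy stand-in, not the paper's objective functions.

```python
import math
import random

def coverage(cams, targets, radius=3.0):
    """Toy objective: fraction of targets within radius of some
    camera, minus a small per-camera cost."""
    covered = sum(any(math.dist(t, c) <= radius for c in cams)
                  for t in targets)
    return covered / len(targets) - 0.05 * len(cams)

def anneal(candidates, targets, steps=2000, t0=1.0, seed=0):
    """Trans-dimensional simulated annealing: moves may add, drop or
    relocate a camera, so the number of cameras varies during search."""
    rng = random.Random(seed)
    cams = [rng.choice(candidates)]
    score = coverage(cams, targets)
    best, best_score = list(cams), score
    for i in range(steps):
        temp = t0 * (1.0 - i / steps) + 1e-9  # linear cooling schedule
        prop = list(cams)
        move = rng.random()
        if move < 0.3 and len(prop) > 1:
            prop.pop(rng.randrange(len(prop)))                       # drop
        elif move < 0.6:
            prop.append(rng.choice(candidates))                      # add
        else:
            prop[rng.randrange(len(prop))] = rng.choice(candidates)  # relocate
        s = coverage(prop, targets)
        # Metropolis acceptance: always take improvements, sometimes worse
        if s > score or rng.random() < math.exp((s - score) / temp):
            cams, score = prop, s
            if s > best_score:
                best, best_score = prop, s
    return best, best_score
```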
Abstract:
Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environmental tuning or pre-training. Using a real world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide superior performance to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems without the need for prior training or system tuning.
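One generic way to turn raw whole-image descriptor distances into a probability distribution over places, computed online from the distance population itself, is sketched below. This is in the spirit of, but not identical to, the statistical approach the abstract describes.

```python
import numpy as np

def match_probabilities(query_desc, map_descs):
    """Convert descriptor distances into a probability distribution
    over map places: z-score each distance against the population of
    distances for this query (no pre-training needed), then apply a
    softmax over the negated scores so smaller distances get higher
    probability."""
    d = np.linalg.norm(np.asarray(map_descs, float)
                       - np.asarray(query_desc, float), axis=1)
    z = (d - d.mean()) / (d.std() + 1e-12)
    logits = -z
    p = np.exp(logits - logits.max())  # numerically stable softmax
    return p / p.sum()
```

The resulting distribution can be handed to a probabilistic back-end mapper in place of heuristic match scores.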
Abstract:
Recent modelling of socio-economic costs by the Australian railway industry in 2010 has estimated the cost of level crossing accidents to exceed AU$116 million annually. To better understand causal factors that contribute to these accidents, the Cooperative Research Centre for Rail Innovation is running a project entitled Baseline Level Crossing Video. The project aims to improve the recording of level crossing safety data by developing an intelligent system capable of detecting near-miss incidents and capturing quantitative data around these incidents. To detect near-miss events at railway level crossings, a video analytics module is being developed to analyse video footage obtained from forward-facing cameras installed on trains. This paper presents a vision-based approach for the detection of these near-miss events. The video analytics module comprises object detectors and a rail detection algorithm, allowing the distance between a detected object and the rail to be determined. An existing publicly available Histograms of Oriented Gradients (HOG) based object detector algorithm is used to detect various types of vehicles in each video frame. As vehicles are usually seen from a side view from the cabin’s perspective, the results of the vehicle detector are verified using an algorithm that can detect the wheels of each detected vehicle. Rail detection is facilitated using a projective transformation of the video, such that the forward-facing view becomes a bird’s eye view. A Line Segment Detector is employed as the feature extractor, and a sliding window approach is developed to track a pair of rails. Localisation of the vehicles is done by projecting the results of the vehicle and rail detectors onto the ground plane, allowing the distance between the vehicle and rail to be calculated. The resultant vehicle positions and distances are logged to a database for further analysis.
We present preliminary results on the performance of a prototype video analytics module on a data set of videos covering more than 30 different railway level crossings, captured from the journey of a train passing through these crossings.
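The localisation step, projecting detections onto the ground plane with a homography and measuring point-to-rail distance, can be sketched as follows. The homography would come from camera calibration; the identity matrix used in the test is only a placeholder.

```python
import numpy as np

def to_ground_plane(H, pts):
    """Map image points to the ground plane with a 3x3 homography H
    (assumed known from calibration). pts: N x 2 pixel coordinates."""
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # dehomogenise

def distance_to_rail(point, rail_a, rail_b):
    """Perpendicular distance from a ground-plane point to the line
    through two detected rail points."""
    a, b, p = (np.asarray(v, float) for v in (rail_a, rail_b, point))
    ab, ap = b - a, p - a
    # 2D cross-product magnitude over baseline length
    return abs(ab[0] * ap[1] - ab[1] * ap[0]) / np.linalg.norm(ab)
```

With a calibrated H, detected vehicle footprints and rail points are mapped to metric ground coordinates before the distance is logged.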
Abstract:
A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most of these have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many data sets are constrained to small-scale or planar scenes that are not challenging to new algorithms, or require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high resolution video, computing an accurate sparse 3D reconstruction, video frame culling and downsampling, and test case selection. The evaluation process consists of applying a test 2-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process, or any part thereof, against data compatible with a realistic application. A collection of example data sets and evaluations is included to demonstrate the range of applications of the proposed system.
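Comparing a test method's result to ground truth typically includes an angular error between estimated and true rotations; a standard formula for that metric (a generic measure, not necessarily the one used in the paper):

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Angular error in degrees between an estimated and a ground-truth
    rotation matrix: the rotation angle of the relative rotation
    R_est @ R_gt.T, recovered from its trace."""
    R = R_est @ R_gt.T
    cos_a = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_a))
```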
Abstract:
Facial expression recognition (FER) systems must ultimately work on real data in uncontrolled environments, although most research studies have been conducted on lab-based data with posed or evoked facial expressions obtained in pre-set laboratory environments. It is very difficult to obtain data in real-world situations because privacy laws prevent unauthorized capture and use of video from events such as funerals, birthday parties, marriages, etc. It is a challenge to acquire such data on a scale large enough for benchmarking algorithms. Although video obtained from TV, movies or postings on the World Wide Web may also contain ‘acted’ emotions and facial expressions, it may be more ‘realistic’ than the lab-based data currently used by most researchers. Or is it? One way of testing this is to compare feature distributions and FER performance. This paper describes a database that has been collected from television broadcasts and the World Wide Web containing a range of environmental and facial variations expected in real conditions and uses it to answer this question. A fully automatic system that uses a fusion based approach for FER on such data is introduced for performance evaluation. Performance improvements arising from the fusion of point-based texture and geometry features, and the robustness to image scale variations, are experimentally evaluated on this image and video dataset. Differences in FER performance between lab-based and realistic data, between different feature sets, and between different train-test data splits are investigated.
Abstract:
The Australian Curriculum: English (AC:E) is being implemented in Queensland and asks teachers and curriculum designers to incorporate the cross-curriculum priority of Sustainability. This paper examines some texts suitable for inclusion in classroom study and suggests some companion texts that may be studied alongside them, including online resources by the ABC and those developed online for the Australian Curriculum. We also suggest some formative and summative assessment possibilities for responding to the selected works in this guide. We have endeavoured to investigate literature that enables students to explore and produce text types across the three AC:E categories: persuasive, imaginative and informative. The selected texts cover traditional novels, novellas, sci-fi and speculative fiction, non-fiction, documentary, feature film and animation. Some of the texts reviewed here also cover the other cross-curriculum priorities, including texts by Aboriginal and Torres Strait Islander writers and some which also include Asian representations. We have also indicated which of the AC:E general capabilities are addressed in each text.
Abstract:
Field robots often rely on laser range finders (LRFs) to detect obstacles and navigate autonomously. Despite recent progress in sensing technology and perception algorithms, adverse environmental conditions, such as the presence of smoke, remain a challenging issue for these robots. In this paper, we investigate the possibility of improving laser-based perception applications by anticipating situations when laser data are affected by smoke, using supervised learning and state-of-the-art visual image quality analysis. We propose to train a k-nearest-neighbour (kNN) classifier to recognise situations where a laser scan is likely to be affected by smoke, based on visual data quality features. This method is evaluated experimentally using a mobile robot equipped with LRFs and a visual camera. The strengths and limitations of the technique are identified and discussed, and we show that the method is beneficial if conservative decisions are the most appropriate.
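The classification step can be sketched as a plain k-nearest-neighbour vote over image-quality feature vectors. This is a generic kNN in NumPy, not the authors' exact feature set or implementation; the labels and toy features in the test are invented.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """k-nearest-neighbour vote: label each query scan (e.g.
    'smoke' vs 'clear') from the labels of the k closest training
    feature vectors under Euclidean distance."""
    X_train = np.asarray(X_train, float)
    X_query = np.asarray(X_query, float)
    y_train = np.asarray(y_train)
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)      # distances to all
        nearest = y_train[np.argsort(d)[:k]]         # k nearest labels
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])        # majority vote
    return np.array(preds)
```

A conservative deployment, as the abstract suggests, would discard or down-weight laser scans whose visual features are classified as smoke-affected.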