850 results for Text-Based Image Retrieval
Abstract:
Visual information is becoming increasingly important, and tools to manage repositories of media collections are highly sought after. In this paper, we focus on image databases and how to access them effectively and efficiently. In particular, we present effective image browsing systems that are operated on a large multi-touch environment for truly interactive exploration. Not only do image browsers pose a useful alternative to retrieval-based systems, they also provide a visualisation of the whole image collection and let users explore particular parts of the collection. Our systems are based on the idea that visually similar images are located close to each other in the visualisation, that image thumbnails are arranged on a regular lattice (either a regular grid projected onto a sphere or a hexagonal lattice), and that large image datasets can be accessed through a hierarchical tree structure. © 2014 International Information Institute.
Abstract:
Most research in the area of emotion detection in written text has focused on detecting explicit expressions of emotion. In this paper, we present a rule-based pipeline approach, based on the OCC Model, for detecting implicit emotions in written text that contains no emotion-bearing words. We have evaluated our approach on three different datasets with five emotion categories. Our results show that the proposed approach consistently outperforms the lexicon-matching method across all three datasets by a large margin of 17–30% in F-measure and gives competitive performance compared to a supervised classifier. In particular, when dealing with formal text that follows grammatical rules strictly, our approach gives an average F-measure of 82.7% on “Happy”, “Angry-Disgust” and “Sad”, even outperforming the supervised baseline by nearly 17% in F-measure. These preliminary results show the feasibility of the approach for the task of implicit emotion detection in written text.
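The lexicon-matching baseline the paper compares against is not specified in detail; a minimal sketch of such a baseline, with a toy lexicon that is purely illustrative and not the paper's resource, might look like this:

```python
from collections import Counter

# Toy lexicon; illustrative only, not the resources used in the paper.
EMOTION_LEXICON = {
    "joy": "Happy", "delighted": "Happy",
    "furious": "Angry-Disgust", "disgusting": "Angry-Disgust",
    "grief": "Sad", "mourning": "Sad",
}

def lexicon_label(text: str) -> str:
    """Label a text by its most frequent emotion-bearing word."""
    hits = Counter(EMOTION_LEXICON[w] for w in text.lower().split()
                   if w in EMOTION_LEXICON)
    # Texts with no emotion-bearing words (the implicit case the paper
    # targets) receive no label, which is exactly why this baseline fails there.
    return hits.most_common(1)[0][0] if hits else "None"

print(lexicon_label("we were delighted and full of joy"))  # -> Happy
```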
Abstract:
Building an interest model is key to realizing personalized text recommendation. Previous interest models neglect the fact that a user may have multiple angles of interest, and different angles of interest impose different requests and criteria on text recommendation. This paper proposes an interest model that consists of two kinds of angles, persistence and pattern, which can be combined to form complex angles. The model uses a new method to represent long-term and short-term interest, and distinguishes interest in an object from interest in the link structure of objects. Experiments with large-scale news text data show that interest in objects and interest in link structure correspond to real user requirements, and that recommending texts according to these angles is effective.
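The abstract does not give the paper's representation of long-term versus short-term interest; one common way to separate the two is exponentially decayed event counts with two different half-lives, sketched below. The decay constants and the whole construction are assumptions, not the authors' model:

```python
import math

def interest_score(timestamps, now, half_life_days):
    """Sum of exponentially decayed visit weights: a short half-life
    captures short-term interest, a long one long-term interest."""
    lam = math.log(2) / half_life_days
    return sum(math.exp(-lam * (now - t)) for t in timestamps)

visits = [0, 1, 30, 31, 32]   # days on which a user read texts on a topic
now = 33
short_term = interest_score(visits, now, half_life_days=3)
long_term = interest_score(visits, now, half_life_days=60)
print(short_term, long_term)  # recent bursts dominate the short-term score
```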
Abstract:
Image collections are ever growing, and hence efficient and effective tools to manage these repositories are highly sought after. In this paper, we present effective image browsing systems that are operated on a large multi-touch environment for truly interactive exploration. Not only do image browsers pose a useful alternative to retrieval-based systems, they also provide a visualisation of the whole image collection and allow users to interactively explore particular parts of the collection. Our systems are based on the idea that visually similar images are located close to each other in the visualisation, that image thumbnails are arranged on a regular lattice (either a regular grid projected onto a sphere or a hexagonal lattice), and that large image datasets can be accessed through a hierarchical tree structure. A pilot study has shown that the presented systems do indeed work well and are preferred over conventional image browsers. © 2011 IEEE.
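These abstracts describe the layout idea only at a high level; a minimal sketch of snapping visually similar images onto a regular grid, using an SVD/PCA projection as an assumed stand-in for whatever embedding and lattice-assignment scheme the systems actually use (collision handling omitted), could look like this:

```python
import numpy as np

def grid_layout(features: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Snap per-image feature vectors to lattice cells so that visually
    similar images land near each other in the layout."""
    centred = features - features.mean(axis=0)
    # 2-D embedding via the top two principal directions
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    xy = centred @ vt[:2].T
    # Normalise to [0, 1) and quantise onto the grid
    xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-9)
    cells = np.floor(xy * [cols, rows]).astype(int)
    return np.clip(cells, 0, [cols - 1, rows - 1])   # (col, row) per image

cells = grid_layout(np.random.rand(100, 64), rows=10, cols=10)
```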
Abstract:
We propose a novel template matching approach for discriminating between handwritten and machine-printed text. We first pre-process the scanned document images by performing denoising, circle/line exclusion and word-block level segmentation. We then align and match characters from a flexibly sized gallery against the segmented regions, using parallelised normalised cross-correlation. The experimental results on the Pattern Recognition & Image Analysis Research Lab-Natural History Museum (PRImA-NHM) dataset show remarkably high robustness of the algorithm in classifying cluttered, occluded and noisy samples, as well as those with significant amounts of missing data. The algorithm, which achieves an 84.0% classification rate with a false positive rate of 0.16 on the dataset, requires no training samples and produces compelling results compared to training-based approaches that have used the same benchmark.
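The core matching operation, normalised cross-correlation, is standard; a plain, unoptimised NumPy sketch of it follows (the paper's parallelised implementation is not reproduced here):

```python
import numpy as np

def ncc_map(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Normalised cross-correlation of a template over a grayscale image."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum()) + 1e-9
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = image[y:y + th, x:x + tw]
            wz = win - win.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm + 1e-9
            out[y, x] = (wz * t).sum() / denom
    return out  # values near 1 indicate a strong match

img = np.random.rand(32, 32)
score = ncc_map(img, img[8:16, 8:16])
print(np.unravel_index(score.argmax(), score.shape))  # -> (8, 8)
```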
Abstract:
The objectives of this research are to analyze and develop a modified Principal Component Analysis (PCA) and to develop a two-dimensional PCA with applications in image processing. PCA is a classical multivariate technique whose mathematical treatment is based purely on the eigensystem of positive-definite symmetric matrices. Its main function is to statistically transform a set of correlated variables into a new set of uncorrelated variables over $\mathbb{R}^n$ while retaining most of the variation present in the original variables. The variances of the Principal Components (PCs) obtained from the modified PCA form a correlation matrix of the original variables. The decomposition of this correlation matrix into a diagonal matrix produces an orthonormal basis that can be used to linearly transform the given PCs; it is this linear transformation that reproduces the original variables. The two-dimensional PCA can be devised as two successive applications of one-dimensional PCA. It can be shown that, for an $m \times n$ matrix, the PCs obtained from the two-dimensional PCA are the singular values of that matrix. In this research, several PCA-based applications for image analysis are developed, namely edge detection, feature extraction, and multi-resolution PCA decomposition and reconstruction.
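The claimed link between two-dimensional PCA and singular values can be checked numerically against the standard identity relating the singular values of $A$ to the eigensystem of the symmetric matrix $A A^\top$; the dissertation's exact construction may differ from this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))            # an m x n matrix

# Singular values of A, computed directly
sv = np.linalg.svd(A, compute_uv=False)

# Square roots of the eigenvalues of the symmetric matrix A A^T,
# i.e. the eigensystem classical PCA works with
eig = np.linalg.eigvalsh(A @ A.T)[::-1]    # descending order
print(np.allclose(sv, np.sqrt(np.clip(eig, 0, None))))   # -> True
```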
Abstract:
Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed Data Extractor, a data retrieval solution that allows us to define queries against Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system; with its help, database queries can be distributed over both local and Web data sources within the MSemODB framework. Data Extractor treats Web sites as data sources, controlling query execution and data retrieval, and works as an intermediary between applications and sites. Data Extractor utilizes a twofold “custom wrapper” approach to information retrieval: wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that utilize a specially designed library of data retrieval, parsing and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis and processing. Data Extractor is designed to act as a data retrieval server as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user. This approach allows Data Extractor to distribute and scale well. This study confirms the feasibility of building custom wrappers for Web sites: the approach provides accurate data retrieval, and power and flexibility in handling complex cases.
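Neither Data Extractor's scripting language nor its wrapper library is specified here; as a purely illustrative sketch of the "custom wrapper" idea, with a hypothetical URL and HTML pattern standing in for a real site definition:

```python
import re
import urllib.request

# Hypothetical repeating pattern for a two-column HTML result table.
ROW = re.compile(r"<td>(?P<title>.*?)</td>\s*<td>(?P<price>[\d.]+)</td>", re.S)

def wrap(url: str) -> list[tuple[str, float]]:
    """Fetch a page and map each pattern match to a structured record,
    the way a site wrapper would emit rows to the query processor."""
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    return [(m["title"], float(m["price"])) for m in ROW.finditer(html)]
```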
Abstract:
With the proliferation of multimedia data and ever-growing demand for multimedia applications, there is an increasing need for efficient and effective indexing, storage and retrieval of multimedia data such as graphics, images, animation, video, audio and text. Due to the special characteristics of multimedia data, Multimedia Database Management Systems (MMDBMSs) have emerged and attracted great research attention in recent years. Though much research effort has been devoted to this area, it is still far from maturity and many open issues remain. In this dissertation, focusing on three of the essential challenges in developing an MMDBMS, namely the semantic gap, perception subjectivity and data organization, a systematic and integrated framework is proposed, with a video database and an image database serving as the testbed. In particular, the framework addresses these challenges separately yet coherently from the three main aspects of an MMDBMS: multimedia data representation, indexing and retrieval. In terms of multimedia data representation, the key to addressing the semantic gap is to intelligently and automatically model mid-level representations and/or semi-semantic descriptors in addition to extracting low-level media features. The data organization challenge is mainly addressed in the media indexing aspect, where various levels of indexing are required to support diverse query requirements. In particular, the focus of this study is to facilitate high-level video indexing by proposing a multimodal event mining framework associated with temporal knowledge discovery approaches. With respect to the perception subjectivity issue, advanced techniques are proposed to support user interaction and to effectively model users' perception from feedback at both the image level and the object level.
Abstract:
Fluorescence-enhanced optical imaging is an emerging non-invasive and non-ionizing modality for breast cancer diagnosis. Various optical imaging systems are currently available, although most of them are limited by bulky instrumentation or by their inability to flexibly image different tissue volumes and shapes. Hand-held optical imaging systems are a recent development valued for their improved portability, but they are currently limited to surface mapping. Herein, a novel optical imager, consisting primarily of a hand-held probe and a gain-modulated intensified charge-coupled device (ICCD) detector, is developed for both surface and tomographic breast imaging. The unique features of this hand-held probe based optical imager are its ability to: (i) image large tissue areas (5×10 sq. cm) in a single scan, (ii) reduce overall imaging time using a unique measurement geometry, and (iii) perform tomographic imaging for three-dimensional (3-D) tumor localization. Frequency-domain experimental phantom studies have been performed on slab geometries (650 ml) under different target depths (1-2.5 cm), target volumes (0.45, 0.23 and 0.10 cc), fluorescence absorption contrast ratios (1:0, and 1000:1 to 5:1), and numbers of targets (up to 3), using Indocyanine Green (ICG) as the fluorescence contrast agent. An approximate extended Kalman filter based inverse algorithm has been adapted for 3-D tomographic reconstruction. Single fluorescence targets were reconstructed when located: (i) up to 2.5 cm deep (at 1:0 contrast ratio) and 1.5 cm deep (up to 10:1 contrast ratio) for the 0.45 cc target; and (ii) 1.5 cm deep for targets as small as 0.10 cc at 1:0 contrast ratio. In the case of multiple targets, two targets as close as 0.7 cm apart were tomographically resolved when located 1.5 cm deep. Performing multi-projection (here dual-projection) tomographic imaging with a priori target information from surface images improved target depth recovery over single-projection imaging. From a total of 98 experimental phantom studies, the sensitivity and specificity of the imager were estimated as 81-86% and 43-50%, respectively. With 3-D tomographic imaging successfully demonstrated for the first time using a hand-held optical imager, the clinical translation of this technology is promising upon further experimental validation in in-vitro and in-vivo studies.
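The abstract does not define the reported sensitivity and specificity; presumably the standard definitions over true/false positives (TP, FP) and true/false negatives (TN, FN) apply:

$$\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}.$$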
Abstract:
The aim of this research was to demonstrate a high-current and stable field emission (FE) source based on carbon nanotubes (CNTs) and an electron-multiplier microchannel plate (MCP), and to design efficient field emitters. In recent years various CNT based FE devices have been demonstrated, including field emission displays, x-ray sources and many more. However, to use CNTs as a source in high-power microwave (HPM) devices, higher and stable currents in the range of a few milliamperes to amperes are required. To achieve such high currents we developed a novel technique of introducing an MCP between the CNT cathode and the anode. An MCP is an array of electron multipliers; it operates by avalanche multiplication of secondary electrons, which are generated when electrons strike the channel walls of the MCP. The FE current from CNTs is enhanced by this avalanche multiplication of secondary electrons, and in addition the MCP protects the CNTs from irreversible damage during vacuum arcing. A conventional MCP is not suitable for this purpose due to the low secondary-emission properties of its materials. To achieve higher and stable currents we designed and fabricated a unique ceramic MCP consisting of high-SEY materials. The MCP was fabricated using optimum design parameters, including channel dimensions and material properties obtained from charged particle optics (CPO) simulation. The Child-Langmuir law, which gives the optimum current density obtainable from an electron source, was taken into account during the system design and experiments. Each MCP channel consisted of MgO-coated CNTs, chosen from various material systems for its very high SEY. With the MCP inserted between the CNT cathode and the anode, a stable and higher emission current was achieved, ∼25 times higher than without the MCP. A brighter emission image was also observed due to the enhanced emission current. The obtained results are a significant technological advance, and this research holds promise for electron sources in a new generation of lightweight, efficient and compact microwave devices for telecommunications in satellites or space applications. As part of this work, novel emitters consisting of multistage geometry with improved FE properties were also developed.
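For reference, the Child-Langmuir law in its standard planar-diode form (the abstract does not state which geometry was assumed) bounds the space-charge-limited current density as

$$J = \frac{4\varepsilon_0}{9}\sqrt{\frac{2e}{m_e}}\,\frac{V^{3/2}}{d^{2}},$$

where $V$ is the anode-cathode voltage, $d$ the gap distance, $e$ the electron charge and $m_e$ the electron mass.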
Abstract:
Today, most conventional surveillance networks are based on analog systems, which impose many constraints, such as manpower and high-bandwidth requirements; these constraints have become a barrier to the development of today's surveillance networks. This dissertation describes a digital surveillance network architecture based on the H.264 coding/decoding (CODEC) System-on-a-Chip (SoC) platform. The proposed digital surveillance network architecture comprises three major layers: the software layer, the hardware layer, and the network layer. The following outlines the contributions to the proposed digital surveillance network architecture. (1) We implement an object recognition system and an object categorization system on the software layer by applying several Digital Image Processing (DIP) algorithms. (2) For a better compression ratio and higher-quality video transfer, we implement two new modules on the hardware layer of the H.264 CODEC core, i.e., a background elimination module and a Directional Discrete Cosine Transform (DDCT) module. (3) Furthermore, we introduce a Digital Signal Processor (DSP) sub-system on the main bus of the H.264 SoC platform as the major hardware support for our software architecture. We thus combine the software and hardware platforms into an intelligent surveillance node. Lab results show that the proposed surveillance node can dramatically save network resources such as bandwidth and storage capacity.
Abstract:
This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean square error function as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero; consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after the update. To evaluate the performance of a training system, three essential factors must be considered, listed from highest to lowest priority: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third-order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two different combinations of training methods are needed for recognizing characters of the two cases. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. The testing database consists of 20,000 handwritten characters, 10,000 for each case; recognizing 10,000 handwritten characters in the testing phase required 8.5 seconds of processing time.
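The abstract does not write the loss out explicitly; a minimal sketch of a mean quartic error and its gradient with respect to the network output (the scaling constants are an assumption, not the dissertation's exact formulation) could look like this:

```python
import numpy as np

def mean_quartic_error(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """Mean quartic error: like MSE, but with a fourth power."""
    return float(np.mean((y_pred - y_true) ** 4))

def mqe_grad(y_pred: np.ndarray, y_true: np.ndarray) -> np.ndarray:
    # Gradient of mean((y - t)^4) w.r.t. y: the cubic term penalises
    # large errors far more sharply than MSE's linear term does.
    return 4.0 * (y_pred - y_true) ** 3 / y_pred.size

y, t = np.array([0.9, 0.2]), np.array([1.0, 0.0])
print(mean_quartic_error(y, t), mqe_grad(y, t))
```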
Abstract:
Given the importance of color processing in computer vision and computer graphics, estimating and rendering the illumination and spectral reflectance of image scenes is important to advance a large class of applications such as scene reconstruction, rendering, surface segmentation, object recognition, and reflectance estimation. Consequently, this dissertation proposes effective methods for reflection component separation and rendering in single scene images. Based on the dichromatic reflectance model, a novel decomposition technique, named the Mean-Shift Decomposition (MSD) method, is introduced to separate the specular from the diffuse reflectance components. This technique provides direct access to surface shape information through diffuse-shading pixel isolation. More importantly, the process does not require any local color segmentation, which distinguishes it from traditional methods that operate by aggregating color information along each image plane. Exploiting the merits of the MSD method, a scene illumination rendering technique is designed to estimate the relative contributions of the specular reflectance attributes of a scene image. The targeted image feature subset provides direct access to the surface illumination information, while a newly introduced efficient rendering method reshapes the dynamic-range distribution of the specular reflectance components over each image color channel. This image enhancement technique renders the scene illumination reflection effectively without altering the scene's diffuse surface attributes, contributing to realistic rendering effects. As an ancillary contribution, an effective color constancy algorithm based on the dichromatic reflectance model was also developed. This algorithm selects image highlights in order to extract the prominent surface reflectance that reproduces the exact illumination chromaticity; the evaluation uses a novel voting scheme based on histogram analysis. For each of the three main contributions, empirical evaluations were performed on synthetic and real-world image scenes taken from three different color image datasets. The experimental results show over 90% accuracy in illumination estimation, contributing to near-real-world illumination rendering effects.
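The dichromatic reflectance model underlying these contributions is not written out in the abstract; in Shafer's standard formulation with the neutral-interface assumption, the color signal at a pixel decomposes into diffuse and specular terms as

$$I(\lambda) = w_d\,S(\lambda)\,E(\lambda) + w_s\,E(\lambda),$$

where $E(\lambda)$ is the illuminant spectrum, $S(\lambda)$ the diffuse surface reflectance, and $w_d$, $w_s$ geometry-dependent diffuse and specular weights. The dissertation's exact variant may differ.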
Abstract:
This research identifies the value of a community-run NGO (nongovernmental organization) in its work to advocate for a more positive image of Rio's favelas (urban slums). Basic interpretive inquiry is used to analyze interviews with the principal spokesperson for the organization. Recommendations for further research are made.
Abstract:
The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the data that needs to be processed. With expanding resolutions and evolving compression, high performance is needed together with flexible architectures that allow quick upgradability. Technology continues to advance in image display resolution, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, but with tradeoffs among processing performance (achieving specified frame rates on large image data sets), power, and cost constraints; new architectures are therefore needed to keep pace with the fast innovations in video and imaging. This dissertation contains dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The following outlines the contributions of the dissertation. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles); for low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe-distance factor and develop an algorithm for detecting occlusion occurrences during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure, and the method is analyzed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulated neural network for the high speed and low memory storage requirements afforded by the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gesture sets involved vary across applications, so it is highly essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
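The RAMT details are not given in the abstract; a minimal software sketch of running-average background subtraction with a global mean-derived threshold, in the spirit of the description (the update rate `alpha` and the threshold rule are assumptions, not the dissertation's exact design), could look like this:

```python
import numpy as np

def detect_foreground(frame: np.ndarray, background: np.ndarray,
                      alpha: float = 0.05):
    """One frame of background subtraction with a global threshold."""
    diff = np.abs(frame.astype(float) - background)
    threshold = diff.mean() * 2.0        # global threshold from the frame mean
    mask = diff > threshold              # foreground pixels
    # Update the running-average background only where no target was detected
    background = np.where(mask, background,
                          (1 - alpha) * background + alpha * frame)
    return mask, background

bg = np.zeros((4, 4))
frame = np.zeros((4, 4)); frame[1:3, 1:3] = 255.0
mask, bg = detect_foreground(frame, bg)
print(mask.sum())  # -> 4 foreground pixels
```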