953 results for Artificial Intelligence, Constraint Programming, set variables, representation
Abstract:
Template matching is a technique widely used for finding patterns in digital images. A good template matching algorithm should be able to detect template instances that have undergone geometric transformations. In this paper, we propose a grayscale template matching algorithm named Ciratefi, invariant to rotation, scaling, translation, brightness and contrast, together with its extension to color images. We introduce CSSIM (color structural similarity) for comparing the similarity of two color image patches and use it in our algorithm. We also describe a scheme to determine automatically the appropriate parameters of our algorithm, and use a pyramidal structure to improve scale invariance. We conducted several experiments comparing grayscale and color Ciratefi with the SIFT, C-color-SIFT and EasyMatch algorithms in many different situations. The results attest that grayscale and color Ciratefi are more accurate than the compared algorithms and that color Ciratefi outperforms grayscale Ciratefi most of the time. However, Ciratefi is slower than the other algorithms.
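For illustration, here is a minimal sketch (assuming NumPy) of the brightness/contrast-invariant patch comparison that this family of matchers builds on; Ciratefi's actual circular/radial sampling stages and the proposed CSSIM measure are not reproduced here:

```python
import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Correlation coefficient between two equal-sized grayscale patches.

    Invariant to brightness (additive) and contrast (multiplicative)
    changes, which is the invariance the abstract refers to.
    """
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # flat patch: correlation undefined, treat as no match
    return float(np.dot(a, b) / denom)

# A brightness/contrast change beta*T + gamma leaves the score at 1.
T = np.random.rand(16, 16)
assert abs(ncc(T, 2.0 * T + 10.0) - 1.0) < 1e-9
```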
Abstract:
The classical approach for acoustic imaging consists of beamforming, and produces the source distribution of interest convolved with the array point spread function. This convolution smears the image of interest, significantly reducing its effective resolution. Deconvolution methods have been proposed to enhance acoustic images and have produced significant improvements. Other proposals involve covariance fitting techniques, which avoid deconvolution altogether. However, in their traditional presentation, these enhanced reconstruction methods have very high computational costs, mostly because they have no means of efficiently transforming back and forth between a hypothetical image and the measured data. In this paper, we propose the Kronecker Array Transform (KAT), a fast separable transform for array imaging applications. Under the assumption of a separable array, it enables the acceleration of imaging techniques by several orders of magnitude with respect to the fastest previously available methods, and enables the use of state-of-the-art regularized least-squares solvers. Using the KAT, one can reconstruct images with higher resolutions than was previously possible and use more accurate reconstruction techniques, opening new and exciting possibilities for acoustic imaging.
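For context, a short sketch (assuming NumPy) of the generic Kronecker identity that makes separable transforms fast; the KAT itself is determined by the array geometry and is not reproduced here:

```python
import numpy as np

# For a separable operator A kron B, the identity
#   (A kron B) vec(X) = vec(B @ X @ A.T)   (vec = column-major stacking)
# replaces one (m*n) x (p*q) matrix-vector product with two small matrix
# products -- the generic mechanism behind fast separable transforms.
rng = np.random.default_rng(0)
m, n, p, q = 8, 8, 32, 32
A, B = rng.standard_normal((m, p)), rng.standard_normal((n, q))
X = rng.standard_normal((q, p))

slow = np.kron(A, B) @ X.ravel(order="F")   # explicit Kronecker matrix
fast = (B @ X @ A.T).ravel(order="F")       # two small products instead
assert np.allclose(slow, fast)
```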
Abstract:
We consider brightness/contrast-invariant and rotation-discriminating template matching that searches an image to analyze, A, for a query image, Q. We propose to use the complex coefficients of the discrete Fourier transform of the radial projections to compute new rotation-invariant local features. These coefficients can be efficiently obtained via the FFT. We classify templates as "stable" or "unstable" and argue that any local feature-based template matching may fail to find unstable templates. We extract several stable sub-templates of Q and find them in A by comparing the features. The matchings of the sub-templates are combined using the Hough transform. As the features of A are computed only once, the algorithm can quickly find many different sub-templates in A, making it suitable for finding many query images in A, multi-scale searching, and partial occlusion-robust template matching.
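A minimal sketch (assuming NumPy) of the underlying idea: rotating the image cyclically shifts the vector of radial projections, so the DFT magnitudes of that vector are rotation-invariant. The paper's sampling and use of the complex coefficients are more careful than this nearest-neighbor version:

```python
import numpy as np

def radial_projections(img: np.ndarray, cx: float, cy: float,
                       radius: int, n_angles: int = 36) -> np.ndarray:
    """Mean gray level along each of n_angles rays from (cx, cy).

    Rotating the image by 360/n_angles degrees cyclically shifts this
    vector, so the DFT magnitude of the vector is rotation-invariant.
    """
    h, w = img.shape
    proj = np.zeros(n_angles)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for k, theta in enumerate(angles):
        rs = np.arange(1, radius + 1)
        xs = np.clip(np.round(cx + rs * np.cos(theta)).astype(int), 0, w - 1)
        ys = np.clip(np.round(cy + rs * np.sin(theta)).astype(int), 0, h - 1)
        proj[k] = img[ys, xs].mean()
    return proj

img = np.random.rand(64, 64)
feature = np.abs(np.fft.fft(radial_projections(img, 32, 32, radius=15)))
```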
Abstract:
In Part I ["Fast Transforms for Acoustic Imaging - Part I: Theory," IEEE Transactions on Image Processing], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix that would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming, and 2-D convolutions, and show how these results can be used to accelerate classical and state-of-the-art array imaging algorithms. We also propose using the KAT to accelerate general-purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while having the computational advantages of the KAT.
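A sketch of how a fast separable forward operator can be plugged into a general-purpose regularized least-squares solver, here with SciPy's lsqr and a random stand-in for the actual KAT operator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

# Stand-in separable forward model y = (A kron B) x, applied fast via
# the vec identity; the true KAT operator is defined by the array.
rng = np.random.default_rng(1)
m, n, p, q = 16, 16, 24, 24
A, B = rng.standard_normal((m, p)), rng.standard_normal((n, q))

def matvec(x):
    return (B @ x.reshape(q, p, order="F") @ A.T).ravel(order="F")

def rmatvec(y):
    # Adjoint: (A kron B)^T vec(Y) = vec(B.T @ Y @ A)
    return (B.T @ y.reshape(n, m, order="F") @ A).ravel(order="F")

op = LinearOperator((m * n, p * q), matvec=matvec, rmatvec=rmatvec)
x_true = rng.standard_normal(p * q)
y = op.matvec(x_true)
x_hat = lsqr(op, y, damp=1e-3)[0]   # damped (Tikhonov) least squares
```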
Abstract:
The discrete-time neural network proposed by Hopfield can be used for storing and recognizing binary patterns. Here, we investigate how the performance of this network on a pattern recognition task is altered when neurons are removed and the weights of the synapses corresponding to these deleted neurons are divided among the remaining synapses. Five distinct ways of distributing such weights are evaluated. We speculate on how this numerical work on synaptic compensation may help guide experimental studies of memory rehabilitation interventions.
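A minimal sketch (assuming NumPy) of Hebbian storage and recall with neuron deletion; the uniform redistribution below is one illustrative scheme, not necessarily one of the five evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: W = (1/N) * sum of outer products, zero diagonal.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(W, x, steps=10):
    """Synchronous sign-threshold updates."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

# Delete some neurons and redistribute each row's removed weight mass
# uniformly over the surviving synapses of that row.
dead = rng.choice(N, size=20, replace=False)
alive = np.setdiff1d(np.arange(N), dead)
lost = W[np.ix_(alive, dead)].sum(axis=1)     # removed weight per row
W2 = W[np.ix_(alive, alive)].copy()
W2 += lost[:, None] / (len(alive) - 1)
np.fill_diagonal(W2, 0.0)

# Recall a stored pattern from a 10%-corrupted cue on the pruned network.
cue = patterns[0, alive] * rng.choice([1, -1], size=len(alive), p=[0.9, 0.1])
overlap = (recall(W2, cue) @ patterns[0, alive]) / len(alive)
print(f"overlap with stored pattern: {overlap:.2f}")
```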
Abstract:
SKAN: Skin Scanner - System for Skin Cancer Detection Using Adaptive Techniques - combines computer engineering concepts with areas such as dermatology and oncology. Its objective is to use image recognition to distinguish images of skin cancer, specifically melanoma, from images showing only common spots or other types of skin disease. This work makes use of the ABCDE visual rule, often used by dermatologists for melanoma identification, to define which characteristics the software analyzes. It then applies various algorithms and techniques, including an ellipse-fitting algorithm, to extract and measure these characteristics and to decide whether or not the spot is a melanoma. The results achieved are presented with special focus on the adaptive decision-making and its effect on the diagnosis. Finally, other applications of the software and its algorithms are presented.
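A hedged sketch (assuming OpenCV 4 and NumPy) of deriving simple ABCDE-style shape cues from a fitted ellipse; the feature names and formulas below are illustrative, not SKAN's actual ones:

```python
import cv2
import numpy as np

def lesion_ellipse_features(mask: np.ndarray) -> dict:
    """Fit an ellipse to the largest contour of a binary lesion mask and
    derive simple shape cues in the spirit of the ABCDE rule.
    Illustrative only; SKAN's features and thresholds differ.
    """
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(contours, key=cv2.contourArea)   # fitEllipse needs >= 5 points
    (cx, cy), axes, angle = cv2.fitEllipse(cnt)
    minor, major = sorted(axes)                # full axis lengths in pixels
    area = cv2.contourArea(cnt)
    ellipse_area = np.pi * (major / 2.0) * (minor / 2.0)
    return {
        "asymmetry_cue": 1.0 - minor / major,          # A: elongation
        "border_cue": abs(1.0 - area / ellipse_area),  # B: ellipse deviation
        "diameter_px": major,                          # D: size in pixels
    }
```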
Abstract:
This paper provides a computational framework, based on Defeasible Logic, to capture some aspects of institutional agency. Our background is the Kanger-Lindahl-Pörn account of organised interaction, which describes this interaction within a multi-modal logical setting. This work focuses in particular on the notion of the counts-as link and on those of attempt and of personal and direct action to realise states of affairs. We show how standard Defeasible Logic can be extended to represent these concepts: the resulting system preserves some basic properties commonly attributed to them. In addition, the framework enjoys nice computational properties: the extension of any theory can be computed in time linear in the size of the theory.
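A toy sketch of defeasible inference with a superiority relation, the mechanism such frameworks build on; strict rules, defeaters, rule chaining and the full proof theory are omitted, and the rule names are invented for illustration:

```python
from typing import NamedTuple

class Rule(NamedTuple):
    name: str
    body: frozenset   # antecedent literals, checked against given facts
    head: str         # consequent literal, "p" or "~p"

def neg(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def defeasibly(q: str, facts: set, rules: list, superior: set) -> bool:
    """q is defeasibly supported if some rule for q fires and every
    firing rule for ~q is beaten by a superior firing rule for q."""
    fires = lambda r: r.body <= facts
    pro = [r for r in rules if r.head == q and fires(r)]
    con = [r for r in rules if r.head == neg(q) and fires(r)]
    return bool(pro) and all(
        any((p.name, c.name) in superior for p in pro) for c in con)

rules = [Rule("r1", frozenset({"attempt"}), "realised"),
         Rule("r2", frozenset({"prevented"}), "~realised")]
superior = {("r1", "r2")}   # r1 overrides r2 on conflict
print(defeasibly("realised", {"attempt", "prevented"}, rules, superior))  # True
```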
Abstract:
This paper reports on a system for automated agent negotiation, based on a formal and executable approach to capture the behavior of parties involved in a negotiation. It uses the JADE agent framework, and its major distinctive feature is the use of declarative negotiation strategies. The negotiation strategies are expressed in a declarative rules language, defeasible logic, and are applied using the implemented system DR-DEVICE. The key ideas and the overall system architecture are described, and a particular negotiation case is presented in detail.
Abstract:
In this paper we follow the BOID (Belief, Obligation, Intention, Desire) architecture to describe agents and agent types in Defeasible Logic. We argue, in particular, that the introduction of obligations can provide a new reading of the concepts of intention and intentionality. Then we examine the notion of social agent (i.e., an agent where obligations prevail over intentions) and discuss some computational and philosophical issues related to it. We show that the notion of social agent either requires more complex computations or has some philosophical drawbacks.
Abstract:
This paper proposes some variants of Temporal Defeasible Logic (TDL) for reasoning about normative modifications. These variants make it possible to distinguish cases in which, for example, a modification at some time changes a legal rule while the rule's earlier conclusions persist afterwards, from cases in which those conclusions are blocked as well.
Abstract:
Argumentation is modelled as a game in which the payoffs are measured in terms of the probability that the claimed conclusion is, or is not, defeasibly provable, given a history of arguments that have actually been exchanged and the probabilities of the factual premises. The probability of a conclusion is calculated using a standard variant of Defeasible Logic in combination with standard probability calculus. A new element of the present approach is that the exchange of arguments is analysed with game-theoretical tools, yielding a prescriptive and, to some extent, even predictive account of the actual course of play. A brief comparison with existing argument-based dialogue approaches confirms that such a prescriptive account of actual argumentation has been largely lacking in the approaches proposed so far.
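A small sketch of the probabilistic side: the probability that a conclusion is provable, computed by enumerating truth assignments of independent factual premises and weighting each world by its probability. The provability test below is a stand-in for the Defeasible Logic proof procedure, and the premise names and probabilities are invented:

```python
from itertools import product

premises = {"e1": 0.8, "e2": 0.6}   # premise -> probability, assumed independent

def provable(world: set) -> bool:
    # Stand-in: the conclusion is provable whenever either piece of
    # evidence holds; the paper uses a Defeasible Logic proof test here.
    return "e1" in world or "e2" in world

names = list(premises)
prob = 0.0
for bits in product([True, False], repeat=len(names)):
    world = {n for n, b in zip(names, bits) if b}
    weight = 1.0
    for n, b in zip(names, bits):
        weight *= premises[n] if b else 1 - premises[n]
    if provable(world):
        prob += weight
print(f"P(conclusion provable) = {prob:.3f}")   # 1 - 0.2*0.4 = 0.920
```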
Abstract:
We explore the feasibility of the computationally oriented institutional agency framework proposed by Governatori and Rotolo by testing it against an industrial-strength scenario. In particular, we show how to encode in defeasible logic the dispute resolution policy described in Article 67 of FIDIC.
Abstract:
This article extends Defeasible Logic to deal with the contextual deliberation process of cognitive agents. First, we introduce meta-rules to reason with rules. Meta-rules are rules whose consequents are themselves rules for motivational components, such as obligations, intentions and desires; in other words, they include nested rules. Second, we introduce explicit preferences among rules, which handle the complex structures in which nested rules can be involved.
Abstract:
In this paper, we describe a model of the human visual system (HVS) based on the wavelet transform. This model is largely based on a previously proposed model, but has a number of modifications that make it more amenable to potential integration into a wavelet-based image compression scheme. These modifications include the use of a separable wavelet transform instead of the cortex transform, the application of a wavelet contrast sensitivity function (CSF), and a simplified definition of subband contrast that allows us to predict noise visibility directly from wavelet coefficients. Initially, we outline the luminance, frequency, and masking sensitivities of the HVS and discuss how these can be incorporated into the wavelet transform. We then outline a number of limitations of the wavelet transform as a model of the HVS, namely the lack of translational invariance and poor orientation sensitivity. In order to investigate the efficacy of this wavelet-based model, a wavelet visible difference predictor (WVDP) is described. The WVDP is then used to predict visible differences between an original and a compressed (or noisy) image. Results are presented to emphasize the limitations of commonly used measures of image quality and to demonstrate the performance of the WVDP. The paper concludes with suggestions on how the WVDP can be used to determine a visually optimal quantization strategy for wavelet coefficients and to produce a quantitative measure of image quality.
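A rough sketch (assuming PyWavelets and NumPy) of the core idea of weighting per-subband coefficient errors by contrast-sensitivity values to predict visibility directly from wavelet coefficients; the sensitivities below are placeholders, not the calibrated wavelet CSF of the WVDP, and masking is omitted:

```python
import numpy as np
import pywt

def visibility_score(original: np.ndarray, distorted: np.ndarray,
                     wavelet: str = "db4", levels: int = 3) -> float:
    """Sum per-subband coefficient differences, each weighted by an
    assumed contrast-sensitivity value for its decomposition level."""
    csf = {1: 0.4, 2: 1.0, 3: 0.8}   # placeholder sensitivities per level
    c0 = pywt.wavedec2(original, wavelet, level=levels)
    c1 = pywt.wavedec2(distorted, wavelet, level=levels)
    score = 0.0
    for i in range(1, levels + 1):            # detail tuples, coarse to fine
        level_idx = levels - i + 1            # finest subbands get index 1
        for b0, b1 in zip(c0[i], c1[i]):      # (H, V, D) subbands
            score += csf[level_idx] * np.abs(b0 - b1).sum()
    return score

img = np.random.rand(64, 64)
noisy = img + 0.05 * np.random.randn(64, 64)
print(visibility_score(img, noisy))
```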
Abstract:
The new technologies for Knowledge Discovery from Databases (KDD) and data mining promise to bring new insights into the large and growing volume of biological data. KDD technology is complementary to laboratory experimentation and helps speed up biological research. This article contains an introduction to KDD, a review of data mining tools, and a survey of their biological applications. We discuss the domain concepts related to biological data and databases, as well as current KDD and data mining developments in biology.