987 results for Euclidean distance


Relevance: 60.00%

Abstract:

In this paper, a composite descriptor for shape retrieval is proposed. The descriptor is derived from Generic Fourier Descriptors (GFD) of the shape region and of the shape contour, and is used for indexing and retrieval of shapes. The difference between two images is computed as the Euclidean distance between their composite descriptors. Experiments are performed to test the effectiveness of the proposed descriptor for retrieval of 2D images, and sets of composite descriptors obtained by assigning different weights to the region component and the contour component are also evaluated. Item S8 within the MPEG-7 Still Images Content Set, which consists of 3621 still images, is used for the experiments. Experimental results show that the proposed descriptor is effective.
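
A minimal sketch of the comparison step, assuming the composite descriptor is a weighted concatenation of the region and contour GFD vectors; the paper evaluates several weightings, but the combination rule and names here are illustrative:

```python
import numpy as np

def composite_descriptor(gfd_region, gfd_contour, w_region=0.5, w_contour=0.5):
    """Weighted concatenation of region and contour GFD vectors; the
    weights correspond to the region/contour weighting the paper
    evaluates, but the combination rule itself is assumed."""
    return np.concatenate([w_region * np.asarray(gfd_region, float),
                           w_contour * np.asarray(gfd_contour, float)])

def image_difference(desc_a, desc_b):
    """Difference between two images: the Euclidean distance between
    their composite descriptors, as stated in the abstract."""
    return float(np.linalg.norm(desc_a - desc_b))
```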

Relevance: 60.00%

Abstract:

In this paper, we propose a method for indexing and retrieval of images based on the shapes of objects. The concept of connectivity is introduced, and 3D models are used to represent 2D images: each 2D image is first decomposed using connectivity, after which a 3D model is constructed. 3D model descriptors, here spherical harmonics descriptors, are computed for the 3D models and used to represent the underlying 2D shapes. The difference between two images is computed as the Euclidean distance between their descriptors. Experiments are performed to test the effectiveness of spherical harmonics for retrieval of 2D images, using item S8 within the MPEG-7 Still Images Content Set. The proposed method is compared with methods based on principal component analysis (PCA) and generic Fourier descriptors (GFD), and is found to be effective.
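
With descriptors in hand, retrieval reduces to ranking by Euclidean distance in descriptor space; a small sketch, assuming the spherical harmonics descriptors are stacked one per row in a NumPy array (function and argument names are illustrative):

```python
import numpy as np

def retrieve(query_desc, database_descs, top_k=10):
    """Rank database images by the Euclidean distance between their
    spherical harmonics descriptors and the query's (smaller distance
    means more similar); descriptor extraction is outside this sketch.
    database_descs: one descriptor per row."""
    dists = np.linalg.norm(np.asarray(database_descs) - query_desc, axis=1)
    return np.argsort(dists)[:top_k]
```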

Relevance: 60.00%

Abstract:

Improved access to multibeam sonar and underwater video technology is enabling scientists to use spatially explicit, predictive modelling to improve our understanding of marine ecosystems. With the growing number of modelling approaches available, knowledge of the relative performance of different models in the marine environment is required. Habitat suitability of 5 demersal fish taxa in Discovery Bay, south-east Australia, was modelled using 10 presence-only algorithms: BIOCLIM, DOMAIN, ENFA (distance geometric mean [GM], distance harmonic mean [HM], median [M], area-adjusted median [Ma], median + extremum [Me], area-adjusted median + extremum [Mae] and minimum distance [Min]), and MAXENT. Model performance was assessed using kappa and the area under the curve (AUC) of the receiver operating characteristic. The influence of spatial range (area of occupancy) and environmental niche (marginality and tolerance) on modelling performance was also tested. MAXENT generally performed best, followed by the ENFA-GM and -HM, DOMAIN, BIOCLIM, and ENFA-M, -Min, -Ma, -Mae and -Me algorithms. Fish with clearly definable niches (i.e. high marginality) were most accurately modelled. Generally, Euclidean distance to nearest reef, HSI-b (backscatter), rugosity and maximum curvature were the most important variables in determining suitable habitat for the 5 demersal fish taxa investigated. This comparative study encourages ongoing use of presence-only approaches, particularly MAXENT, in modelling suitable habitat for demersal marine fishes.
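
The evaluation described maps onto standard tooling; a sketch assuming presence/background labels and continuous suitability scores are available for testing, with the kappa threshold as an illustrative assumption rather than the study's choice:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

def evaluate_suitability_model(presence, suitability, threshold=0.5):
    """Assess a habitat-suitability model with the two criteria used in
    the study: AUC of the receiver operating characteristic and Cohen's
    kappa. Kappa needs binary predictions, so the continuous scores are
    thresholded; 0.5 is an illustrative cut-off."""
    suitability = np.asarray(suitability, dtype=float)
    auc = roc_auc_score(presence, suitability)
    kappa = cohen_kappa_score(presence, (suitability >= threshold).astype(int))
    return auc, kappa
```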

Relevance: 60.00%

Abstract:

Feature aggregation is a critical technique in content-based image retrieval systems that employ multiple visual features to characterize image content. In this paper, the p-norm is introduced to feature aggregation, providing a framework that unifies various previous feature aggregation schemes, such as linear combination, Euclidean distance, Boolean logic and decision fusion, as instances. Insights into how various aggregation schemes work are discussed through the effects of the model parameters in the unified framework. Experiments show that performance varies across feature aggregation schemes, which necessitates a unified framework in order to optimize retrieval performance according to individual queries and the user's query concept. Experimental results on the IAPR TC-12 ImageCLEF 2006 benchmark collection, which contains over 20,000 photographic images, are presented and discussed.
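
A sketch of the unifying idea: a weighted p-norm over per-feature distances, with the familiar schemes recovered at particular values of p. The exact parameterization in the paper may differ; names and defaults here are illustrative:

```python
import numpy as np

def p_norm_aggregate(distances, weights, p):
    """Aggregate per-feature distances d_i with weights w_i via a
    weighted p-norm: (sum_i w_i * d_i**p)**(1/p).
    p = 1 recovers a linear combination, p = 2 a weighted Euclidean
    distance, and p -> infinity approaches a Boolean AND-like max rule."""
    d = np.asarray(distances, dtype=float)
    w = np.asarray(weights, dtype=float)
    if np.isinf(p):
        return float(np.max(w * d))            # limiting case
    return float(np.sum(w * d**p) ** (1.0 / p))
```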

Relevance: 60.00%

Abstract:

In this paper, a two-stage algorithm for vector quantization based on a self-organizing map (SOM) neural network is proposed. First, a conventional self-organizing map is modified to deal with dead codebook vectors in the learning process and is then used to obtain the codebook distribution structure for a given set of input data. Next, sub-blocks are classified based on this distribution structure using an a priori criterion. The conventional LBG algorithm is then applied to these sub-blocks for data classification, with initial values obtained via the SOM. Finally, extensive simulations illustrate that the proposed two-stage algorithm is very effective.
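
A rough sketch of the two-stage idea, using the third-party MiniSom package for stage one and scikit-learn's k-means (a close relative of LBG) for stage two; the paper's modified SOM and its dead-codeword handling are not reproduced here:

```python
import numpy as np
from minisom import MiniSom          # third-party SOM implementation
from sklearn.cluster import KMeans   # k-means stands in for LBG

def two_stage_vq(blocks, grid=(4, 4), som_iters=5000):
    """Illustrative two-stage scheme: a SOM supplies the initial
    codebook, then an LBG-style refinement polishes it. blocks: one
    input vector per row; grid and iteration count are assumptions."""
    som = MiniSom(grid[0], grid[1], blocks.shape[1],
                  sigma=1.0, learning_rate=0.5)
    som.train_random(blocks, som_iters)
    init_codebook = som.get_weights().reshape(-1, blocks.shape[1])
    km = KMeans(n_clusters=init_codebook.shape[0],
                init=init_codebook, n_init=1)
    labels = km.fit_predict(blocks)
    return km.cluster_centers_, labels
```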

Relevance: 60.00%

Abstract:

The self-organising map is a well-established unsupervised learning technique which is able to form sophisticated representations of an input data set. However, conventional Self Organising Map (SOM) algorithms are limited to the production of topological maps, that is, maps where the distances between points on the map have a direct relationship to the Euclidean distances between the training vectors corresponding to those points.

It would be desirable to be able to create maps which form clusters on primitive attributes other than Euclidean distance; for example, clusters based upon orientation or shape. Such maps could provide a novel approach to pattern recognition tasks by providing a new method to associate groups of data.

In this paper, it is shown that the type of map produced by SOM algorithms is a direct consequence of the lateral connection strategy employed. Given this knowledge, a technique is required to establish the feasibility of using an alternative lateral connection strategy. Such a technique is presented. Using this technique, it is possible to rule out lateral connection strategies that will not produce output states useful to the organisation process. This technique is demonstrated using conventional Laplacian interconnection as well as a number of novel interconnection strategies.
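
For concreteness, a conventional SOM update with a Gaussian lateral neighbourhood, the kind of lateral connection strategy the paper argues determines the map type; swapping the neighbourhood function `h` is where an alternative strategy (e.g. Laplacian) would enter. A minimal sketch with illustrative names:

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One conventional SOM update: the best matching unit (BMU) is
    found by Euclidean distance, and a Gaussian lateral neighbourhood
    determines how strongly each node follows the input.
    weights: (rows, cols, dim) grid of codebook vectors."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)          # Euclidean match
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    rr, cc = np.indices((rows, cols))
    grid_d2 = (rr - bmu[0])**2 + (cc - bmu[1])**2
    h = np.exp(-grid_d2 / (2 * sigma**2))                # lateral strategy
    weights += lr * h[..., None] * (x - weights)
    return weights
```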

Relevance: 60.00%

Abstract:

This paper presents a new spectral clustering method called correlation preserving indexing (CPI), which is performed in the correlation similarity measure space. In this framework, the documents are projected into a low-dimensional semantic space in which the correlations between the documents in the local patches are maximized while the correlations between the documents outside these patches are minimized simultaneously. Since the intrinsic geometrical structure of the document space is often embedded in the similarities between the documents, correlation as a similarity measure is more suitable for detecting the intrinsic geometrical structure of the document space than Euclidean distance. Consequently, the proposed CPI method can effectively discover the intrinsic structures embedded in high-dimensional document space. The effectiveness of the new method is demonstrated by extensive experiments conducted on various data sets and by comparison with existing document clustering methods.
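
The contrast the paper draws can be made concrete: correlation compares the pattern of two document vectors after mean-centring, whereas Euclidean distance compares their absolute positions. A small sketch, assuming non-constant vectors:

```python
import numpy as np

def correlation_similarity(doc_a, doc_b):
    """Pearson correlation between two document vectors, i.e. the
    cosine of the mean-centred vectors. Unlike Euclidean distance,
    it depends only on the pattern of the term weights, not on
    their absolute magnitudes."""
    a = np.asarray(doc_a, float)
    b = np.asarray(doc_b, float)
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```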

Relevance: 60.00%

Abstract:

The growing self-organizing map (GSOM) has been characterized as a visualization application for knowledge discovery which outshines the traditional self-organizing map (SOM) due to its dynamic structure, in which nodes can grow based on the input data. GSOM is utilized as a visualization tool in this paper to cluster fMRI finger-tapping and non-tapping data, demonstrating the visualization capability to distinguish between tapping and non-tapping. A unique feature of GSOM is a parameter called the spread factor, whose function is to control the spread of the GSOM map. By setting different levels of the spread factor, different granularities of regions of interest within tapping or non-tapping images can be visualized and analyzed. A Euclidean distance-based similarity calculation is used to quantify the visualized difference between tapping and non-tapping images. Once the differences are identified, the spread factor is used to generate a more detailed view of those regions to provide a better visualization of the brain regions.
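
The spread factor usually enters GSOM through its growth threshold; a sketch of the standard formulation from the GSOM literature, which this paper is assumed to follow:

```python
import numpy as np

def growth_threshold(data_dim, spread_factor):
    """Growth threshold as usually defined for GSOM:
    GT = -data_dim * ln(spread_factor). A spread factor close to 0
    gives a high threshold and a coarse map; one close to 1 lets
    nodes grow freely, giving the finer views the paper exploits."""
    return -data_dim * np.log(spread_factor)
```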

Relevance: 60.00%

Abstract:

Failure mode and effect analysis (FMEA) is a popular safety and reliability analysis tool for examining potential failures of products, processes, designs, or services in a wide range of industries. While FMEA is a popular tool, the limitations of the traditional Risk Priority Number (RPN) model in FMEA have been highlighted in the literature. Even though many alternatives to the traditional RPN model have been proposed, there are few investigations on the use of clustering techniques in FMEA. The main aim of this paper was to examine the use of a new Euclidean distance-based similarity measure and an incremental-learning clustering model, i.e., a fuzzy adaptive resonance theory neural network, for similarity analysis and clustering of failure modes in FMEA, thereby allowing the failure modes to be analyzed, visualized, and clustered. In this paper, the concept of a risk interval encompassing a group of failure modes is investigated. Besides that, a new approach to analyze the risk ordering of different failure groups is introduced. These proposed methods are evaluated using a case study related to the edible bird nest industry in Sarawak, Malaysia. In short, the contributions of this paper are threefold: (1) a new Euclidean distance-based similarity measure, (2) a new risk interval measure for a group of failure modes, and (3) a new analysis of risk ordering of different failure groups.
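
A toy version of a Euclidean distance-based similarity between failure modes, assuming each mode is described by severity, occurrence, and detection ratings; the mapping from distance to similarity is an assumption, not the paper's definition:

```python
import numpy as np

def failure_mode_similarity(fm_a, fm_b, scale=10):
    """Toy Euclidean distance-based similarity between two failure
    modes given as (severity, occurrence, detection) ratings in
    [1, scale]. The distance is normalized by the largest distance
    the rating range allows and inverted, so 1.0 means identical."""
    a = np.asarray(fm_a, float)
    b = np.asarray(fm_b, float)
    d_max = (scale - 1) * np.sqrt(len(a))
    return 1.0 - float(np.linalg.norm(a - b)) / d_max
```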

Relevance: 60.00%

Abstract:

With the increasing use of location-based services, location privacy has recently raised serious concerns. To protect a user from being identified, a cloaked spatial region containing the user's k-1 nearest neighbors is used in place of the accurate position. In this paper, we consider location-aware applications in which services differ among regions. To search for nearest neighbors, we define a novel distance measurement that combines semantic distance and Euclidean distance to address the privacy-preserving issue in such applications. We also propose an algorithm, kNNH, to implement the proposed method. The experimental results further suggest that the proposed distance metric and algorithm successfully retain the utility of the location services while preserving users' privacy.
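
The abstract does not give the combination formula, so the sketch below assumes a simple convex combination of the two distances; `combined_distance` and `alpha` are illustrative names:

```python
import math

def combined_distance(p, q, semantic_dist, alpha=0.5):
    """Assumed convex combination of the semantic distance between two
    users' regions and the Euclidean distance between their positions
    p and q; alpha balances the two components. The actual formula in
    the paper may differ, so this is only an illustration."""
    return alpha * semantic_dist + (1.0 - alpha) * math.dist(p, q)
```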

Relevance: 60.00%

Abstract:

Sensing coverage is a fundamental design problem in wireless sensor networks (WSNs), because there is always a possibility that the sensor nodes will function incorrectly due to a number of reasons, such as failure, power, or noise instability, which negatively influences the coverage of the WSN. In order to address this problem, we propose a fuzzy-based self-healing coverage scheme for randomly deployed mobile sensor nodes. The proposed scheme determines the uncovered sensing areas and then selects the best mobile nodes to be moved to minimize the coverage hole. In addition, it distributes the sensor nodes uniformly, considering the Euclidean distance and coverage redundancy among the mobile nodes. We have performed an extensive performance analysis of the proposed scheme; the results show that it outperforms existing approaches.
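
A toy scoring rule in the spirit of the scheme: favour mobile nodes that are near the hole and redundant where they currently stand. The paper applies fuzzy inference over such inputs rather than a fixed formula, so the score below is purely illustrative:

```python
import math

def select_healing_node(hole_center, node_positions, redundancy):
    """Pick the mobile node to relocate into a coverage hole: prefer
    nodes with high coverage redundancy at their current spot and a
    short Euclidean move to the hole. The fixed score stands in for
    the paper's fuzzy logic over these inputs."""
    def score(i):
        move = math.dist(hole_center, node_positions[i])
        return redundancy[i] / (1.0 + move)
    return max(range(len(node_positions)), key=score)
```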

Relevance: 60.00%

Abstract:

The methodology for selecting the individual numerical scale and prioritization method has recently been presented and justified in the analytic hierarchy process (AHP). In this study, we further propose a novel AHP-group decision making (GDM) model in a local context (a single criterion), based on the individual selection of the numerical scale and prioritization method. The resolution framework of the AHP-GDM with individual numerical scales and prioritization methods is first proposed. Then, based on the linguistic Euclidean distance (LED) and linguistic minimum violations (LMV), a novel consensus measure is defined so that the consensus degree among decision makers who use different numerical scales and prioritization methods can be analyzed. Next, a consensus reaching model is proposed to help decision makers improve their consensus degree; in this model, LED-based and LMV-based consensus rules are proposed and used. Finally, a new individual consistency index and its properties are proposed for the use of individual numerical scales and prioritization methods in the AHP-GDM. Simulation experiments and numerical examples are presented to demonstrate the validity of the proposed model.
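
The paper's LED measure is defined over linguistic judgements; as a simplified numeric stand-in, a distance-based consensus degree over the decision makers' priority vectors (assumed non-negative and normalized to sum to 1) can be sketched as:

```python
import numpy as np

def consensus_degree(priority_vectors):
    """Average pairwise Euclidean distance between decision makers'
    priority vectors, rescaled to [0, 1] with 1 meaning full
    consensus. sqrt(2) is the largest possible distance between two
    vectors that are non-negative and sum to 1."""
    P = np.asarray(priority_vectors, float)
    m = len(P)
    dists = [np.linalg.norm(P[i] - P[j])
             for i in range(m) for j in range(i + 1, m)]
    return 1.0 - float(np.mean(dists)) / np.sqrt(2.0)
```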

Relevance: 60.00%

Abstract:

The main objective of this study is to apply recently developed methods from statistical physics to time series analysis, in particular to electrical induction profiles from oil well data, in order to study the petrophysical similarity of those wells in a spatial distribution. For this, we used the detrended fluctuation analysis (DFA) method, in order to determine whether this technique can be used to characterize the fields spatially. After obtaining the DFA values for all wells, we applied clustering analysis using the non-hierarchical K-means method. Usually based on the Euclidean distance, K-means consists in dividing the N elements of a data matrix into k groups, so that the similarities among elements belonging to different groups are as small as possible. In order to test whether a dataset generated by the K-means method, or randomly generated datasets, form spatial patterns, we created the parameter Ω (index of neighborhood): high values of Ω reveal more aggregated data, while low values of Ω indicate scattered data or data without spatial correlation. We conclude that the DFA data from the 54 wells are grouped and can be used to characterize spatial fields. Applying the contour level technique, we confirm the results obtained by K-means, confirming that DFA is effective for spatial analysis.
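
A sketch of the clustering step, assuming each well is summarized by a scalar DFA exponent; `k = 3` is an illustrative choice, not the value used in the study:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_wells(dfa_exponents, k=3):
    """Cluster wells by their DFA scaling exponents with K-means,
    which partitions the data by Euclidean distance to the group
    centroids. Each well is assumed to be summarized by one
    exponent; the resulting labels can then be mapped spatially."""
    X = np.asarray(dfa_exponents, float).reshape(-1, 1)
    return KMeans(n_clusters=k, n_init=10).fit_predict(X)
```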

Relevance: 60.00%

Abstract:

The Northeast of Brazil (NEB) shows high climate variability, ranging from semiarid to rainy regions. According to the latest report of the Intergovernmental Panel on Climate Change, the NEB is highly susceptible to climate change, in particular to heavy rainfall events (HRE). However, few climatology studies of these episodes have been performed; thus, the main objective of this research is to compute the climatology and trends of the number of episodes and of the daily rainfall rate associated with HRE in the NEB and its climatologically homogeneous sub-regions, and to relate them to weak and normal rainfall events. Daily rainfall data from the hydrometeorological network managed by the Agência Nacional de Águas, from 1972 to 2002, were used. Rainfall events were selected using the quantile technique, and trends were identified using the Mann-Kendall test. The sub-regions were obtained by cluster analysis, using Euclidean distance as the similarity measure and Ward's agglomerative hierarchical method. The results show that the seasonality of the NEB is being intensified, i.e., the dry season is becoming drier and the wet season wetter. El Niño and La Niña influence the number of events more than their intensity, but in the sub-regions this influence is less noticeable. Using daily ERA-Interim reanalysis data, composite anomaly fields of meteorological variables were calculated for the coast of the NEB to characterize the synoptic environment. The upper-level cyclonic vortex and the South Atlantic convergence zone were identified as the main weather systems responsible for the formation of HRE on the coastal region.
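
The regionalization step maps onto standard tooling: Ward's agglomerative hierarchical clustering, which is defined on Euclidean distances. A sketch assuming each rain gauge is represented by a feature vector derived from its series; the number of sub-regions is illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def homogeneous_subregions(station_features, n_regions=4):
    """Group rain gauges into homogeneous sub-regions with Ward's
    agglomerative hierarchical method, which SciPy computes over
    Euclidean distances. Each row of station_features describes one
    gauge (e.g. rainfall statistics)."""
    Z = linkage(np.asarray(station_features, float), method="ward")
    return fcluster(Z, t=n_regions, criterion="maxclust")
```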

Relevance: 60.00%

Abstract:

Precise and fast identification of downhole abnormalities is essential to prevent damage and increase production in the oil industry. This work presents a study of a new automatic approach to the detection and classification of operation modes in sucker-rod pumping through downhole dynamometer cards. The main idea is to recognize the well's production status through image processing of the downhole dynamometer card (boundary descriptors) together with statistical and similarity tools such as Fourier descriptors, principal component analysis (PCA), and Euclidean distance. In order to validate the proposal, real data from sucker-rod pumping systems are used.
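
A sketch of the recognition chain the abstract describes: card boundary, then Fourier descriptors, then PCA, then nearest labelled card by Euclidean distance. Function names and parameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

def fourier_descriptors(boundary, n_desc=16):
    """Fourier descriptors of a closed card boundary given as complex
    points x + iy; magnitudes are kept for rotation invariance and
    normalized by the first retained harmonic for scale invariance."""
    coeffs = np.fft.fft(np.asarray(boundary, complex))
    mags = np.abs(coeffs[1:n_desc + 1])
    return mags / mags[0]

def classify_card(query_boundary, train_boundaries, train_labels,
                  n_components=8):
    """Project Fourier descriptors with PCA, then assign the query
    card the label of its Euclidean nearest neighbour."""
    train_fd = np.array([fourier_descriptors(b) for b in train_boundaries])
    pca = PCA(n_components=n_components).fit(train_fd)
    train_proj = pca.transform(train_fd)
    q = pca.transform(fourier_descriptors(query_boundary).reshape(1, -1))
    nearest = int(np.argmin(np.linalg.norm(train_proj - q, axis=1)))
    return train_labels[nearest]
```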