41 results for Place recognition algorithm


Relevance: 20.00%

Publisher:

Abstract:

Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
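The geometry above lends itself to a compact sketch: each endmember is found by projecting the dimensionality-reduced data onto a direction orthogonal to the subspace spanned by the vertices already found, and taking the most extreme pixel as the next vertex. The NumPy routine below is an illustrative rendition of that iteration, not the authors' reference implementation.

```python
import numpy as np

def vca(R, p, seed=0):
    """Illustrative VCA-style endmember extraction.
    R: (bands, pixels) mixed spectral data; p: number of endmembers."""
    rng = np.random.default_rng(seed)
    L, N = R.shape
    # Reduce to a p-dimensional signal subspace via SVD.
    U, _, _ = np.linalg.svd(R @ R.T / N)
    X = U[:, :p].T @ R                                        # (p, N)
    u = X.mean(axis=1, keepdims=True)
    Y = X / (np.sum(X * u, axis=0, keepdims=True) + 1e-12)   # projective scaling
    indices = np.zeros(p, dtype=int)
    A = np.zeros((p, p))
    A[-1, 0] = 1                                              # auxiliary start vertex
    for i in range(p):
        w = rng.standard_normal((p, 1))
        f = w - A @ np.linalg.pinv(A) @ w                     # orthogonal to found vertices
        f /= np.linalg.norm(f)
        v = (f.T @ Y).ravel()                                 # project all pixels onto f
        indices[i] = int(np.argmax(np.abs(v)))                # extreme pixel = new vertex
        A[:, i] = Y[:, indices[i]]
    return U[:, :p] @ X[:, indices]                           # signatures in original space
```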

Relevance: 20.00%

Publisher:

Abstract:

Dose calculation is one of the key steps in radiotherapy planning [1-5]. This calculation should be as accurate as possible, which over the years has become feasible through the implementation of new dose calculation algorithms in the treatment planning systems used in radiotherapy. When a breast tumour is irradiated, a precise dose distribution is fundamental to ensure planning target volume (PTV) coverage and to prevent skin complications. Some investigations using breast cases showed that the pencil beam convolution (PBC) algorithm overestimates the dose in the PTV and in the proximal region of the ipsilateral lung, but underestimates the dose in the distal region of the ipsilateral lung, when compared with the analytical anisotropic algorithm (AAA). With this study we aim to compare the performance of the PBC and AAA algorithms in breast tumours.

Relevance: 20.00%

Publisher:

Abstract:

Conference paper - 16th International Symposium on Wireless Personal Multimedia Communications (WPMC), Jun 24-27, 2013

Relevance: 20.00%

Publisher:

Abstract:

Liver steatosis is mainly a textural abnormality of the hepatic parenchyma due to fat accumulation in the hepatic vesicles. Today, the assessment is performed subjectively by visual inspection. Here, a classifier based on features extracted from ultrasound (US) images is described for the automatic diagnosis of this pathology. The proposed algorithm estimates the original ultrasound radio-frequency (RF) envelope signal, from which the noiseless anatomic information and the textural information encoded in the speckle noise are extracted. The features characterizing the textural information are the coefficients of the first-order autoregressive model that describes the speckle field. A binary Bayesian classifier was implemented and the Bayes factor was calculated. The classification revealed an overall accuracy of 100%. The Bayes factor could be helpful in the graphical display of the quantitative results for diagnostic purposes.
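As a rough illustration of such textural features, the sketch below fits a first-order two-dimensional autoregressive model to a speckle field by least squares; the causal neighbourhood and the `speckle` input are assumptions of this sketch rather than the paper's exact formulation.

```python
import numpy as np

def ar1_features(speckle):
    """Fit s[i,j] ~ a*s[i-1,j] + b*s[i,j-1] + c to a 2-D speckle field
    and return (a, b, c) as textural features (illustrative model)."""
    s = np.asarray(speckle, dtype=float)
    y = s[1:, 1:].ravel()
    X = np.column_stack([
        s[:-1, 1:].ravel(),   # vertical neighbour
        s[1:, :-1].ravel(),   # horizontal neighbour
        np.ones(y.size),      # intercept
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```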

Relevance: 20.00%

Publisher:

Abstract:

Project work for obtaining the Master's degree in Electronics and Telecommunications Engineering

Relevance: 20.00%

Publisher:

Abstract:

Dissertation for obtaining the Master's degree in Computer Science and Computer Engineering

Relevance: 20.00%

Publisher:

Abstract:

Objective of the study: to compare the performance of the Pencil Beam Convolution (PBC) and the Analytical Anisotropic Algorithm (AAA) in 3D conformal radiotherapy treatment planning for breast tumours.

Relevance: 20.00%

Publisher:

Abstract:

Final Master's project for obtaining the Master's degree in Electrotechnical Engineering, Energy branch

Relevance: 20.00%

Publisher:

Abstract:

In visual sensor networks, local feature descriptors can be computed at the sensing nodes, which work collaboratively on the acquired data to perform efficient visual analysis. In fact, with a minimal amount of computational effort, the detection and extraction of local features, such as binary descriptors, can provide a reliable and compact image representation. This paper proposes to extract and code binary descriptors to meet the energy and bandwidth constraints at each sensing node. The major contribution is a binary descriptor coding technique that exploits correlation using two different coding modes: Intra, which exploits the correlation between the elements that compose a descriptor; and Inter, which exploits the correlation between descriptors of the same image. The experimental results show bitrate savings of up to 35% without any impact on the performance of the image retrieval task.
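To make the Inter mode concrete, here is a toy residual coder: each binary descriptor is XOR-predicted from the previous descriptor of the same image, so correlated descriptors leave sparse residuals that an entropy coder can then compress. This sketches the general principle only; the actual coding modes are defined in the paper.

```python
import numpy as np

def inter_encode(descriptors):
    """XOR each binary descriptor (rows of a 0/1 matrix) with the previous
    one; correlated descriptors yield sparse, compressible residuals."""
    d = np.asarray(descriptors, dtype=np.uint8)
    residual = d.copy()
    residual[1:] ^= d[:-1]
    return residual

def inter_decode(residual):
    """Invert the XOR prediction chain to recover the descriptors."""
    d = residual.copy()
    for i in range(1, len(d)):
        d[i] ^= d[i - 1]
    return d
```

Round-tripping any binary matrix through inter_encode and inter_decode recovers it exactly; the bitrate saving comes from the residual's lower entropy.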

Relevance: 20.00%

Publisher:

Abstract:

In this paper, an automatic classification algorithm is proposed for the diagnosis of liver steatosis, also known as fatty liver, from ultrasound images. The features automatically extracted from the ultrasound images and used by the classifier are basically the ones used by physicians in the diagnosis of the disease based on visual inspection of the ultrasound images. The main novelty of the method is the utilization of the speckle noise that corrupts the ultrasound images to compute textural features of the liver parenchyma relevant for the diagnosis. The algorithm uses the Bayesian framework to compute a noiseless image, containing the anatomic and echogenic information of the liver, and a second image containing only the speckle noise, used to compute the textural features. The classification results with the Bayes classifier, using manually classified data as ground truth, show that the automatic classifier reaches an accuracy of 95% and a sensitivity of 100%.
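For a concrete starting point, the snippet below implements a minimal two-class Bayes classifier with Gaussian class-conditional likelihoods and diagonal covariances; it is an illustrative stand-in for the Bayesian framework described above, not the paper's implementation.

```python
import numpy as np

class GaussianBayes:
    """Two-class Bayes classifier: Gaussian likelihoods, diagonal covariance."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = [X[y == c].mean(axis=0) for c in self.classes]
        self.var = [X[y == c].var(axis=0) + 1e-9 for c in self.classes]
        self.log_prior = [np.log(np.mean(y == c)) for c in self.classes]
        return self

    def _log_lik(self, X, k):
        # Log density of a diagonal Gaussian, summed over features.
        z = (X - self.mu[k]) ** 2 / self.var[k] + np.log(2 * np.pi * self.var[k])
        return -0.5 * z.sum(axis=1)

    def log_bayes_factor(self, X):
        """Log ratio of the two class likelihoods (log Bayes factor)."""
        return self._log_lik(X, 1) - self._log_lik(X, 0)

    def predict(self, X):
        scores = np.stack([self._log_lik(X, k) + self.log_prior[k]
                           for k in range(len(self.classes))], axis=1)
        return self.classes[np.argmax(scores, axis=1)]
```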

Relevance: 20.00%

Publisher:

Abstract:

Supramolecular chirality was achieved in solutions and thin films of a calixarene-containing chiral aryleneethynylene copolymer. The observed chiroptical activity, which is primarily allied with the formation of aggregates of high molecular weight polymer chains, is the result of a combination of intrachain and interchain effects. The former arises by the adoption of an induced helix-sense by the polymer main-chain while the latter comes from the exciton coupling of aromatic backbone transitions. The co-existence of bulky bis-calixarene units and chiral side-chains on the polymer skeleton prevents efficient pi-stacking of neighbouring chains, keeping the chiral assembly highly emissive. In contrast, for a model polymer lacking calixarene moieties, the chiroptical activity is dominated by strong interchain exciton couplings as a result of more favourable packing of polymer chains, leading to a marked decrease of photoluminescence in the aggregate state. The enantiomeric recognition abilities of both polymers towards (R)- and (S)-alpha-methylbenzylamine were examined. It was found that a significant enantiodiscrimination is exhibited by the calixarene-based polymer in the aggregate state.

Relevance: 20.00%

Publisher:

Abstract:

Many learning problems require handling high-dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) to the learning tasks. There is thus a clear need for adequate techniques for feature representation, reduction, and selection, to improve classification accuracy and reduce memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium- and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
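As a hedged illustration of this kind of pipeline (not the specific techniques proposed in the paper), the sketch below applies equal-frequency discretization and then ranks the discretized features by an unsupervised dispersion score, keeping only the top-scoring ones.

```python
import numpy as np

def discretize_and_select(X, n_bins=4, n_keep=100):
    """Equal-frequency discretization per feature, then unsupervised
    ranking by the variance of the discrete codes (toy criterion)."""
    qs = np.quantile(X, np.linspace(0, 1, n_bins + 1)[1:-1], axis=0)
    Xd = np.stack([np.searchsorted(qs[:, j], X[:, j])
                   for j in range(X.shape[1])], axis=1)
    scores = Xd.var(axis=0)              # flat features carry little information
    keep = np.argsort(scores)[::-1][:n_keep]
    return Xd[:, keep], keep
```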

Relevance: 20.00%

Publisher:

Abstract:

Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and are able to act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster.
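A minimal sketch of a low-complexity relevance/redundancy filter in the same spirit (correlation-based relevance followed by greedy redundancy screening) is given below; the actual criteria proposed in the paper may differ.

```python
import numpy as np

def rr_filter(X, y, k=50, max_corr=0.9):
    """Rank features by |correlation with y|, then greedily keep those not
    too correlated with already-kept features (toy relevance/redundancy)."""
    n = len(y)
    Xs = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    ys = (y - y.mean()) / (y.std() + 1e-12)
    relevance = np.abs(Xs.T @ ys) / n
    kept = []
    for j in np.argsort(relevance)[::-1]:
        # Redundancy check: correlation against every already-kept feature.
        if all(abs(Xs[:, j] @ Xs[:, i]) / n < max_corr for i in kept):
            kept.append(j)
        if len(kept) == k:
            break
    return np.array(kept)
```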

Relevance: 20.00%

Publisher:

Abstract:

Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider metrics like performance efficiency and area efficiency, where the designer tries to obtain the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to the design of a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s, corresponding to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
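To give a flavour of the kind of formal analysis described, the toy model below bounds the cycle count of a blocked dense matrix multiplication by the larger of its compute time and its external-memory transfer time, as a function of core count, local memory, and memory bandwidth. The tile-size choice and the roofline-style bound are assumptions of this sketch, not the equation derived in the paper.

```python
import math

def matmul_cycles(n, cores, local_mem_words, bw_words_per_cycle,
                  flops_per_cycle_per_core=2):
    """Roofline-style cycle bound for a blocked n x n matrix multiply
    (illustrative model; assumes n is a multiple of the tile size b)."""
    b = int(math.sqrt(local_mem_words / 3))       # three b x b tiles fit locally
    compute = 2 * n**3 / (cores * flops_per_cycle_per_core)
    traffic = 2 * n**3 / b                        # words streamed from external memory
    memory = traffic / bw_words_per_cycle
    return max(compute, memory)                   # whichever resource saturates
```

Larger local memories allow larger tiles, cutting external traffic proportionally to 1/b, which is why memory bandwidth and local storage trade off against raw core count in such designs.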