19 results for Recognition algorithms
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
The use of iris recognition for human authentication has been spreading in the past years. Daugman has proposed a method for iris recognition composed of four stages: segmentation, normalization, feature extraction, and matching. In this paper we propose some modifications and extensions to Daugman's method to cope with noisy images. These modifications are proposed after a study of images from the CASIA and UBIRIS databases. The major modification is to the computationally demanding segmentation stage, for which we propose a faster and equally accurate template-matching approach. The extensions to the algorithm address the important issue of pre-processing, which depends on the image database and becomes mandatory when a non-infrared camera, such as a typical webcam, is used. For this scenario, we propose methods for reflection removal and for pupil enhancement and isolation. The tests, carried out by our C# application on grayscale CASIA and UBIRIS images, show that the template-matching segmentation method is more accurate and faster than the previous one for noisy images. The proposed algorithms are found to be efficient and necessary when dealing with non-infrared images and non-uniform illumination.
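The segmentation idea above can be illustrated with a minimal sketch: slide a circular template over the image and keep the position where the template best matches a dark, roughly circular region (the pupil). This is an assumption-laden toy version, not the paper's actual implementation, and all names and parameter values here are illustrative.

```python
import numpy as np

def circle_template(radius, size):
    """Binary disk template: 1 inside the circle, 0 outside."""
    yy, xx = np.mgrid[:size, :size]
    c = (size - 1) / 2.0
    return ((xx - c) ** 2 + (yy - c) ** 2 <= radius ** 2).astype(float)

def match_dark_disk(image, radius):
    """Slide a disk template over the image and return the top-left corner
    of the window where the mean intensity inside the disk is lowest
    (the pupil is the darkest roughly circular region)."""
    size = 2 * radius + 1
    tmpl = circle_template(radius, size)
    area = tmpl.sum()
    h, w = image.shape
    best_score, best_pos = np.inf, (0, 0)
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            score = (image[y:y + size, x:x + size] * tmpl).sum() / area
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

# Synthetic grayscale "eye": bright background, dark pupil-like disk.
img = np.full((40, 40), 200.0)
yy, xx = np.mgrid[:40, :40]
img[(xx - 25) ** 2 + (yy - 15) ** 2 <= 36] = 20.0  # disk at row 15, col 25, r = 6

y, x = match_dark_disk(img, 6)
print(y + 6, x + 6)  # prints "15 25", the recovered disk centre
```

A real pipeline would of course search over several radii and use integral images or FFT correlation to keep this stage fast, which is precisely the cost the paper's template-matching approach targets.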
Abstract:
Master's final project submitted for the degree of Master in Electronics and Telecommunications Engineering
Abstract:
3D laser scanning is becoming a standard technology for generating building models of a facility's as-is condition. Since most buildings are built from planar surfaces, recognizing those surfaces paves the way for automating the generation of building models. This paper introduces a new logarithmically proportional objective function that can be used in both heuristic and metaheuristic (MH) algorithms to discover planar surfaces in a point cloud without exploiting any prior knowledge about those surfaces. It can also adapt itself to the structural density of a scanned construction. In this paper, a metaheuristic method, the genetic algorithm (GA), is used to test the introduced objective function on a synthetic point cloud. The results obtained show that the proposed method is capable of finding all plane configurations of planar surfaces (of widely varying sizes) in the point cloud, with only a small deviation from the actual configurations. © 2014 IEEE.
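The flavour of such an objective can be sketched as follows. The exact formula from the paper is not reproduced here; instead, a hypothetical log-scaled closeness score is used, in which each point's contribution decays smoothly with its distance to the candidate plane, so that points on the plane count fully and distant points contribute almost nothing. A GA would then maximize this score over plane parameters; the evolutionary machinery is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scan: 200 points on the plane z = 0.5 plus 100 uniform outliers.
sheet = np.c_[rng.uniform(-1, 1, (200, 2)), 0.5 + rng.normal(0, 0.01, 200)]
cloud = np.vstack([sheet, rng.uniform(-1, 1, (100, 3))])

def plane_score(n, d, pts, tau=0.05):
    """Log-scaled closeness score for the plane n . x = d (illustrative
    stand-in for the paper's logarithmically proportional objective).
    Each point contributes log(1 + tau / (dist + tau)): log(2) on the
    plane, decaying smoothly toward zero far away."""
    n = np.asarray(n, float)
    dist = np.abs(pts @ n - d) / np.linalg.norm(n)
    return np.sum(np.log1p(tau / (dist + tau)))

good = plane_score([0, 0, 1], 0.5, cloud)  # the true supporting plane
bad = plane_score([1, 0, 0], 0.0, cloud)   # an unrelated vertical plane
print(good > bad)  # the objective clearly separates the two hypotheses
```

Because the score is smooth rather than a hard inlier count, it gives a GA a usable gradient of fitness even for candidate planes that are only approximately right.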
Abstract:
The motivation for this work comes from the author's need to record the notes played on the guitar while improvising. When a musician improvises on the guitar, he often does not remember the notes played at the moment. This work describes the development of an application for guitarists that records the notes played on an electric or classical guitar. The signal is acquired from the guitar and processed with real-time requirements for signal capture. The notes produced by the electric guitar, connected to the computer, are represented as tablature and/or musical score. To this end, the application captures the signal coming from the electric guitar through the computer's sound card and uses frequency-detection algorithms and note-duration estimation algorithms to build the record of the notes played. The application is developed with a multi-platform perspective and can run on different Windows and Linux operating systems, using public-domain tools and libraries. The results obtained show that it is possible to tune the guitar with errors on the order of 2 Hz relative to the standard tuning frequencies. The tablature output gives satisfactory results, which can nevertheless be improved; this will require better signal-processing techniques and improved inter-process communication to solve the problems found during testing.
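The frequency-detection core of such an application can be sketched with a classic autocorrelation pitch estimator (a generic technique, not necessarily the one used in the thesis): the lag of the first strong autocorrelation peak after lag zero corresponds to one period of the waveform.

```python
import numpy as np

def detect_pitch(signal, sample_rate):
    """Estimate the fundamental frequency of a monophonic frame by
    autocorrelation: skip lag 0 (start the search at the first zero
    crossing of the autocorrelation), then take the lag of the
    largest remaining peak as one period of the waveform."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    start = np.argmax(corr < 0)          # first negative lag index
    lag = start + np.argmax(corr[start:])
    return sample_rate / lag

# 110 Hz tone (open A string) with a couple of harmonics, 44.1 kHz.
sr = 44100
t = np.arange(0, 0.1, 1 / sr)
note = (np.sin(2 * np.pi * 110 * t)
        + 0.5 * np.sin(2 * np.pi * 220 * t)
        + 0.2 * np.sin(2 * np.pi * 330 * t))

f0 = detect_pitch(note, sr)
print(f0)  # close to 110 Hz
```

The estimator's resolution is limited by the integer lag grid (about 0.3 Hz at 110 Hz and 44.1 kHz), which is consistent with the roughly 2 Hz tuning accuracy reported above; parabolic interpolation around the peak is the usual refinement.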
Abstract:
This work uses a stacked p-i-n structure, based on a hydrogenated amorphous silicon-carbide alloy (a-Si:H and/or a-SiC:H), that acts as an optical filter in the visible region of the electromagnetic spectrum. The device is intended to demultiplex optical signals, and the goal is to develop an algorithm for the autonomous recognition of the signal transmitted on each channel. The aim of this thesis is to implement an algorithm that autonomously recognizes the information transmitted on each channel from the photocurrent delivered by the device. The topic follows from the conclusions of previous work in which this device, and others of identical configuration, were analyzed with a view to their use in implementing WDM technology. Three transmission channels were used (blue, 470 nm; green, 525 nm; red, 626 nm), together with several types of background radiation. The spectral response and the temporal response of the device photocurrent were measured under different experimental conditions. The channel wavelength and the background wavelength were varied while the channel intensity and the transmission frequency were kept constant. The results show the influence of the background radiation and of the voltage applied to the device, for different data sequences transmitted on the various channels. Under reverse bias, red background radiation amplifies the photocurrent of the blue channel, and blue background radiation amplifies the red and green channels. Under forward bias, only blue background radiation amplifies the photocurrent of the red channel, while, for both bias conditions, green background radiation has little influence on the other channels. Two algorithms were implemented to recognize the information carried on each channel.
In the first approach, the photocurrent measurements generated by the device under reverse and forward bias were used; by comparing the two measurements, an algorithm that recognizes the individual channels was developed and tested. In the second approach, each channel was recognized with background radiation applied: the photocurrent measured under reverse bias without background radiation was compared with the photocurrent measured under reverse bias with background radiation, and from this comparison a second algorithm was developed and tested that recognizes the individual channels based on the applied background radiation.
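The comparison-based recognition described above can be sketched abstractly: if each channel contributes a bias-dependent photocurrent and contributions add roughly linearly, the ON/OFF state of all channels can be decoded by matching the measured two-bias photocurrent vector against every possible channel combination. The responsivity numbers below are purely illustrative placeholders, not measurements from the thesis.

```python
import numpy as np

# Hypothetical channel responsivities (arbitrary units) of the stacked
# pin device under two bias conditions -- illustrative values only.
G = np.array([[1.0, 0.6, 1.3],   # reverse bias: blue, green, red
              [0.3, 0.5, 0.4]])  # forward bias: blue, green, red

def recognize(photocurrents, gains):
    """Decode which channels are ON by comparing the measured two-bias
    photocurrent vector against all 2**3 possible channel combinations
    (assumes the per-channel responses add linearly)."""
    patterns = [(b, g, r) for b in (0, 1) for g in (0, 1) for r in (0, 1)]
    errs = [np.linalg.norm(photocurrents - gains @ np.array(p))
            for p in patterns]
    return patterns[int(np.argmin(errs))]

true_bits = np.array([1, 0, 1])                # blue and red ON, green OFF
y = G @ true_bits + 0.02 * np.array([1, -1])   # measurement with small noise
print(recognize(y, G))  # prints (1, 0, 1)
```

The thesis's second algorithm swaps the forward-bias row for a reverse-bias measurement taken under background radiation, but the decoding principle, distinguishing channel combinations by their signatures across two measurement conditions, is the same.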
Abstract:
Large area hydrogenated amorphous silicon single and stacked p-i-n structures with low conductivity doped layers are proposed as monochrome and color image sensors. The layers of the structures are based on amorphous silicon alloys (a-Si(x)C(1-x):H). The current-voltage characteristics and the spectral sensitivity under different bias conditions are analyzed. The output characteristics are evaluated under different read-out voltages and scanner wavelengths. To extract information on image shape, intensity and color, a modulated light beam scans the sensor active area at three appropriate bias voltages and the photoresponse in each scanning position ("sub-pixel") is recorded. The investigation of the sensor output under different scanner wavelengths and varying electrical bias reveals that the response can be tuned, thus enabling color separation. The operation of the sensor is exemplified and supported by a numerical simulation.
Abstract:
This work investigates the impact of treating breast cancer using different radiation therapy (RT) techniques – forwardly-planned intensity-modulated RT (f-IMRT), inversely-planned IMRT and dynamic conformal arc RT (DCART) – and their effects on whole-breast irradiation and on the undesirable irradiation of the surrounding healthy tissues. Two algorithms of the iPlan BrainLAB treatment planning system were compared: Pencil Beam Convolution (PBC) and commercial Monte Carlo (iMC). Seven patients with left-sided breast cancer submitted to breast-conserving surgery were enrolled in the study. For each patient, four RT techniques – f-IMRT, IMRT using 2 fields and 5 fields (IMRT2 and IMRT5, respectively) and DCART – were applied. The dose distributions in the planned target volume (PTV) and the doses to the organs at risk (OAR) were compared by analyzing dose–volume histograms; further statistical analysis was performed using IBM SPSS v20 software. For PBC, all techniques provided adequate coverage of the PTV. However, statistically significant dose differences were observed between the techniques in the PTV, in the OAR, and also in the pattern of dose distribution spreading into normal tissues. IMRT5 and DCART spread low doses into greater volumes of normal tissue, right breast, right lung and heart than the tangential techniques did. However, IMRT5 plans improved the distributions for the PTV, exhibiting better conformity and homogeneity in the target and reduced high-dose percentages in the ipsilateral OAR. DCART did not present advantages over any of the techniques investigated. Differences were also found when comparing the calculation algorithms: PBC estimated higher doses for the PTV, ipsilateral lung and heart than the iMC algorithm predicted.
Abstract:
Project work submitted for the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Master's degree in Accounting
Abstract:
Master's final project submitted for the degree of Master in Informatics and Computer Engineering
Abstract:
Real structures can be thought of as an assembly of components, such as plates, shells and beams. This latter type of component is very commonly found in structures like frames, which can involve a significant degree of complexity, or as a reinforcement element of plates or shells. To obtain the desired mechanical behavior of these components, or to improve their operating conditions when rehabilitating structures, one possible parameter to consider, when allowed, is the location of the supports. In the present work, a beam-type structure is considered and, for a set of cases concerning different numbers and types of supports as well as different load cases, the authors optimize the location of the supports in order to obtain minimum values of the maximum transverse deflection. The optimization processes are carried out using genetic algorithms. The results obtained clearly show the good performance of the proposed approach. © 2014 IEEE.
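A minimal version of this problem can be sketched for one case: a uniformly loaded beam resting on two symmetric pin supports whose position is the design variable. An Euler-Bernoulli finite element model (standard Hermite beam elements) evaluates the maximum deflection, and an exhaustive search over support positions stands in for the paper's GA, whose encoding and selection machinery is omitted here.

```python
import numpy as np

def max_deflection(n_elems, i_sup, L=1.0, EI=1.0, q=1.0):
    """Max |deflection| of a uniformly loaded beam resting on two pin
    supports placed symmetrically at nodes i_sup and n_elems - i_sup
    (Euler-Bernoulli FEM with Hermite beam elements)."""
    h = L / n_elems
    ndof = 2 * (n_elems + 1)              # (deflection, rotation) per node
    K = np.zeros((ndof, ndof))
    F = np.zeros(ndof)
    ke = EI / h**3 * np.array([[12,   6*h,  -12,   6*h],
                               [6*h, 4*h*h, -6*h, 2*h*h],
                               [-12, -6*h,   12,  -6*h],
                               [6*h, 2*h*h, -6*h, 4*h*h]])
    fe = q * h / 12 * np.array([6, h, 6, -h])  # consistent nodal load
    for e in range(n_elems):
        d = slice(2 * e, 2 * e + 4)
        K[d, d] += ke
        F[d] += fe
    fixed = [2 * i_sup, 2 * (n_elems - i_sup)]  # w = 0 at the supports
    free = [i for i in range(ndof) if i not in fixed]
    w = np.zeros(ndof)
    w[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
    return np.abs(w[::2]).max()

# Exhaustive search over symmetric support positions (a GA would search
# the same space for the irregular, multi-support cases of the paper).
n = 40
best = min(range(1, n // 2), key=lambda i: max_deflection(n, i))
print(best / n)  # optimal support offset as a fraction of the span
```

The optimum lands at an interior position (supports pulled in from the beam ends by roughly a fifth of the span), balancing the overhang tip deflection against the midspan sag, which is exactly the trade-off the GA exploits in the less symmetric cases.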
Abstract:
Supramolecular chirality was achieved in solutions and thin films of a calixarene-containing chiral aryleneethynylene copolymer. The observed chiroptical activity, which is primarily allied with the formation of aggregates of high-molecular-weight polymer chains, is the result of a combination of intrachain and interchain effects. The former arises from the adoption of an induced helix-sense by the polymer main-chain, while the latter comes from the exciton coupling of aromatic backbone transitions. The co-existence of bulky bis-calixarene units and chiral side-chains on the polymer skeleton prevents efficient pi-stacking of neighbouring chains, keeping the chiral assembly highly emissive. In contrast, for a model polymer lacking calixarene moieties, the chiroptical activity is dominated by strong interchain exciton couplings as a result of more favourable packing of polymer chains, leading to a marked decrease of photoluminescence in the aggregate state. The enantiomeric recognition abilities of both polymers towards (R)- and (S)-alpha-methylbenzylamine were examined. It was found that significant enantiodiscrimination is exhibited by the calixarene-based polymer in the aggregate state.
Abstract:
Many learning problems require handling high-dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) for the learning tasks. It is thus clear that there is a need for adequate techniques for feature representation, reduction, and selection, to improve both classification accuracy and memory usage. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium- and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
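The combination of unsupervised discretization and selection can be sketched generically (this is not the authors' specific method): discretize each feature into equal-frequency bins using its own quantiles, then rank features by the entropy of their bin histogram, so that constant or near-constant features, which carry no information, are dropped first.

```python
import numpy as np

def discretize_equal_freq(X, bins=4):
    """Unsupervised equal-frequency discretization: each feature is
    mapped to integer bin indices via its own quantiles."""
    edges = np.quantile(X, np.linspace(0, 1, bins + 1)[1:-1], axis=0)
    return np.array([np.searchsorted(edges[:, j], X[:, j])
                     for j in range(X.shape[1])]).T

def select_by_dispersion(Xd, top_m):
    """Rank discretized features by the entropy of their bin histogram
    (a simple unsupervised relevance proxy, not the paper's criterion)
    and keep the top_m most dispersed ones."""
    def entropy(col):
        p = np.bincount(col) / len(col)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    scores = np.array([entropy(Xd[:, j]) for j in range(Xd.shape[1])])
    return np.argsort(scores)[::-1][:top_m]

rng = np.random.default_rng(1)
X = np.c_[rng.normal(0, 1, (500, 3)),   # informative, spread-out features
          np.full((500, 2), 3.0)]       # constant (irrelevant) features
Xd = discretize_equal_freq(X)
keep = select_by_dispersion(Xd, top_m=3)
print(sorted(int(k) for k in keep))  # prints [0, 1, 2]: constants dropped
```

Because both steps are label-free, the same pipeline applies to the unsupervised settings the paper targets; a supervised variant would simply swap the entropy score for a class-conditional relevance measure.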
Abstract:
The Evidence Accumulation Clustering (EAC) paradigm is a clustering ensemble method which derives a consensus partition from a collection of base clusterings obtained using different algorithms. It collects from the partitions in the ensemble a set of pairwise observations about the co-occurrence of objects in the same cluster, and uses these co-occurrence statistics to derive a similarity matrix, referred to as the co-association matrix. The Probabilistic Evidence Accumulation for Clustering Ensembles (PEACE) algorithm is a principled approach for the extraction of a consensus clustering from the observations encoded in the co-association matrix, based on a probabilistic model for the co-association matrix parameterized by the unknown assignments of objects to clusters. In this paper we extend the PEACE algorithm by deriving a consensus solution according to a MAP approach with Dirichlet priors defined over the unknown probabilistic cluster assignments. In particular, we study the positive regularization effect of Dirichlet priors on the final consensus solution with both synthetic and real benchmark data.
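The co-association step of EAC is easy to sketch. Below, the matrix entry (i, j) is the fraction of base partitions in which objects i and j share a cluster, and a deliberately simple consensus extraction (connected components of the thresholded similarity graph) stands in for the probabilistic PEACE/MAP machinery described above.

```python
import numpy as np

def co_association(partitions):
    """Co-association matrix of the EAC paradigm: entry (i, j) is the
    fraction of base partitions in which objects i and j co-occur in
    the same cluster."""
    P = np.asarray(partitions)  # shape: (n_partitions, n_objects)
    return (P[:, :, None] == P[:, None, :]).mean(axis=0)

def consensus_by_components(C, threshold=0.5):
    """Simple consensus extraction: link objects whose co-association
    exceeds the threshold and label connected components (a basic
    stand-in for the PEACE/MAP extraction, not the paper's method)."""
    n = C.shape[0]
    labels = -np.ones(n, dtype=int)
    cur = 0
    for seed in range(n):
        if labels[seed] >= 0:
            continue
        labels[seed] = cur
        stack = [seed]
        while stack:
            i = stack.pop()
            for j in np.where((C[i] > threshold) & (labels < 0))[0]:
                labels[j] = cur
                stack.append(j)
        cur += 1
    return labels

# Three noisy base clusterings of six objects (two true groups).
ensemble = [[0, 0, 0, 1, 1, 1],
            [1, 1, 1, 0, 0, 0],   # same split, different label names
            [0, 0, 1, 1, 1, 1]]   # one object misassigned
C = co_association(ensemble)
labels = consensus_by_components(C)
print(labels)  # prints [0 0 0 1 1 1]
```

Note how the co-association matrix is invariant to the arbitrary label names of each base partition, which is what lets evidence from heterogeneous clusterings be accumulated at all; PEACE replaces the hard threshold with a generative model of these co-occurrence counts.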