965 results for Recognition algorithms
Abstract:
This paper delineates the development of a prototype hybrid knowledge-based system for the optimum design of liquid retaining structures by coupling the blackboard architecture, the expert system shell VISUAL RULE STUDIO and a genetic algorithm (GA). Through custom-built interactive graphical user interfaces in a user-friendly environment, the user is guided throughout the design process, which includes preliminary design, load specification, model generation, finite element analysis, code compliance checking, and member sizing optimization. For structural optimization, the GA is applied to the minimum cost design of structural systems with discrete reinforced concrete sections. The design of a typical liquid retaining structure is illustrated as an example. The results demonstrate extraordinarily fast convergence: near-optimal solutions are acquired after exploring merely a small portion of the search space. This system can act as a consultant to assist novice designers in the design of liquid retaining structures.
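As a hedged illustration of the optimization step, the sketch below evolves a population of discrete section choices toward minimum cost; the section catalogue, cost figures, and penalty rule are invented for the example and do not come from the paper.

```python
import random

# Hypothetical catalogue of discrete RC sections: (thickness_mm, steel_area_mm2, cost_per_m)
SECTIONS = [(200, 500, 55.0), (250, 650, 68.0), (300, 800, 82.0), (350, 1000, 97.0)]
N_MEMBERS = 6          # number of members to size in the structure
POP, GENS, PMUT = 40, 60, 0.1

def cost(design):
    """Total cost plus a penalty for (hypothetical) code-compliance violations."""
    base = sum(SECTIONS[i][2] for i in design)
    penalty = sum(100.0 for i in design if SECTIONS[i][0] < 250)  # stand-in constraint
    return base + penalty

def evolve():
    pop = [[random.randrange(len(SECTIONS)) for _ in range(N_MEMBERS)] for _ in range(POP)]
    for _ in range(GENS):
        # Tournament selection: each parent is the cheapest of 3 random picks
        parents = [min(random.sample(pop, 3), key=cost) for _ in range(POP)]
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, N_MEMBERS)            # one-point crossover
            c1, c2 = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (c1, c2):
                if random.random() < PMUT:                  # mutation: reassign one member
                    child[random.randrange(N_MEMBERS)] = random.randrange(len(SECTIONS))
            nxt += [c1, c2]
        pop = nxt
    return min(pop, key=cost)

best = evolve()
print("best design:", best, "cost:", cost(best))
```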
Abstract:
Dental implant recognition in patients without available records is a time-consuming and non-trivial task. The traditional method is a completely user-dependent process, in which the expert compares a 2D X-ray image of the dental implant with a generic database. Given the high number of implants available and the similarity between them, automatic or semi-automatic frameworks to aid implant model detection are essential. In this study, a novel computer-aided framework for dental implant recognition is proposed. The method relies on image processing concepts, namely: (i) a segmentation strategy for semi-automatic implant delineation; and (ii) a machine learning approach for implant model recognition. Although the segmentation technique is the main focus of the current study, preliminary details of the machine learning approach are also reported. Two different scenarios are used to validate the framework: (1) comparison of the semi-automatic contours against manual implant contours in 125 X-ray images; and (2) classification of 11 known implants using a large reference database of 601 implants. In experiment 1, a Dice metric of 0.97±0.01, a mean absolute distance of 2.24±0.85 pixels, and a Hausdorff distance of 11.12±6 pixels were obtained. In experiment 2, 91% of the implants were successfully recognized while reducing the reference database to 5% of its original size. Overall, the segmentation technique achieved accurate implant contours. Although the preliminary classification results demonstrate the concept of the current work, more features and an extended database should be used in future work.
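The three reported contour metrics can be computed along the following lines; this is a minimal sketch in which the inputs and function names are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks (numpy bool arrays)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def contour_distances(pts_a, pts_b):
    """Mean absolute distance and Hausdorff distance between two contours,
    each given as an (N, 2) array of pixel coordinates."""
    d = cdist(pts_a, pts_b)                                   # all pairwise distances
    mad = (d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0  # symmetric mean distance
    hausdorff = max(d.min(axis=1).max(), d.min(axis=0).max())  # worst-case deviation
    return mad, hausdorff
```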
Abstract:
In equids, a large number of events in the early stages of embryonic development are unique and distinct from those observed in other species, including the critical but poorly understood mechanisms of maternal recognition of pregnancy. This physiological process, through which the conceptus signals its presence to the maternal organism, ensuring the maintenance of the primary corpus luteum (CL) and, consequently, of the progesterone levels necessary for the survival and development of the embryo, remains poorly understood. In fact, it is still not clear which primary embryonic signal ensures the maintenance of the CL in equids and, although the number of potential factors contributing to the recognition and maintenance of pregnancy keeps growing, none of them alone satisfies all the criteria that characterize an anti-luteolytic or luteostatic factor. On the other hand, it is generally accepted that maternal recognition of pregnancy is a physiological phenomenon of extreme importance and that any failure, whether in sending or in receiving the appropriate signal, can lead to embryonic death. Indeed, pregnancy loss during or around the period in which maternal recognition of pregnancy occurs (i.e., days 10-16 after ovulation) is common but unpredictable (and therefore difficult to control and prevent) in clinical practice, and is a cause of major economic losses.
Abstract:
The motivation for this work comes from the author's need to record the notes played on the guitar during improvisation. When improvising on the guitar, the musician often does not remember the notes just played. This work concerns the development of an application for guitarists that records the notes played on an electric or classical guitar. The signal is acquired from the guitar and processed under real-time requirements for signal capture. The notes produced by the electric guitar, connected to the computer, are represented in tablature and/or score format. To this end, the application captures the signal coming from the electric guitar through the computer's sound card and uses frequency detection algorithms and per-note duration estimation algorithms to build the record of the notes played. The application is developed from a multi-platform perspective, so it can run on different Windows and Linux operating systems, using public domain tools and libraries. The results obtained show that it is possible to tune the guitar with errors on the order of 2 Hz relative to the standard tuning frequencies. The tablature output shows satisfactory results, which can nevertheless be improved; to that end it will be necessary to improve the implementation of the signal processing techniques, as well as the inter-process communication, in order to solve the problems found in the tests carried out.
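The abstract does not say which frequency detection algorithm is used; one common choice for monophonic guitar signals is autocorrelation-based pitch estimation, sketched below under an assumed 44.1 kHz sample rate.

```python
import numpy as np

def detect_pitch(frame, sample_rate=44100, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of an audio frame by autocorrelation
    (one common choice; the thesis does not specify its algorithm)."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # lags 0..N-1
    lo = int(sample_rate / fmax)              # shortest admissible period
    hi = int(sample_rate / fmin)              # longest admissible period
    lag = lo + np.argmax(corr[lo:hi])         # strongest periodicity in range
    return sample_rate / lag

# Example: a synthetic 196 Hz tone (open G string) sampled at 44.1 kHz
t = np.arange(4096) / 44100.0
print(detect_pitch(np.sin(2 * np.pi * 196.0 * t)))   # ~196 Hz
```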
Abstract:
This work uses a stacked p-i-n structure, based on a hydrogenated amorphous silicon-carbide alloy (a-Si:H and/or a-SiC:H), that works as an optical filter in the visible region of the electromagnetic spectrum. The aim is to use this device to demultiplex optical signals and to develop an algorithm for the autonomous recognition of the signal transmitted on each channel. The goal of this thesis is to implement an algorithm that allows the autonomous recognition of the information transmitted on each channel by reading the photocurrent supplied by the device. The topic of this work follows from the conclusions of previous works, in which this device and others of identical configuration were analyzed in order to explore their use in the implementation of WDM technology. In this work, three transmission channels (blue - 470 nm, green - 525 nm and red - 626 nm) and several types of background radiation were used. Measurements of the spectral response and of the temporal response of the device's photocurrent were carried out under different experimental conditions. The channel wavelength and the wavelength of the applied background were varied, while the channel intensity and the transmission frequency were kept constant. The results obtained made it possible to assess the influence of the presence of background radiation and of the voltage applied to the device, using different data sequences transmitted on the various channels. It was found that, under reverse bias, red background radiation amplifies the photocurrent values of the blue channel, and blue background radiation amplifies the red and green channels. Under forward bias, only blue background radiation amplifies the photocurrent values of the red channel, while under both bias conditions green background radiation has no major influence on the remaining channels. Two algorithms were implemented to recognize the information of each channel. In the first approach, the information contained in the photocurrent measurements generated by the device under reverse and forward bias was used; by comparing the two measurements, an algorithm that recognizes the individual channels was developed and tested. In a second approach, the information of each channel was recognized with background radiation applied, combining the information contained in the photocurrent measurements generated by the device under reverse bias without background radiation with the information contained in the measurements under reverse bias with background radiation applied. By comparing these two measurements, a second algorithm, which recognizes the individual channels based on the application of background radiation, was developed and tested.
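The abstract does not detail the decision rules of the two recognition algorithms; the sketch below shows one plausible shape for the decoding step, matching each pair of photocurrent readings (reverse and forward bias) against calibrated levels for the eight ON/OFF combinations of the three channels. All names and values are assumptions.

```python
def recover_channels(i_reverse, i_forward, levels):
    """Illustrative decoding step (the thesis's actual decision rules are not
    given in the abstract): each pair of photocurrent samples, taken under
    reverse and forward bias for the same bit slot, is matched against
    calibrated levels for the eight ON/OFF combinations of the R, G and B
    channels, and the nearest combination wins."""
    decoded = []
    for i_rev, i_fwd in zip(i_reverse, i_forward):
        state = min(levels, key=lambda s: (i_rev - levels[s][0]) ** 2
                                        + (i_fwd - levels[s][1]) ** 2)
        decoded.append(state)  # e.g. (1, 0, 1) means red and blue were ON
    return decoded

# Hypothetical calibration table: (reverse-bias, forward-bias) photocurrent
# for each (R, G, B) ON/OFF combination, in arbitrary units.
levels = {(r, g, b): (9 * r + 5 * g + 3 * b, 4 * r + 3 * g + 2 * b)
          for r in (0, 1) for g in (0, 1) for b in (0, 1)}
print(recover_channels([11.9, 5.2], [6.2, 3.1], levels))  # [(1, 0, 1), (0, 1, 0)]
```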
Abstract:
Large area hydrogenated amorphous silicon single and stacked p-i-n structures with low conductivity doped layers are proposed as monochrome and color image sensors. The layers of the structures are based on amorphous silicon alloys (a-SiₓC₁₋ₓ:H). The current-voltage characteristics and the spectral sensitivity under different bias conditions are analyzed. The output characteristics are evaluated under different read-out voltages and scanner wavelengths. To extract information on image shape, intensity and color, a modulated light beam scans the sensor active area at three appropriate bias voltages and the photoresponse at each scanning position ("sub-pixel") is recorded. The investigation of the sensor output under different scanner wavelengths and varying electrical bias reveals that the response can be tuned, thus enabling color separation. The operation of the sensor is exemplified and supported by a numerical simulation.
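A hedged sketch of the readout idea: three photoresponse readings per scanned sub-pixel, taken at three bias voltages, are mapped back to RGB intensities through a calibration matrix. The matrix values below are invented for illustration and are not from the paper.

```python
import numpy as np

# Illustrative calibration: response at each bias voltage to pure (R, G, B)
# illumination, measured once with known light sources (values made up).
CALIB = np.array([[0.9, 0.2, 0.1],    # response at bias V1
                  [0.3, 0.8, 0.2],    # response at bias V2
                  [0.1, 0.3, 0.7]])   # response at bias V3

def rgb_from_photoresponse(i_v1, i_v2, i_v3):
    """Recover (R, G, B) intensities for one sub-pixel by inverting the
    3x3 calibration, assuming the responses combine linearly."""
    return np.linalg.solve(CALIB, np.array([i_v1, i_v2, i_v3]))

print(rgb_from_photoresponse(0.9, 0.3, 0.1))   # pure red -> ~[1, 0, 0]
```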
Abstract:
This work investigates the impact of treating breast cancer with different radiation therapy (RT) techniques – forwardly-planned intensity-modulated RT (f-IMRT), inversely-planned IMRT and dynamic conformal arc RT (DCART) – and their effects on whole-breast irradiation and on the undesirable irradiation of the surrounding healthy tissues. Two algorithms of the iPlan BrainLAB treatment planning system were compared: Pencil Beam Convolution (PBC) and commercial Monte Carlo (iMC). Seven patients with left-sided breast cancer submitted to breast-conserving surgery were enrolled in the study. For each patient, four RT techniques – f-IMRT, IMRT using 2 fields and 5 fields (IMRT2 and IMRT5, respectively) and DCART – were applied. The dose distributions in the planning target volume (PTV) and the dose to the organs at risk (OAR) were compared by analyzing dose–volume histograms; further statistical analysis was performed using IBM SPSS v20 software. For PBC, all techniques provided adequate coverage of the PTV. However, statistically significant dose differences were observed between the techniques in the PTV, in the OAR and also in the pattern of dose distribution spreading into normal tissues. IMRT5 and DCART spread low doses into greater volumes of normal tissue, right breast, right lung and heart than the tangential techniques. However, IMRT5 plans improved the distributions for the PTV, exhibiting better conformity and homogeneity in the target and reduced high-dose percentages in the ipsilateral OAR. DCART did not present advantages over any of the techniques investigated. Differences were also found between the calculation algorithms: PBC estimated higher doses for the PTV, ipsilateral lung and heart than the iMC algorithm predicted.
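For readers unfamiliar with the comparison tool, a cumulative dose-volume histogram can be sketched as below; this is a generic illustration, not the iPlan implementation, and the example dose values are synthetic.

```python
import numpy as np

def cumulative_dvh(dose_voxels, bin_width=0.1):
    """Cumulative dose-volume histogram for one structure: for each dose
    level D, the fraction of the structure's volume receiving >= D."""
    levels = np.arange(0.0, dose_voxels.max() + bin_width, bin_width)
    volume_fraction = np.array([(dose_voxels >= d).mean() for d in levels])
    return levels, volume_fraction

# Example with synthetic per-voxel doses (Gy) for a hypothetical OAR
doses = np.random.gamma(shape=2.0, scale=5.0, size=10000)
levels, vol = cumulative_dvh(doses)
print(vol[0], vol[-1])   # 1.0 at 0 Gy, falling toward 0 at the maximum dose
```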
Abstract:
Introduction: Image resizing is a standard feature of Nuclear Medicine digital imaging. Upsampling is done by manufacturers so that the acquired images fit the display screen adequately, and resizing is applied whenever there is a need to increase - or decrease - the total number of pixels. This paper intends to compare the "hqnx" and the "nxSaI" magnification algorithms with two interpolation algorithms – "nearest neighbor" and "bicubic interpolation" – in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to the visual approach to the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not particularly noteworthy, although it was clearly noticed that bicubic interpolation presented the best results. In the 4x enlarged images the differences were significant, with the bicubic interpolated images presenting the best results. Hqnx resized images presented better quality than 4xSaI and nearest neighbor interpolated images; however, their intense "halo effect" greatly affects the definition and boundaries of the image contents. Conclusion: The hqnx and the nxSaI algorithms were designed for images with clear edges, so their use on Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation seems the most suitable, and its ever wider application appears to confirm it as an efficient algorithm across image types.
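The two interpolation baselines are available in common imaging libraries; a minimal sketch using Pillow follows (the hqnx and nxSaI pixel-art scalers are not part of Pillow and are omitted, and the file name is hypothetical).

```python
from PIL import Image

# Load a (hypothetical) Nuclear Medicine image and upsample it 4x with the
# two interpolation baselines compared in the paper.
img = Image.open("mpi_slice.png")
w, h = img.size

nearest = img.resize((4 * w, 4 * h), Image.NEAREST)   # nearest neighbor, 4x
bicubic = img.resize((4 * w, 4 * h), Image.BICUBIC)   # bicubic interpolation, 4x

nearest.save("mpi_slice_nearest_4x.png")
bicubic.save("mpi_slice_bicubic_4x.png")
```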
Abstract:
Introduction: A major focus of the data mining process - especially of machine learning research - is to automatically learn to recognize complex patterns and to help make adequate decisions based strictly on the acquired data. Since imaging techniques like Myocardial Perfusion Imaging (MPI) in Nuclear Cardiology can take up a huge part of the daily workflow and generate gigabytes of data, Computerized Analysis of the data could have advantages over Human Analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology for the evaluation of MPI Stress studies and for the decision of whether or not to continue the evaluation of each patient. The objective pursued was to automatically classify a patient's test into one of three groups: "Positive", "Negative" and "Indeterminate". "Positive" patients would proceed directly to the Rest part of the exam, "Negative" patients would be directly exempted from continuation, and only the "Indeterminate" group would require the clinician's analysis, thus economizing the clinician's effort, increasing workflow fluidity at the technologist's level and probably sparing patients' time. Methods: The WEKA v3.6.2 open source software was used to carry out a comparative analysis of three WEKA algorithms ("OneR", "J48" and "Naïve Bayes") in a retrospective study of the "SPECT Heart Dataset", available at the University of California, Irvine Machine Learning Repository, using as reference the corresponding clinical results signed off by expert nuclear cardiologists. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances" and "Receiver Operating Characteristic (ROC) Areas" were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed - and apparently supported by the findings - that machine learning algorithms could significantly assist, at an intermediate level, in the analysis of scintigraphic data obtained in MPI, namely after Stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both Technologists and Nuclear Cardiologists. In the continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve system accuracy.
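A sketch of the Naïve Bayes part of the comparison, transposed from WEKA (which the study actually used) to scikit-learn; the file layout assumed below is the UCI convention of class label first, followed by 22 binary features, and should be checked against the downloaded files.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import precision_score, roc_auc_score

# Assumed layout of the UCI "SPECT Heart" files: comma-separated rows,
# class label in the first column, 22 binary features after it.
train = np.loadtxt("SPECT.train", delimiter=",")
test = np.loadtxt("SPECT.test", delimiter=",")
y_tr, X_tr = train[:, 0], train[:, 1:]
y_te, X_te = test[:, 0], test[:, 1:]

clf = BernoulliNB().fit(X_tr, y_tr)          # Naive Bayes for binary features
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred))
print("ROC area:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```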
Abstract:
The indiscriminate use of antibiotics in food-producing animals has received increasing attention as a contributory factor in the international emergence of antibiotic-resistant bacteria (Woodward in Pesticide, veterinary and other residues in food, CRC Press, Boca Raton, 2004). Numerous analytical methods for quantifying antibacterial residues in edible animal products have been developed over the years (Woodward in Pesticide, veterinary and other residues in food, CRC Press, Boca Raton, 2004; Botsoglou and Fletouris in Handbook of food analysis, residues and other food component analysis, Marcel Dekker, Ghent, 2004). Amoxicillin (AMOX) being one of those critical veterinary drugs, efforts have been made to develop simple and expeditious methods for its control in food samples. In the literature, only one AMOX-selective electrode has been reported so far; in that work, a phosphotungstate:amoxycillinium ion exchanger was used as the electroactive material (Shoukry et al. in Electroanalysis 6:914–917, 1994). Designing new materials based on molecularly imprinted polymers (MIPs), complementary to the size and charge of AMOX, could lead to very selective interactions, thus enhancing the selectivity of the sensing unit. The AMOX-selective electrodes reported here used imprinted polymers as electroactive materials, with AMOX as the target molecule used to design a biomimetic imprinted cavity. Poly(vinyl chloride)-based sensors of methacrylic acid displayed Nernstian slopes (60.7 mV/decade) and low detection limits (2.9×10⁻⁵ mol/L). The potentiometric responses were not affected by pH within 4–5 and showed good selectivity. The electrodes were applied successfully to the analysis of real samples.
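As an illustration of how a slope figure like the one reported is obtained, the electrode potential can be regressed against the logarithm of concentration; the calibration readings below are invented for the example.

```python
import numpy as np

# Hypothetical ISE calibration: EMF readings (mV) at four AMOX concentrations.
# A Nernstian response for a monovalent ion is ~59.2 mV/decade at 25 °C.
conc = np.array([1e-5, 1e-4, 1e-3, 1e-2])        # mol/L
emf = np.array([102.0, 162.5, 223.4, 283.9])     # mV (made-up readings)

slope, intercept = np.polyfit(np.log10(conc), emf, 1)
print(f"slope: {slope:.1f} mV/decade")           # ~60.6, a near-Nernstian response
```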
Abstract:
As a result of the stressful conditions in aquaculture facilities, there is a high risk of bacterial infections among cultured fish. Chlortetracycline (CTC) is one of the antimicrobials used to address this problem. It is a broad-spectrum antibacterial, active against a wide range of Gram-positive and Gram-negative bacteria. Numerous analytical methods for screening, identifying and quantifying CTC in animal products have been developed over the years. An alternative and advantageous method should rely on expeditious and efficient procedures providing highly specific and sensitive measurements in food samples. Ion-selective electrodes (ISEs) could meet these criteria. The only ISE reported in the literature for this purpose used traditional electroactive materials. A selectivity enhancement could, however, be achieved by improving analyte recognition with molecularly imprinted polymers (MIPs). Several MIP particles were synthesized and used as electroactive materials. ISEs based on methacrylic acid monomers showed the best analytical performance in terms of slope (62.5 and 68.6 mV/decade) and detection limits (4.1×10⁻⁵ and 5.5×10⁻⁵ mol L⁻¹). The electrodes displayed good selectivity and were not affected by pH changes in the range 2.5 to 13. The sensors were successfully applied to the analysis of serum, urine and fish samples.
Abstract:
Learning and teaching processes, like all human activities, can be mediated through the use of tools. Information and communication technologies are now widespread within education. Their use in the daily life of teachers and learners affords engagement with educational activities at any place and time, not necessarily linked to an institution or a certificate. In the absence of formal certification, learning under these circumstances is known as informal learning. Despite the lack of certification, learning with technology in this way presents opportunities to gather information about, and present new ways of exploiting, an individual's learning. Cloud technologies provide ways to achieve this through new architectures, methodologies, and workflows that facilitate semantic tagging, recognition, and acknowledgment of informal learning activities. The transparency and accessibility of cloud services mean that institutions and learners can exploit existing knowledge to their mutual benefit. The TRAILER project facilitates this aim by providing a technological framework using cloud services, a workflow, and a methodology. The services facilitate the exchange of information and knowledge associated with informal learning activities, ranging from the use of social software through widgets to computer gaming and remote laboratory experiments. Data from these activities are shared among institutions, learners, and workers. The project demonstrates the possibility of gathering information related to informal learning activities independently of the context or tools used to carry them out.