893 results for Feature Extraction Algorithms
Abstract:
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
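The abstract does not give the exact form of the merit function or of the repulsion strategy; purely as an illustration of the idea, the sketch below (in Python, with assumed parameter names such as rho and beta and an invented example system) combines a residual-norm merit with an erf-based repulsion penalty around previously located roots and minimizes it with SciPy's Nelder-Mead implementation.

# Illustrative sketch (not the authors' formulation): a repulsion-style merit
# function that penalizes proximity to roots already found, using erf, and is
# minimized with the Nelder-Mead method from SciPy.
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

def system(x):
    # Example 2-D nonlinear system F(x) = 0 (assumed for illustration).
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]**3])

def merit(x, found_roots, rho=0.5, beta=10.0):
    # Base merit: squared norm of the residuals.
    value = np.sum(system(x)**2)
    # Repulsion penalty: grows as x approaches a previously located root.
    for r in found_roots:
        d = np.linalg.norm(x - r)
        value += beta * (1.0 - erf(d / rho))
    return value

found = []
for start in np.random.uniform(-2, 2, size=(20, 2)):
    res = minimize(merit, start, args=(found,), method="Nelder-Mead")
    if res.fun < 1e-6 and all(np.linalg.norm(res.x - r) > 1e-3 for r in found):
        found.append(res.x)
print(found)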
Abstract:
Asymptomatic Plasmodium infection is a new challenge for public health in the American region. The polymerase chain reaction (PCR) is the best method for diagnosing subpatent parasitemias. In endemic areas, blood collection is hampered by geographical distances and deficient transport and storage conditions of the samples. Because DNA extraction from blood collected on filter paper is an efficient method for molecular studies in highly parasitemic individuals, we investigated whether the technique could be an alternative for Plasmodium diagnosis among asymptomatic and pauciparasitemic subjects. In this report we compared three different methods (Chelex®-saponin, methanol and TRIS-EDTA) of DNA extraction from blood collected on filter paper from asymptomatic Plasmodium-infected individuals. Polymerase chain reaction assays for the detection of Plasmodium species showed the best results when the Chelex®-saponin method was used. Even though the detection sensitivity was approximately 66% and 31% for P. falciparum and P. vivax, respectively, this method did not provide the DNA extraction efficiency required for the molecular diagnosis of Plasmodium. The development of better methods for extracting DNA from blood collected on filter paper is important for the diagnosis of subpatent malarial infections in remote areas and would contribute to establishing the epidemiology of this form of infection.
Abstract:
Currently there are several methods to extract bacterial DNA based on different principles. However, the amount and the quality of the DNA obtained by each of these methods is highly variable and microorganism dependent, as illustrated by coagulase-negative staphylococci (CoNS), which have a thick cell wall that is difficult to lyse. This study was designed to compare the quality and the amount of CoNS DNA extracted by four different techniques: two in-house protocols and two commercial kits. DNA amount and quality were determined by spectrophotometry. The extracted DNA was also analyzed by agarose gel electrophoresis and by PCR. A total of 267 CoNS isolates were used in this study. The column method and thermal lysis showed the better results with regard to DNA quality (mean A260/280 ratio = 1.95) and average DNA concentration, respectively. All four methods tested provided DNA suitable for PCR amplification, but with different yields. DNA quality is important since it allows the application of a large number of molecular biology techniques, as well as its storage for a longer period of time. In this sense, the extraction method based on an extraction column presented the best results for CoNS.
Abstract:
The tourism sector is an area of strong growth in Portugal, and one that has been developing its promotion and marketing strategy. However, that strategy relies only on performance and installed-capacity indicators (number of rooms, hotels, flights, overnight stays), leaving statistical indicators in the background. According to the "Travel & Tourism Competitiveness Report 2013" of the World Economic Forum, Portugal ranks 72nd with respect to the quality and coverage of the statistical information available for the tourism sector; Spain, by comparison, ranks 3rd. A market strategy without an analytical basis to support a specific and objective framework of guidelines, with relevant knowledge of the target markets, is hardly comprehensible or even achievable. The implementation of a Business Intelligence structure that enables the collection and processing of data capable of relating and supporting the results obtained in the tourism sector is fundamental and crucial to the creation of market strategies. These strategies are built from information about the tourists who visit us, and about potential tourists, so that they can be attracted in the future. The analysis of tourists' characteristics and behavioural patterns makes it possible to define distinct profiles and thus detect market trends, in order to promote the most suitable products and services. The knowledge obtained allows, on the one hand, the creation and provision of the most attractive products to offer to tourists and, on the other, informing them, in a targeted way, of the existence of those products. Thus, a personalized recommendation that, based on knowledge of tourist profiles, suggests the most suitable products emerges as an essential tool for capturing and expanding the market.
Abstract:
Nowadays, real-time systems grow in importance and complexity. With the transition from uniprocessor to multiprocessor environments, the work done for the former is not fully applicable to the latter, since the level of complexity differs, mainly due to the existence of multiple processors in the system. It was understood early on that the complexity of the problem does not grow linearly with the addition of processors. In fact, this complexity stands as a barrier to scientific progress in this area, which for now remains largely uncharted, and this is witnessed essentially in the case of task scheduling. The move to this new environment, whether for real-time systems or not, promises the opportunity to perform work that would never be possible in the uniprocessor case, thus creating new performance guarantees, lower monetary costs and lower energy consumption. This last factor emerged early on as perhaps the greatest barrier to the development of new uniprocessor chips: as new processors reached the market offering higher performance, they also revealed a heat-generation limit that forced the emergence of the multiprocessor field. In the future, the number of processors on a given chip is expected to increase, and, obviously, new techniques to exploit their inherent advantages must be developed; the area of scheduling algorithms is no exception. Over the years, different categories of multiprocessor algorithms have been developed to address this problem, most notably global, partitioned and semi-partitioned algorithms. The global approach assumes the existence of a global queue accessible by all available processors. This makes task migration possible, i.e., the execution of a task can be stopped and resumed on a different processor. At any given instant, from the set of ready tasks, the m highest-priority tasks are selected for execution. This category promises high utilization bounds, at the cost of a high number of task preemptions/migrations. In contrast, partitioned algorithms place tasks into partitions, and each partition is assigned to one of the available processors, i.e., each processor is assigned one partition. For that reason, task migration is not possible, which means the utilization bound is not as high as in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid answer between the previous two: some tasks are split so as to be executed exclusively by a group of processors, while others are assigned to a single processor. This yields a solution capable of distributing the work to be performed in a more efficient and balanced way. Unfortunately, in all these cases there is a gap between theory and practice, since assumptions end up being made that do not hold in real life. To address this problem, it is necessary to implement these scheduling algorithms in real operating systems and assess their applicability so that, where they fall short, the necessary changes can be made, both at the theoretical and at the practical level.
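None of the code below comes from the dissertation; it is a minimal sketch of the global dispatching rule described above, in which, at each scheduling point, the m highest-priority ready tasks in a single shared queue are picked for the m processors. The Task fields and the earliest-deadline-first priority rule are assumptions made for the example.

# Minimal sketch of global dispatching: at each scheduling point, pick the
# m highest-priority ready tasks from one shared (global) queue. Priorities
# here follow EDF (earlier absolute deadline = higher priority), an assumption.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    deadline: float                  # absolute deadline, used as priority key
    name: str = field(compare=False)

def select_for_execution(ready_queue, m):
    """Return the m highest-priority tasks; the rest stay queued."""
    return heapq.nsmallest(m, ready_queue)

ready = [Task(12.0, "t1"), Task(5.0, "t2"), Task(9.0, "t3"), Task(7.0, "t4")]
running = select_for_execution(ready, m=2)
print([t.name for t in running])     # -> ['t2', 't4']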
Abstract:
Faced with the stagnation of uniprocessor technology over the last decade, the main microprocessor manufacturers found in multi-core technology the answer to the market's growing processing needs. For years, software developers saw their applications ride the performance gains delivered by each new generation of sequential processors, but as processing capacity now scales with the number of processors, sequential computations have to be decomposed into several concurrent parts that can execute in parallel, so that they can use the additional processing units and complete sooner. Parallel programming entails a paradigm completely distinct from sequential programming. Unlike the sequential computers typified by the von Neumann model, the heterogeneity of parallel architectures requires parallel programming models that abstract programmers from architectural details and simplify the development of concurrent applications. The most popular parallel programming models encourage programmers to identify concurrent instructions in their program logic and to express them as tasks that can be assigned to distinct processors to execute simultaneously. These tasks are typically spawned at run time and assigned to processors by the underlying runtime system. Because processing requirements are usually variable and not known a priori, the mapping of tasks to processors has to be determined dynamically, in response to unpredictable changes in the execution requirements. As the volume of computation grows, it becomes increasingly infeasible to guarantee its timing constraints on uniprocessor platforms. While real-time systems begin to adapt to the parallel computing paradigm, there is a growing drive to integrate real-time executions with interactive applications on the same hardware, in a world where technology becomes ever smaller, lighter, more ubiquitous and more portable. This integration requires scheduling solutions that simultaneously guarantee the timing requirements of real-time tasks and maintain an acceptable level of QoS for the remaining executions. To that end, it becomes imperative that real-time applications parallelize, so as to minimize their response times and maximize the utilization of the processing resources. This introduces a new dimension to the scheduling problem, which must respond correctly to new, unpredictable execution requirements and quickly work out the task mapping that best serves the system's performance criteria. Server-based scheduling makes it possible to reserve a fraction of the processing capacity for the execution of real-time tasks and to ensure that latency effects on their execution do not affect the reservations set aside for other executions. For tasks scheduled by their worst-case execution time, or tasks with variable execution times, it is likely that the reserved bandwidth will not be fully consumed. To improve system utilization, capacity-sharing algorithms donate the unused capacity to the execution of other tasks, while preserving the isolation guarantees between servers.
With proven efficiency in terms of space, time and communication, the work-stealing mechanism has been gaining popularity as a methodology for scheduling tasks with dynamic and irregular parallelism. The p-CSWS algorithm combines server-based scheduling with capacity-sharing and work-stealing to cover the scheduling needs of open real-time systems. While server-based scheduling makes it possible to share processing resources without interference in terms of delays, a new work-stealing policy operating on top of the capacity-sharing mechanism exploits parallelism in a way that improves application response times and system utilization. This thesis proposes an implementation of the p-CSWS algorithm for Linux. In line with the modular structure of the Linux scheduler, a new scheduling class is defined in order to evaluate the applicability of the p-CSWS heuristic under real circumstances. Once the obstacles intrinsic to Linux kernel programming were overcome, extensive experimental tests show that p-CSWS is more than an attractive theoretical concept, and that the heuristic exploitation of parallelism proposed by the algorithm benefits the response times of real-time applications as well as the performance and efficiency of the multiprocessor platform.
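The p-CSWS internals are not reproduced here; purely as an illustration of the work-stealing mechanism the algorithm builds on, the sketch below shows the usual double-ended queue discipline: each worker pushes and pops its own tasks at one end, while an idle worker steals the oldest task from another worker's opposite end. Class and function names are invented for the example.

# Illustrative sketch of the work-stealing discipline (not p-CSWS itself):
# each worker owns a deque; it pushes/pops tasks at the bottom, while idle
# workers steal from the top of a victim's deque, reducing contention.
import collections
import random
import threading

class WorkStealingDeque:
    def __init__(self):
        self._deque = collections.deque()
        self._lock = threading.Lock()

    def push(self, task):               # owner end
        with self._lock:
            self._deque.append(task)

    def pop(self):                      # owner end (LIFO, for locality)
        with self._lock:
            return self._deque.pop() if self._deque else None

    def steal(self):                    # thief end (FIFO, oldest work)
        with self._lock:
            return self._deque.popleft() if self._deque else None

def next_task(worker_id, deques):
    # Try local work first; otherwise steal from a random victim.
    task = deques[worker_id].pop()
    if task is not None:
        return task
    victims = [i for i in range(len(deques)) if i != worker_id]
    if not victims:
        return None
    return deques[random.choice(victims)].steal()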
Abstract:
Dissertation for obtaining the Master's Degree in Chemical and Biochemical Engineering
Abstract:
Phenolic acids are aromatic secondary plant metabolites, widely spread throughout the plant kingdom. Due to their biological and pharmacological properties, they have been playing an important role in phytotherapy, and consequently techniques for their separation and purification are needed. This thesis aims at exploring new sustainable separation processes based on ionic liquids (ILs) for the extraction of biologically active phenolic acids. For that purpose, three phenolic acids with similar chemical structures were selected: cinnamic acid, p-coumaric acid and caffeic acid. In recent years, it has been shown that ionic-liquid-based aqueous biphasic systems (ABSs) are valid alternatives for the extraction, recovery and purification of biomolecules when compared to conventional ABSs or extractions carried out with organic solvents. In particular, cholinium-based ILs represent a clear step towards a greener chemistry, while providing means for the implementation of efficient techniques for the separation and purification of biomolecules. In this work, ABSs were implemented with cholinium carboxylate ILs, using either a high-charge-density inorganic salt (K3PO4) or polyethylene glycol (PEG) to promote the phase separation of aqueous solutions containing the three phenolic acids. These systems allow the effect of the chemical structure of the anion on the extraction efficiency to be evaluated. Only one imidazolium-based IL was used, in order to establish the effect of the cation's chemical structure. The selective extraction of a single acid was also investigated. Overall, it was observed that phenolic acids display very complex behaviour in aqueous solution: dimerization, polymerization and hetero-association are quite frequent phenomena, depending on the pH conditions. These phenomena greatly hinder the correct quantification of these acids in solution.
Abstract:
Human Activity Recognition systems require objective and reliable methods that can be used in the daily routine and must offer consistent results according to the performed activities. These systems are under development and offer objective and personalized support for several applications, such as the healthcare area. This thesis aims to create a framework for human activity recognition based on accelerometry signals. Some new features and techniques inspired by audio recognition methodology are introduced in this work, namely the Log Scale Power Bandwidth and the application of Markov Models. Forward Feature Selection was adopted as the feature selection algorithm in order to improve clustering performance and limit computational demands. This method selects the most suitable set of features for activity recognition in accelerometry from a 423-dimensional feature vector. Several Machine Learning algorithms were applied to the accelerometry databases used – the FCHA and PAMAP databases – and showed promising results in activity recognition. The developed algorithm set constitutes a significant contribution to the development of reliable evaluation methods of movement disorders for diagnosis and treatment applications.
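The thesis' exact selection criterion and 423-dimensional feature set are not reproduced here; the sketch below is a minimal greedy Forward Feature Selection loop under assumed choices (a k-nearest-neighbours classifier and 5-fold cross-validated accuracy as the score), showing how features are added one at a time while they improve the model.

# Minimal greedy Forward Feature Selection sketch (illustrative, not the
# thesis implementation): add, one at a time, the feature that most improves
# cross-validated accuracy, stopping when no candidate helps.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def forward_feature_selection(X, y, max_features=10):
    selected, best_score = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = []
        for f in remaining:
            cols = selected + [f]
            score = cross_val_score(KNeighborsClassifier(), X[:, cols], y, cv=5).mean()
            scores.append((score, f))
        score, f = max(scores)
        if score <= best_score:        # no candidate improves the model
            break
        best_score = score
        selected.append(f)
        remaining.remove(f)
    return selected, best_score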
Abstract:
Human-Computer Interaction has been one of the main focuses of the technological community, especially the Natural User Interfaces (NUI) field of research, since the launch of the Kinect Sensor brought the goal of fully natural interfaces much closer to reality. Taking advantage of these conditions, the following research work proposes to compute the hand skeleton in order to recognize Sign Language shapes. The proposed solution uses the Kinect Sensor to achieve a good segmentation, and image analysis algorithms to extend the skeleton through the extraction of high-level features. In order to recognize complex hand shapes, the current research work proposes a redefinition of the hand contour that makes it invariant to translation, rotation and scaling operations, together with a set of tools to achieve good recognition. To validate the proposed solution, the Kinect Software Development Kit was extended to give developers access to the new set of inferred points, and a template-matching platform that uses the contour to define the hand shape was created; this prototype was tested under a set of predefined conditions, showed a good success rate, and proved suitable for real-time scenarios.
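The contour redefinition used in the thesis is not detailed in the abstract; as an illustration of one common way to obtain invariance to translation, rotation and scaling, the sketch below normalizes a 2-D contour by centring it, rescaling it by its RMS radius, and aligning its principal axis. The function name and the specific normalization choices are assumptions.

# Illustrative sketch (not the thesis' method): normalize a 2-D hand contour
# so that comparisons become invariant to translation, scale and rotation.
import numpy as np

def normalize_contour(points):
    pts = np.asarray(points, dtype=float)        # shape (N, 2)
    pts = pts - pts.mean(axis=0)                 # translation invariance
    scale = np.sqrt((pts**2).sum(axis=1).mean())
    pts = pts / scale                            # scale invariance
    # Rotation invariance: rotate so the principal axis lies along x.
    cov = np.cov(pts.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = eigvecs[:, np.argmax(eigvals)]
    angle = np.arctan2(principal[1], principal[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    return pts @ rot.T

# Two contours of the same shape at different positions, scales and
# orientations should now yield nearly identical normalized point sets,
# which is what a template-matching stage needs.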
Abstract:
The extraction of relevant terms from texts is an extensively researched task in Text Mining. Relevant terms have been applied in areas such as Information Retrieval or document clustering and classification. However, relevance has a rather fuzzy nature, since the classification of some terms as relevant or not relevant is not consensual. For instance, while words such as "president" and "republic" are generally considered relevant by human evaluators, and words like "the" and "or" are not, terms such as "read" and "finish" gather no consensus about their semantics and informativeness. Concepts, on the other hand, have a less fuzzy nature. Therefore, instead of deciding on the relevance of a term during the extraction phase, as most extractors do, I propose to first extract, from texts, what I have called generic concepts (all concepts) and postpone the decision about relevance to downstream applications, according to their needs. For instance, a keyword extractor may assume that the most relevant keywords are the most frequent concepts in the documents. Moreover, most statistical extractors are incapable of extracting single-word and multi-word expressions using the same methodology. These factors led to the development of the ConceptExtractor, a statistical and language-independent methodology which is explained in Part I of this thesis. In Part II, I show that the automatic extraction of concepts has great applicability. For instance, for the extraction of keywords from documents, using the Tf-Idf metric only on concepts yields better results than using Tf-Idf without concepts, especially for multi-word expressions. In addition, since concepts can be semantically related to other concepts, this allows us to build implicit document descriptors. These applications led to published work. Finally, I present some work that, although not published yet, is briefly discussed in this document.
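The ConceptExtractor itself is not reproduced here; as an illustration of the downstream keyword step described above, the sketch below restricts Tf-Idf to a previously extracted concept list (single- and multi-word) and ranks each document's concepts by their score. The documents and the concept list are made-up examples, and scikit-learn's TfidfVectorizer stands in for whichever weighting implementation the thesis actually uses.

# Illustrative sketch: rank previously extracted concepts (single- and
# multi-word) by Tf-Idf, assuming the concept list comes from an upstream
# extractor such as the one proposed in the thesis.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "the president of the republic addressed the national assembly",
    "the national assembly voted the budget proposed by the president",
]
concepts = ["president", "republic", "national assembly", "budget"]

vectorizer = TfidfVectorizer(vocabulary=concepts, ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(documents)

for doc_id, row in enumerate(tfidf.toarray()):
    ranking = sorted(zip(concepts, row), key=lambda p: p[1], reverse=True)
    print(doc_id, [c for c, score in ranking if score > 0])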
Abstract:
Nowadays, authentication studies for paintings require a multidisciplinary approach, based on the contribution of visual feature analysis but also on the characterization of materials and techniques. Moreover, it is important that the assessment of the authorship of a painting is supported by technical studies of a selected number of original artworks that cover the entire career of an artist. This dissertation is concerned with the work of the modernist painter Amadeo de Souza-Cardoso. It is divided into three parts. In the first part, we propose a tool based on image processing that combines information obtained by brushstroke and materials analysis. The resulting tool provides qualitative and quantitative evaluation of the authorship of the paintings; the quantitative element is particularly relevant, as it could be crucial in solving authorship controversies, such as judicial disputes. The brushstroke analysis was performed by combining two algorithms for feature detection, namely the Gabor filter and the Scale Invariant Feature Transform. Thanks to this combination (and to the use of the Bag-of-Features model), the proposed method shows an accuracy higher than 90% in distinguishing between images of Amadeo's paintings and images of artworks by other contemporary artists. For the molecular analysis, we implemented a semi-automatic system that uses hyperspectral imaging and elemental analysis. The system provides as output an image that depicts the mapping of the pigments present, together with any areas made using materials not coherent with Amadeo's palette. This visual output is a simple and effective way of assessing the results of the system. The tool based on the combination of brushstroke and molecular information was tested on twelve paintings, obtaining promising results. The second part of the thesis presents a systematic study of four selected paintings made by Amadeo in 1917. Although untitled, three of these paintings are commonly known as BRUT, Entrada and Coty; they are considered his most successful and genuine works. The materials and techniques of these artworks had never been studied before. The paintings were studied with a multi-analytical approach using micro-Energy Dispersive X-ray Fluorescence spectroscopy, micro-Infrared and Raman Spectroscopy, micro-Spectrofluorimetry and Scanning Electron Microscopy. The characterization of the materials and techniques Amadeo used in his last paintings, as well as the investigation of some of the conservation problems that affect them, is essential to enrich the knowledge on this artist. Moreover, the study of the materials in the four paintings reveals commonalities between the paintings BRUT and Entrada. This observation is also supported by the analysis of the elements present in a photograph of a collage (conserved at the Art Library of the Calouste Gulbenkian Foundation), the only remaining evidence of a supposed maquette of these paintings. The final part of the thesis describes the application of the image processing tools developed in the first part to a set of case studies; this experience demonstrates the potential of the tool to support painting analysis and authentication studies. The brushstroke analysis was used as an additional analysis in the evaluation process of four paintings attributed to Amadeo, and the system based on hyperspectral analysis was applied to the painting dated 1917. The case studies therefore serve as a bridge between the first two parts of the dissertation.
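The dissertation's exact pipeline and parameters are not reproduced here; the sketch below is a hedged illustration of the brushstroke-analysis combination mentioned above, extracting SIFT descriptors from a Gabor-filtered image and quantizing them into a Bag-of-Features histogram that could feed a standard classifier. File names, filter parameters and codebook size are assumptions.

# Illustrative sketch of a Gabor + SIFT + Bag-of-Features pipeline
# (parameters and file names are assumptions, not the dissertation's values).
import cv2
import numpy as np
from sklearn.cluster import KMeans

def brushstroke_histogram(image_path, codebook):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Emphasize oriented brushstroke texture with a small Gabor filter bank.
    filtered = np.zeros_like(gray, dtype=np.float32)
    for theta in np.arange(0, np.pi, np.pi / 4):
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        filtered = np.maximum(filtered, cv2.filter2D(gray, cv2.CV_32F, kernel))
    # Extract local SIFT descriptors from the filtered image.
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(np.clip(filtered, 0, 255).astype(np.uint8), None)
    # Bag-of-Features: histogram of codebook (visual word) assignments.
    words = codebook.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / hist.sum()

# codebook = KMeans(n_clusters=200).fit(all_training_descriptors)
# The resulting histograms can then be fed to a standard classifier to
# separate images of Amadeo's paintings from works by other artists.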
Abstract:
Introduction: Polymerase chain reaction (PCR) may offer an alternative diagnostic option when clinical signs and symptoms suggest visceral leishmaniasis (VL) but microscopic scanning and serological tests provide negative results. PCR using urine is sensitive enough to diagnose human VL. However, DNA quality is a crucial factor for successful amplification. Methods: A comparative performance evaluation of DNA extraction methods from the urine of patients with VL, using two commercially available extraction kits and two phenol-chloroform protocols, was conducted to determine which method produces the highest-quality DNA suitable for PCR amplification, as well as the most sensitive, fastest and least expensive method. All commercially available kits were able to shorten the duration of DNA extraction. Results: With regard to detection limits, both phenol:chloroform extraction and the QIAamp DNA Mini Kit provided good results (0.1 pg of DNA) for the extraction of DNA from a parasite smaller than Leishmania (Leishmania) infantum (< 100 fg of DNA). However, among 11 urine samples from subjects with VL, better performance was achieved with the phenol:chloroform method (8/11) relative to the QIAamp DNA Mini Kit (4/11), with a greater number of positive samples detected at a lower cost using PCR. Conclusion: Our results demonstrate that phenol:chloroform with an ethanol precipitation prior to extraction is the most efficient method in terms of yield and cost, using urine as a non-invasive source of DNA and providing an alternative diagnostic method at a low cost.
Abstract:
Currently the world is swiftly adapting to visual communication. Online services like YouTube and Vine show that video is no longer the domain of broadcast television alone. Video is used for different purposes such as entertainment, information, education and communication. The rapid growth of today's video archives, with sparsely available editorial data, makes their retrieval a major problem. Humans perceive a video as a complex interplay of cognitive concepts, so there is a need to build a bridge between numeric values and semantic concepts; this establishes a connection that will facilitate video retrieval by humans. The critical aspect of this bridge is video annotation. The process can be done manually or automatically. Manual annotation is very tedious, subjective and expensive; therefore automatic annotation is being actively studied. In this thesis we focus on the automatic annotation of multimedia content, namely the use of information retrieval analysis techniques to automatically extract metadata from video in a videomail system, including the identification of text, people, actions, spaces and objects, including animals and plants. It will thus be possible to align the multimedia content with the text presented in the email message and to create applications for semantic video database indexing and retrieval.
Abstract:
Application of Experimental Design techniques has proven essential in various research fields, due to its statistical capability of processing the effect of interactions among independent variables, known as factors, on a system's response. The advantages of this methodology can be summarized as more resource- and time-efficient experimentation while providing more accurate results. This research emphasizes the quantification of the extraction of four antioxidants, at two different concentrations, prepared according to an experimental procedure and measured by a Photodiode Array Detector. Experimental planning followed a Central Composite Design, a type of DoE that allows the quadratic component in Response Surfaces to be considered, a component that includes pure-curvature studies on the model produced. This work was carried out with the intention of analyzing the responses, peak areas obtained from the chromatograms plotted by the detector's system, and of understanding whether the factors considered – acquired from an extensive literature review – produced the expected effect on the response. Completion of this work will allow conclusions to be drawn regarding which factors should be considered for the optimization studies of antioxidant extraction in an Oca (Oxalis tuberosa) matrix.
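The factors, levels and axial distance used in the study are not given in the abstract; as a generic illustration of a Central Composite Design, the sketch below builds the coded design matrix from a two-level factorial core, axial (star) points at ±alpha, and replicated centre points.

# Generic Central Composite Design sketch (illustrative; the number of
# factors, alpha and the number of centre points are assumptions, not the
# study's actual design).
import itertools
import numpy as np

def central_composite_design(k, alpha=None, center_points=4):
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable design
    # Two-level full factorial core at coded levels -1/+1.
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    # Axial (star) points at +/- alpha on each axis, for the quadratic terms.
    axial = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])
    # Replicated centre points to estimate pure error and curvature.
    center = np.zeros((center_points, k))
    return np.vstack([factorial, axial, center])

design = central_composite_design(k=3)
print(design.shape)   # (2**3 + 2*3 + 4, 3) = (18, 3) coded experimental runs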