893 results for Feature Extraction Algorithms


Relevance:

20.00%

Publisher:

Abstract:

INTRODUCTION: Molecular analyses are auxiliary tools for detecting Koch's bacilli in clinical specimens from patients with suspected tuberculosis (TB). However, there are still no efficient diagnostic tests that combine high sensitivity and specificity and yield rapid results in the detection of TB. This study evaluated single-tube nested polymerase chain reaction (STNPCR) as a molecular diagnostic test with low risk of cross contamination for detecting Mycobacterium tuberculosis in clinical samples. METHODS: Mycobacterium tuberculosis deoxyribonucleic acid (DNA) was detected in blood and urine samples by STNPCR followed by agarose gel electrophoresis. In this system, reaction tubes were not opened between the two stages of PCR (simple and nested). RESULTS: STNPCR demonstrated good accuracy in clinical samples, with no cross contamination between microtubes. Sensitivity in blood and urine, analyzed in parallel, was 35%-62% for pulmonary and 41%-72% for extrapulmonary TB. The specificity of STNPCR was 100% in most analyses, depending on the type of clinical sample (blood or urine) and the clinical form of the disease (pulmonary or extrapulmonary). CONCLUSIONS: STNPCR was effective in detecting TB, especially the extrapulmonary form, for which sensitivity was higher, and had the advantage of less invasive sample collection from patients for whom a spontaneous sputum sample was unavailable. With its low risk of cross contamination, STNPCR can be used as an adjunct to conventional methods for diagnosing TB.


Remote sensing, the acquisition of information about an object or phenomenon without making physical contact with it, is applied in a multitude of areas, including agriculture, forestry, cartography, hydrology, geology, meteorology and air traffic control, among many others. In agriculture, one application of this information is crop detection, which allows existing crops to be monitored easily and supports strategic planning for the region. In all of these areas, there is an ongoing search for better methods that yield better results. For over forty years, the Landsat program has used satellites to collect spectral information from Earth's surface, creating a historical archive unmatched in quality, detail, coverage and length. The most recent satellite was launched on February 11, 2013, with a number of improvements over its predecessors. This project aims to compare classification methods for crop detection in Portugal's Ribatejo region. State-of-the-art algorithms will be applied to this region and their performance analyzed.


World energy consumption is expected to increase strongly in the coming years, driven by the emerging economies. Biomass is the only renewable carbon resource abundant enough to be used as a source of energy. Grape pomace is one of the most abundant agro-industrial residues in the world, making it a good biomass resource. The aim of this work is the valorization of grape pomace from white grapes (WWGP) and from red grapes (RWGP) through the extraction of phenolic compounds with antioxidant activity, as well as through the extraction/hydrolysis of carbohydrates, using subcritical water, or hot compressed water (HCW). The main focus of this work is the optimization of the process for WWGP; for RWGP only one set of parameters was tested. The temperatures used were 170, 190 and 210 °C for WWGP, and 180 °C for RWGP. The water flow rates were 5 and 10 mL/min, and the pressure was always kept at 100 bar. Before performing the HCW assays, both residues were characterized, revealing that WWGP is very rich in free sugars (around 40%), essentially glucose and fructose, while RWGP has higher contents of structural sugars, lignin, lipids and protein. For WWGP the best results were achieved at 210 °C and 10 mL/min: the highest yield of water-soluble compounds (69 wt.%), phenolics extraction (26.2 mg/g) and carbohydrates recovery (49.3 wt.% relative to the existing 57.8%). For RWGP the conditions were not optimized (180 °C and 5 mL/min), and the yield of water-soluble compounds (25 wt.%), phenolics extraction (19.5 mg/g) and carbohydrates recovery (11.4 wt.% relative to the existing 33.5%) were much lower. The antioxidant activity of the HCW extracts from each assay was determined, with the best result obtained for WWGP extracts at 210 °C (EC50 = 20.8 μg/mL, where EC50 is the half-maximal effective concentration; EC50 = 22.1 μg/mL for RWGP at 180 °C).


INTRODUCTION: Before 2004, the occurrence of acute Chagas disease (ACD) by oral transmission associated with food was scarcely known or investigated. Originally sporadic and circumstantial, ACD occurrences have now become frequent in the Amazon region, with related outbreaks recently spreading to several Brazilian states. These cases are associated with the consumption of açai juice contaminated by the waste of reservoir animals or by insect vectors infected with Trypanosoma cruzi in endemic areas. Although guidelines exist for processing the fruit to minimize contamination by microorganisms and parasites, açai-based products must be assessed for quality, which demands appropriate methodologies. METHODS: Dilutions ranging from 5 to 1,000 T. cruzi CL Brener cells were mixed with 2 mL of açai juice. Four methods for extracting T. cruzi DNA from the fruit were tested, and the cetyltrimethylammonium bromide (CTAB) method was selected, according to JRC, 2005. RESULTS: DNA extraction by the CTAB method yielded satisfactory results with regard to purity and concentration for use in PCR. Overall, the methods employed showed that both extraction efficiency and high amplification sensitivity were important. CONCLUSIONS: This method for T. cruzi detection in food is a powerful tool in the epidemiological investigation of outbreaks, as it turns epidemiological evidence into supporting data that confirm T. cruzi infection in foods. It also facilitates food quality control and the assessment of good manufacturing practices for açai-based products.


This thesis focused on the production, extraction and characterization of chitin:β-glucan complex (CGC). In this process, glycerol byproduct from the biodiesel industry was used as carbon source. The yeast Komagataella pastoris (formerly known as Pichia pastoris) was selected as the CGC producer because it achieves high cell densities using glycerol from the biodiesel industry as carbon source. Firstly, a screening of K. pastoris strains was performed in shake flask assays in order to select the strain with the best growth performance on glycerol. K. pastoris strain DSM 70877 achieved higher final cell densities (92-97 g/l, using pure glycerol (99%, w/v) and glycerol from the biodiesel industry (86%, w/v), respectively) than strain DSM 70382 (74-82 g/l). Based on these shake flask results, the wild-type DSM 70877 strain was selected for cultivation in a 2 l bioreactor using glycerol byproduct (40 g/l) as sole carbon source. Biomass production by K. pastoris was performed under controlled temperature and pH (30.0 °C and 5.0, respectively). More than 100 g/l biomass was obtained in less than 48 h. The yield of biomass on a glycerol basis was 0.55 g/g during the batch phase and 0.63 g/g during the fed-batch phase. In order to optimize the downstream process by increasing the extraction and purification efficiency of CGC from K. pastoris biomass, several assays were performed. It was found that extraction with 5 M NaOH at 65 °C for 2 hours, followed by neutralization with HCl and successive washing steps with deionised water until a conductivity of ≤20 μS/cm, increased CGC purity. The obtained copolymer, CGCpure, had a chitin:glucan molar ratio of 25:75 mol%, close to commercial CGC extracted from A. niger mycelium, kiOsmetine from Kitozyme (30:70 mol%).
CGCpure was characterized by solid-state Nuclear Magnetic Resonance (NMR) spectroscopy and Differential Scanning Calorimetry (DSC), revealing a CGC with higher purity than the commercial CGC (kiOsmetine). In order to optimize CGC production, a set of batch cultivation experiments was performed to evaluate the effect of pH (3.5-6.5) and temperature (20-40 °C) on the specific cell growth rate, CGC production and polymer composition, using statistical tools (response surface methodology and central composite design). The CGC content of the biomass and the volumetric productivity (rp) were not significantly affected within the tested pH and temperature ranges. In contrast, the effect of pH and temperature on the CGC molar ratio was more pronounced: the highest chitin:β-glucan molar ratio (>14:86) was obtained for mid-range pH (4.5-5.8) and temperatures (26-33 °C). The ability of K. pastoris to synthesize CGC with different molar ratios as a function of pH and temperature is a feature that can be exploited to obtain tailored polymer compositions. (...)


With the growth of the Internet and the Semantic Web, together with improvements in communication speed and the fast development of storage capacity, the volume of data and information rises considerably every day. Because of this, in the last few years there has been growing interest in structures for formal representation with suitable characteristics, such as the ability to organize data and information, as well as to reuse their contents for the generation of new knowledge. Controlled vocabularies, and specifically ontologies, stand out as representation structures with high potential: they allow not only data representation but also the reuse of such data for knowledge extraction, coupled with subsequent storage through relatively simple formalisms. However, to ensure that ontology knowledge is always up to date, ontologies need maintenance. Ontology Learning is the area that studies the details of updating and maintaining ontologies. The relevant literature already presents first results on automatic maintenance of ontologies, but still at a very early stage; human-based processes are still the current way to update and maintain an ontology, which makes this a cumbersome task. The generation of new knowledge for ontology growth can be based on Data Mining techniques, an area that studies techniques for data processing, pattern discovery and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources using Data Mining techniques, namely pattern discovery, focused on improving the precision of the concepts and semantic relations present in an ontology. To verify the applicability of the proposed method, a proof of concept was developed and its results are presented, applied to the building and construction sector.


In recent years, a set of production paradigms was proposed to enable manufacturers to meet new market requirements, such as the shift in demand towards highly customized products with shorter life cycles, in contrast to traditional mass production of standardized consumables. These new paradigms advocate solutions capable of facing these requirements, empowering manufacturing systems with a high capacity to adapt along with elevated flexibility and robustness in order to deal with disturbances such as unexpected orders or malfunctions. Evolvable Production Systems propose a solution based on modularity and self-organization at a fine granularity level, supporting pluggability and thereby allowing companies to add and/or remove components during execution without any extra re-programming effort. However, current monitoring software was not designed to fully support these characteristics, being commonly based on centralized SCADA systems incapable of re-adapting during execution to the unexpected plugging/unplugging of devices or to changes in the system's topology. Considering these aspects, the work developed for this thesis encompasses a fully distributed agent-based architecture, capable of performing knowledge extraction at different levels of abstraction without sacrificing the capacity to add and/or remove monitoring entities, responsible for data extraction and analysis, during runtime.
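The pluggability requirement described above can be illustrated with a minimal sketch: a registry that monitoring entities join and leave at runtime while aggregation keeps working. The class, names and data below are illustrative assumptions, not the thesis architecture.

```python
class MonitoringRegistry:
    """Minimal sketch of runtime pluggability: monitoring entities
    register and deregister during execution, and aggregation only
    ever sees the currently plugged set."""

    def __init__(self):
        self.agents = {}

    def plug(self, name, extractor):
        # a new monitoring entity appears during execution
        self.agents[name] = extractor

    def unplug(self, name):
        # a device is removed; no re-programming of the rest is needed
        self.agents.pop(name, None)

    def snapshot(self):
        # aggregate data from whatever is plugged right now
        return {name: extract() for name, extract in self.agents.items()}


reg = MonitoringRegistry()
reg.plug("cell-1", lambda: {"throughput": 42})   # illustrative readings
reg.plug("cell-2", lambda: {"throughput": 37})
reg.unplug("cell-2")                             # device removed at runtime
print(reg.snapshot())
```

The point of the sketch is that adding or removing an entity touches only the registry, never the consumers of `snapshot()`.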


The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the paper. The authors would like to thank Dr. Elaine DeBock for reviewing the manuscript.


Traffic Engineering (TE) approaches are increasingly important in network management, allowing optimized configuration and resource allocation. In link-state routing, setting appropriate weights on the links is both an important and a challenging optimization task. A number of different approaches have been put forward towards this aim, including the successful use of Evolutionary Algorithms (EAs). In this context, this work evaluates three distinct EAs, one single-objective and two multi-objective, in two tasks related to weight setting optimization for optimal intra-domain routing, knowing the network topology and aggregated traffic demands and seeking to minimize network congestion. In both tasks, the optimization considers scenarios where there is a dynamic alteration in the state of the system: the first considers changes in the traffic demand matrices, and the latter the possibility of link failures. The methods thus need to simultaneously optimize for both conditions, the normal and the altered one, following a preventive TE approach towards robust configurations. Since this can be formulated as a bi-objective function, the use of multi-objective EAs such as SPEA2 and NSGA-II came naturally, and these were compared to a single-objective EA. The results show a remarkable behavior of NSGA-II in all proposed tasks, scaling well for harder instances and thus presenting itself as the most promising option for TE in these scenarios.
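The bi-objective formulation hinges on Pareto dominance, which NSGA-II and SPEA2 use to rank candidate weight settings. A minimal sketch of that ranking step follows; the congestion values for the normal and altered scenarios are hypothetical, not taken from the paper.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b when it is
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# hypothetical (normal-state congestion, failure-state congestion) costs
# for three candidate link-weight settings, as a multi-objective EA
# would evaluate them
candidates = {"w1": (1.8, 3.5), "w2": (2.0, 2.1), "w3": (2.5, 4.0)}

# the non-dominated front: the robust trade-offs NSGA-II would keep
front = [n for n, c in candidates.items()
         if not any(dominates(o, c) for m, o in candidates.items() if m != n)]
print(front)
```

Here `w3` is dominated by `w1` (worse in both scenarios), while `w1` and `w2` trade normal-state congestion against robustness to failures, so both survive on the front.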


Immune systems have inspired approaches to several computational problems in recent years. This paper focuses on enhancing the accuracy of behavioural biometric authentication algorithms by applying them more than once with different thresholds, first simulating the protection provided by the skin and then looking for known outside entities, as lymphocytes do. The paper describes the principles that support the application of this approach to keystroke dynamics, a biometric authentication technology that decides on the legitimacy of a user based on the typing pattern captured as he enters the username and/or password. As a proof of concept, the accuracy levels of one keystroke dynamics algorithm applied to five legitimate users of a system are calculated for both the traditional and the immune-inspired approaches, and the results are compared.
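The two-threshold idea can be sketched as below. The feature vector (key-hold times), the distance measure, and the threshold values are all hypothetical assumptions for illustration; the paper's actual keystroke dynamics algorithm is not specified here.

```python
def timing_distance(sample, template):
    """Mean absolute difference between a login attempt's key-hold
    times (ms) and the user's stored template vector (hypothetical
    feature representation)."""
    return sum(abs(s - t) for s, t in zip(sample, template)) / len(template)

def immune_authenticate(sample, template, skin_threshold, lymph_threshold):
    """Two-pass check: a permissive 'skin' threshold first rejects
    clearly foreign typing patterns, then a stricter 'lymphocyte'
    threshold inspects whatever got through."""
    d = timing_distance(sample, template)
    if d > skin_threshold:        # first barrier: obvious outsiders
        return False
    return d <= lymph_threshold   # second, finer inspection

template = [110, 95, 130, 87]     # stored hold times in ms (illustrative)
legit    = [112, 97, 128, 90]     # small deviations from the template
intruder = [160, 60, 190, 40]     # very different typing rhythm

print(immune_authenticate(legit, template, 30, 10))
print(immune_authenticate(intruder, template, 30, 10))
```

Running the same classifier twice with different thresholds is what lets the two error rates (false acceptance and false rejection) be tuned somewhat independently, which is the accuracy enhancement the paper measures.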


Master's dissertation in Informatics Engineering


PhD thesis in Bioengineering


Transcriptional Regulatory Networks (TRNs) are a powerful tool for representing several interactions that occur within a cell. Recent studies have provided information to help researchers in the tasks of building and understanding these networks. One of the major sources of information for building TRNs is the biomedical literature. However, due to the rapidly increasing number of scientific papers, it is quite difficult to analyse the large amount of papers published on this subject, which has heightened the importance of Biomedical Text Mining approaches in this task. Also, owing to the lack of adequate standards, as the number of databases increases, inconsistencies concerning gene and protein names and identifiers are common. In this work, we developed an integrated approach for the reconstruction of TRNs that retrieves the relevant information from important biological databases and inserts it into a unique repository named KREN. We also applied text mining techniques over this integrated repository to build TRNs. To do so, it was necessary to create a dictionary of names and synonyms associated with these entities and to develop an approach that retrieves all abstracts of the related scientific papers stored on PubMed, in order to create a corpus of data about genes. Furthermore, these tasks were integrated into @Note, a software system that provides methods from the Biomedical Text Mining field, including algorithms for Named Entity Recognition (NER), extraction of relevant terms from publication abstracts, and extraction of relationships between biological entities (genes, proteins and transcription factors). Finally, this tool was extended to allow the reconstruction of Transcriptional Regulatory Networks from the scientific literature.
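The dictionary of names and synonyms mentioned above is what makes a simple dictionary-based NER pass possible. The sketch below illustrates the idea only; the gene names, identifiers and tokenization are illustrative assumptions, not the @Note implementation.

```python
# hypothetical synonym dictionary mapping surface forms of gene names
# to a single canonical identifier, resolving naming inconsistencies
synonyms = {"lexA": "b4043", "dinF": "b4044", "recA": "b2699", "RecA": "b2699"}

def tag_genes(text, lexicon):
    """Dictionary-based NER pass: report each known gene mention in an
    abstract together with the canonical identifier it maps to."""
    hits = []
    # naive tokenization, enough for the illustration
    for token in text.replace(",", " ").replace(".", " ").split():
        if token in lexicon:
            hits.append((token, lexicon[token]))
    return hits

sentence = "Expression of recA and dinF is repressed by lexA."
print(tag_genes(sentence, synonyms))
```

Mapping every synonym to one canonical identifier is the step that lets mentions from different papers be merged into a single node of the reconstructed network.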


This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm is presented that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf'. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
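As a rough illustration of the repulsion idea (not the authors' implementation), the sketch below minimizes an erf-based penalty merit function with SciPy's Nelder-Mead on a toy one-equation "system". The equation, the multistart points, the tolerances and the `beta` width parameter are all assumptions.

```python
import math
from scipy.optimize import minimize

def F(x):
    """Toy 'system': a single equation x**2 - 1 = 0 with roots at -1 and 1."""
    return x[0] ** 2 - 1.0

def merit(x, found, beta=10.0):
    """Penalty-type merit: the residual magnitude plus an erf-shaped
    repulsion bump around each previously located root, so that N-M
    is pushed away from roots already found."""
    repulsion = sum(1.0 - math.erf(beta * abs(x[0] - r)) for r in found)
    return abs(F(x)) + repulsion

found = []
for x0 in (-2.0, 2.0, 0.5):                        # multistart points
    res = minimize(merit, [x0], args=(found,), method="Nelder-Mead")
    root = res.x[0]
    # accept only genuine, previously unseen roots
    if abs(F(res.x)) < 1e-3 and all(abs(root - r) > 1e-2 for r in found):
        found.append(root)

print(sorted(found))
```

After the first root is located, the erf bump raises the merit value in its neighbourhood, so the next Nelder-Mead run converges to a different root instead of rediscovering the same one.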


Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
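The core of the bootstrap technique can be sketched as follows. The run data, the chosen statistic (median) and the confidence level are illustrative assumptions, not results from the paper; the point is that resampling estimates the sampling distribution of almost any statistic even from a small sample.

```python
import random

def bootstrap_statistic(sample, stat, n_boot=2000, seed=7):
    """Estimate the sampling distribution of an arbitrary statistic by
    resampling the (small) sample with replacement n_boot times."""
    rng = random.Random(seed)
    return [stat([rng.choice(sample) for _ in sample]) for _ in range(n_boot)]

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# hypothetical best objective values from 10 runs of two stochastic
# solvers: A is tightly clustered (precise), B is widely spread
runs_A = [0.91, 0.88, 0.95, 0.90, 0.87, 0.93, 0.89, 0.92, 0.90, 0.94]
runs_B = [0.85, 0.97, 0.80, 0.99, 0.78, 0.96, 0.83, 0.98, 0.81, 0.95]

def ci95(boot):
    """Percentile 95% confidence interval of the bootstrap distribution."""
    s = sorted(boot)
    return s[int(0.025 * len(s))], s[int(0.975 * len(s))]

ci_A = ci95(bootstrap_statistic(runs_A, median))
ci_B = ci95(bootstrap_statistic(runs_B, median))
print("A 95% CI:", ci_A)
print("B 95% CI:", ci_B)
```

The width of each interval exposes the accuracy/precision trade-off that a mean-only performance profile hides: solver B's interval is much wider than solver A's even when their central values are similar.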