59 results for Hair shaft DNA extraction
Abstract:
Type 2 diabetes mellitus (T2DM) is a disease of multiple aetiology with which several genetic factors are associated. The Angiotensin-Converting Enzyme (ACE) has been the subject of several studies owing to its relationship with pro-inflammatory, pro-oxidant and pro-fibrotic factors, with the Insertion/Deletion (I/D) polymorphism being the most studied. In this context, the aim of this study is to determine the distribution of this polymorphism in a sample of individuals of Portuguese nationality and to assess its possible association with T2DM. To this end, 87 samples (controls n = 24 and diabetics n = 63) from individuals of Portuguese nationality were analysed. The samples underwent DNA extraction and were subsequently amplified by Polymerase Chain Reaction and analysed by electrophoresis on a 1% agarose gel. A prevalence of 8% (n = 7) of the I/I genotype, 38% (n = 33) of the I/D genotype and 54% (n = 47) of the D/D genotype was observed. The study sample was thus shown to be in Hardy-Weinberg equilibrium. An association was also observed between higher glycaemia levels and the I/I genotype (p = 0.019). In the analysis of insulin use to control glycaemia in T2DM, a higher proportion of individuals with the D/D genotype was observed. This study demonstrates the importance of investing in the genetic characterisation of multifactorial metabolic diseases such as T2DM.
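The Hardy-Weinberg check mentioned in the abstract can be reproduced from the reported genotype counts alone. A minimal sketch in Python, assuming the standard chi-square goodness-of-fit test with one degree of freedom (critical value 3.84 at the 5% level); the function name is illustrative:

```python
# Hedged sketch: chi-square test for Hardy-Weinberg equilibrium,
# using the genotype counts reported in the abstract (7, 33, 47).
def hardy_weinberg_chi2(n_II, n_ID, n_DD):
    n = n_II + n_ID + n_DD
    p = (2 * n_II + n_ID) / (2 * n)      # frequency of the I allele
    q = 1 - p                            # frequency of the D allele
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_II, n_ID, n_DD)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

chi2 = hardy_weinberg_chi2(7, 33, 47)
# With 1 degree of freedom, chi2 < 3.84 means the sample is
# consistent with Hardy-Weinberg equilibrium at the 5% level.
```

With these counts the statistic is well below 3.84, in agreement with the abstract's conclusion that the sample is in Hardy-Weinberg equilibrium.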
Abstract:
β-lactamases are hydrolytic enzymes that inactivate the β-lactam ring of antibiotics such as penicillins and cephalosporins. Most studies carried out to date have focused mainly on the characterization of β-lactamases recovered from clinical isolates of Gram-positive staphylococci and Gram-negative enterobacteria, among others. However, only a few studies address the detection of β-lactamase carriers in healthy humans, sick animals, or strains isolated from environmental sources such as food, water, or soil. Considering this, we proposed a 10-week laboratory programme for the Biochemistry and Molecular Biology laboratory for majors in the health, environmental, and agronomical sciences. During those weeks, students deal with basic techniques such as DNA extraction, bacterial transformation, polymerase chain reaction (PCR), gel electrophoresis, and the use of several bioinformatics tools. These laboratory exercises are conducted as a mini research project in which each class builds on the previous ones. This curriculum was compared in an experiment involving two groups of students from two different majors: the new curriculum, with classes linked together as a mini research project, was taught to Pharmacy majors, while the old curriculum was taught to Environmental Health students. The results showed that students enrolled in the new curriculum obtained better results in the final exam than students enrolled in the former curriculum. Likewise, these students were found to be more enthusiastic during the laboratory classes than those following the former curriculum.
Abstract:
We describe a novel approach to explore DNA nucleotide sequence data, aiming to produce high-level categorical and structural information about the underlying chromosomes, genomes and species. The article starts by analyzing chromosomal data through histograms of fixed-length DNA sequences. After creating the DNA-related histograms, a correlation between pairs of histograms is computed, producing a global correlation matrix. These data are then used as input to several data processing methods for information extraction and tabular/graphical output generation. A set of 18 species is processed, and the extensive results reveal that the proposed method is able to generate significant and diversified outputs, in good accordance with current scientific knowledge in domains such as genomics and phylogenetics.
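The histogram-and-correlation pipeline described above can be sketched in a few lines of Python. This is a toy illustration, not the paper's implementation: the two short sequences stand in for chromosomes, and Pearson correlation is used as one plausible similarity measure for a single entry of the global correlation matrix.

```python
from collections import Counter
from itertools import product
from math import sqrt

def word_histogram(seq, k):
    """Relative frequencies of every length-k word over the ACGT alphabet."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = len(seq) - k + 1
    return [counts["".join(w)] / total for w in product("ACGT", repeat=k)]

def pearson(x, y):
    """Pearson product-moment correlation between two histograms."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Toy "chromosomes"; in the paper, full chromosomal sequences are used.
h1 = word_histogram("ACGTACGTAACCGGTT", 2)
h2 = word_histogram("ACGTACGTACGTACGT", 2)
r = pearson(h1, h2)   # one entry of the global correlation matrix
```

Repeating the correlation for every pair of histograms yields the global correlation matrix that feeds the downstream processing methods.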
Abstract:
Deoxyribonucleic acid, or DNA, is the most fundamental aspect of life, but present-day scientific knowledge has merely scratched the surface of the problem posed by its decoding. While experimental methods provide insightful clues, the adoption of analysis tools supported by the formalism of mathematics will lead to a systematic and solid build-up of knowledge. This paper studies human DNA from the perspective of system dynamics. By associating entropy and the Fourier transform, several global properties of the code are revealed. The fractional-order characteristics emerge as a natural consequence of the information content. These properties constitute a small piece of scientific knowledge that will support further efforts towards the final aim of establishing a comprehensive theory of the phenomena involved in life.
Abstract:
With electricity market liberalization, distribution and retail companies are looking for better market strategies based on adequate information about the consumption patterns of their electricity customers. In this environment, all consumers are free to choose their electricity supplier. A fair insight into customer behaviour will permit the definition of specific contract aspects based on the different consumption patterns. In this paper, Data Mining (DM) techniques are applied to electricity consumption data from a utility client database. To form the different customer classes and find a set of representative consumption patterns, we used the Two-Step algorithm, a hierarchical clustering algorithm. Each consumer class is represented by the load profile resulting from the clustering operation. Finally, to characterize each consumer class, a classification model is constructed with the C5.0 classification algorithm.
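The clustering step can be sketched generically. The code below is a minimal single-linkage agglomerative clustering, not SPSS's proprietary Two-Step algorithm, applied to hypothetical four-sample load profiles; it only illustrates how hierarchical clustering groups consumers with similar consumption patterns.

```python
# Hedged sketch: generic single-linkage agglomerative clustering of
# load profiles, stopping when k clusters remain. Toy data, not the
# utility database used in the paper.
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerate(profiles, k):
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(euclidean(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]   # merge the two closest clusters
        del clusters[j]
    return clusters

# Toy profiles: two "night-shift" and two "day-shift" consumers.
profiles = [[1, 1, 9, 9], [1, 2, 9, 8], [9, 9, 1, 1], [8, 9, 2, 1]]
groups = agglomerate(profiles, 2)
```

Each resulting group's mean profile would then serve as the class load profile, and a classifier such as C5.0 can be trained on the labelled consumers.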
Abstract:
This paper analyzes DNA information using entropy and phase plane concepts. First, the DNA code is converted into a numerical format by means of histograms that capture DNA sequences of length ranging from one up to ten bases. This strategy measures dynamical evolutions from 4^1 up to 4^10 signal states. The resulting histograms are analyzed using three distinct entropy formulations, namely the Shannon, Rényi and Tsallis definitions. Charts of entropy versus sequence length are applied to a set of twenty-four species, characterizing 486 chromosomes. The information is synthesized and visualized by adapting phase plane concepts, leading to a categorical representation of chromosomes and species.
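The three entropy formulations named above can be written down directly for a tuple-frequency histogram p. This is a textbook sketch, assuming the natural logarithm; both generalized forms recover the Shannon entropy in the limit alpha, q -> 1 (so the code requires alpha, q != 1).

```python
from math import log

def shannon(p):
    """Shannon entropy of a probability histogram."""
    return -sum(x * log(x) for x in p if x > 0)

def renyi(p, alpha):
    """Rényi entropy of order alpha (alpha != 1)."""
    return log(sum(x ** alpha for x in p if x > 0)) / (1 - alpha)

def tsallis(p, q):
    """Tsallis entropy of order q (q != 1)."""
    return (1 - sum(x ** q for x in p if x > 0)) / (q - 1)

# Uniform 1-base histogram: all three reach their extremal values;
# Shannon and every Rényi order give log(4) for the uniform case.
uniform = [0.25] * 4
```

Evaluating these over histograms of tuple length 1 to 10 yields the entropy-versus-sequence-length charts described in the abstract.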
Abstract:
This paper addresses DNA code analysis from the perspective of dynamics and fractional calculus. Several mathematical tools are selected to establish a quantitative method without distorting the alphabet represented by the sequence of DNA bases. The association of the Gray code, the Fourier transform and fractional calculus leads to a categorical representation of species and chromosomes.
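The role of the Gray code here is to map the four-letter alphabet onto numbers without distorting adjacency. A minimal sketch with one plausible 2-bit assignment (the abstract does not fix a specific base-to-code mapping, so the dictionary below is an assumption): consecutive codes in the cycle A -> C -> G -> T differ by exactly one bit.

```python
# Hedged sketch: a 2-bit Gray-code assignment for the four DNA bases.
# The specific mapping is illustrative, not taken from the paper.
GRAY = {"A": 0b00, "C": 0b01, "G": 0b11, "T": 0b10}

def gray_encode(seq):
    """Map a DNA string to its Gray-coded integer sequence."""
    return [GRAY[b] for b in seq]

codes = gray_encode("ACGT")          # [0, 1, 3, 2]
# Each step along the cycle flips a single bit, so the numerical
# signal does not introduce spurious jumps between adjacent symbols.
one_bit_steps = all(bin(a ^ b).count("1") == 1
                    for a, b in zip(codes, codes[1:]))
```

The resulting integer signal can then be fed to the Fourier transform and fractional-calculus tools mentioned in the abstract.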
Abstract:
This paper studies human DNA from the perspective of signal processing. Six wavelets are tested for analyzing the information content of the human DNA. By adopting the real Shannon wavelet, several fundamental properties of the code are revealed. A quantitative comparison of the chromosomes, and their visualization through multidimensional scaling and dendrograms, is developed.
Abstract:
This paper studies the DNA code of eleven mammals from the perspective of fractional dynamics. The application of the Fourier transform and power-law trendlines leads to a categorical representation of species and chromosomes. The DNA information reveals long-range memory characteristics.
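Fitting a power-law trendline to a Fourier power spectrum amounts to a least-squares fit in log-log coordinates, where the slope is the power-law exponent. A minimal sketch with a synthetic 1/f spectrum (exponent -1, the classic signature of long-range memory); the data here are illustrative, not the paper's:

```python
from math import log

def power_law_exponent(freqs, power):
    """Least-squares slope of log(power) vs log(freq), i.e. the
    exponent alpha of a trendline P(f) ~ f**alpha."""
    xs = [log(f) for f in freqs]
    ys = [log(p) for p in power]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic spectrum with known exponent -1 (pure 1/f noise).
freqs = [k / 100 for k in range(1, 101)]
power = [1 / f for f in freqs]
alpha = power_law_exponent(freqs, power)
```

An exponent near -1 indicates 1/f-type behaviour, i.e. the long-range memory characteristics the abstract reports.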
Abstract:
This paper aims to study the relationships between the chromosomal DNA sequences of twenty species. We propose a methodology combining DNA-based word frequency histograms, correlation methods, and a multidimensional scaling (MDS) technique to visualize structural information underlying chromosomes (CRs) and species. Four statistical measures are tested (Minkowski, Cosine, Pearson product-moment, and Kendall τ rank correlations) to analyze the information content of 421 nuclear CRs from twenty species. The proposed methodology is built on mathematical tools and allows the analysis and visualization of very large amounts of stream data, like DNA sequences, with almost no assumptions other than the predefined DNA "word length." This methodology is able to produce comprehensible three-dimensional visualizations of CR clustering and related spatial and structural patterns. The results of the four test correlation scenarios show that the high-level information clusterings produced by the MDS tool are qualitatively similar, with small variations due to the characteristics of each correlation method, and that the clusterings are a consequence of the input data and not artifacts of the method.
Abstract:
Proteins are biochemical entities consisting of one or more blocks typically folded in a 3D pattern. Each block (a polypeptide) is a single linear sequence of amino acids that are biochemically bonded together. The amino acid sequence in a protein is defined by the sequence of a gene or several genes encoded in the DNA-based genetic code. This genetic code typically uses twenty amino acids, but in certain organisms it can also include two other amino acids. After linking the amino acids during protein synthesis, each amino acid becomes a residue in the protein, which is then chemically modified, ultimately changing and defining the protein function. In this study, the authors analyze the amino acid sequence using alignment-free methods, aiming to identify structural patterns in sets of proteins and in the proteome, without any other prior assumptions. The paper starts by analyzing amino acid sequence data by means of histograms of fixed-length amino acid words (tuples). After creating the initial relative frequency histograms, they are transformed and processed in order to generate quantitative results for information extraction and graphical visualization. Selected samples from two reference datasets are used, and the results reveal that the proposed method is able to generate relevant outputs in accordance with current scientific knowledge in domains like protein sequence/proteome analysis.
Abstract:
In this work, a microwave-assisted extraction (MAE) methodology was compared with several conventional extraction methods (Soxhlet, Bligh & Dyer, modified Bligh & Dyer, Folch, modified Folch, Hara & Radin, Roese-Gottlieb) for quantification of the total lipid content of three fish species: horse mackerel (Trachurus trachurus), chub mackerel (Scomber japonicus), and sardine (Sardina pilchardus). The influence of species, extraction method and frozen storage time (varying from fresh to 9 months of freezing) on total lipid content was analysed in detail. The efficiencies of the MAE, Bligh & Dyer, Folch, modified Folch and Hara & Radin methods were the highest and, although they were not statistically different, differences existed in terms of variability, with MAE showing the highest repeatability (CV = 0.034). The Roese-Gottlieb, Soxhlet, and modified Bligh & Dyer methods performed very poorly in terms of both efficiency and repeatability (CV between 0.13 and 0.18).
Abstract:
This paper reports a novel application of microwave-assisted extraction (MAE) of polyphenols from brewer's spent grains (BSG). A 2^4 orthogonal composite design was used to obtain the optimal conditions of MAE. The influence of the MAE operational parameters (extraction time, temperature, solvent volume and stirring speed) on the extraction yield of ferulic acid was investigated through response surface methodology. The results showed that the optimal conditions were 15 min extraction time, 100 °C extraction temperature, 20 mL of solvent, and maximum stirring speed. Under these conditions, the yield of ferulic acid was 1.31±0.04% (w/w), fivefold higher than that obtained with conventional solid-liquid extraction techniques. The new extraction method considerably reduces extraction time, energy and solvent consumption, while generating less waste. HPLC-DAD-MS analysis indicated that other hydroxycinnamic acids and several ferulic acid dehydrodimers, as well as one dehydrotrimer, were also present, confirming that BSG is a valuable source of antioxidant compounds.
Abstract:
This paper presents a study of the remediation of sandy soils containing six of the most common contaminants (benzene, toluene, ethylbenzene, xylene, trichloroethylene and perchloroethylene) using soil vapour extraction (SVE). The influence of soil water content on process efficiency was evaluated considering the soil type and the contaminant. For artificially contaminated soils with negligible clay and natural organic matter contents, it was concluded that: (i) all the remediation processes presented efficiencies above 92%; (ii) an increase of the soil water content led to a more time-consuming remediation; (iii) longer remediation periods were observed for contaminants with lower vapour pressures and lower water solubilities, due to mass transfer limitations. Based on these results, an easy and relatively fast procedure was developed for predicting the remediation times of real soils; 83% of the remediation times were predicted with relative deviations below 14%.
Abstract:
Soil vapor extraction (SVE) is an efficient, well-known and widely applied soil remediation technology. However, under certain conditions it cannot achieve the defined cleanup goals, requiring further treatment, for example through bioremediation (BR). The sequential application of these technologies is a valid option but has not yet been fully studied. This work presents a study of the remediation of ethylbenzene (EB)-contaminated soils, with different soil water and natural organic matter contents (NOMC), using sequential SVE and BR. The results obtained allow the following conclusions: (1) SVE was sufficient to reach the cleanup goals in 63% of the experiments (all soils with NOMC below 4%); (2) higher NOMCs led to longer SVE remediation times; (3) BR proved to be a feasible and cost-effective option when EB concentrations were lower than 335 mg per kg of soil; and (4) EB concentrations above 438 mg per kg of soil were shown to inhibit microbial activity.