922 results for High-Throughput Nucleotide Sequencing
Abstract:
Decimal multiplication is an integral part of financial, commercial, and internet-based computation. This research proposes a novel design for single-digit decimal multiplication that reduces the critical path delay and area of an iterative multiplier. The partial products are generated using single-digit multipliers and accumulated with a novel RPS algorithm. The design uses n single-digit multipliers for an n × n multiplication. The latency for the multiplication of two n-digit Binary Coded Decimal (BCD) operands is (n + 1) cycles, and a new multiplication can begin every n cycles. The accumulation of the final partial products and the first iteration of partial product generation for the next set of inputs are done simultaneously. This iterative decimal multiplier offers low latency and high throughput, and can be extended to decimal floating-point multiplication.
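The digit-serial scheme described above can be sketched in a few lines of Python. This is a minimal reference model only: one partial product (multiplicand times one multiplier digit) is generated per iteration, mirroring the n-cycle loop, but the partial products are accumulated with plain integer arithmetic rather than the paper's RPS algorithm, which the abstract does not specify.

```python
def bcd_mul(a_digits, b_digits):
    """Iteratively multiply two BCD operands given as lists of decimal
    digits, most significant digit first.

    One partial product is generated per iteration, as in the n-cycle
    iterative scheme; accumulation here is ordinary integer addition,
    standing in for the RPS accumulation step."""
    a = int("".join(map(str, a_digits)))
    acc = 0
    for i, d in enumerate(reversed(b_digits)):  # one multiplier digit per cycle
        partial = a * d                         # single-digit multiplication
        acc += partial * 10**i                  # shifted accumulation
    return [int(c) for c in str(acc)]           # product back as BCD digits
```

For example, `bcd_mul([1, 2, 3], [4, 5, 6])` walks through three partial products (123×6, 123×5, 123×4) before returning the digits of 56088.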
Abstract:
This report demonstrates a UV-embossed polymeric chip for protein separation and identification by Capillary Isoelectric Focusing (CIEF) and Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry (MALDI-MS). The polymeric chip was fabricated by a high-throughput UV-embossing technique, and the issues arising in fabrication are addressed. To achieve high sensitivity of mass detection, five different types of UV-curable polymer were used as sample supports for protein ionization in mass spectrometry (MS); the best of these is compared with PMMA, the plastic chip material commonly used for biomolecular separation. Experimental results show that the signal from polyester is 12 times better than that from PMMA in terms of detection sensitivity. Finally, the polyester chip is used to carry out CIEF to separate proteins, followed by MS identification.
Abstract:
Fueled by ever-growing genomic information and rapid developments in proteomics, the large-scale analysis of proteins and the mapping of their functional roles has become one of the most important disciplines for characterizing complex cell function. To build functional linkages between biomolecules, and to provide insight into the mechanisms of biological processes, the last decade witnessed the exploration of combinatorial and chip technology for the detection of biomolecules in a high-throughput and spatially addressable fashion. Among the various techniques developed, protein chip technology has advanced rapidly. Recently we demonstrated a new platform called the “spatially addressable protein array” (SAPA) to profile ligand-receptor interactions. To optimize the platform, the present study investigated parameters such as surface chemistry and the role of additives in achieving high-density, high-throughput detection with minimal nonspecific protein adsorption. In summary, this poster will address some of the critical challenges in protein microarray technology and the process of fine-tuning required to achieve an optimal system for solving real biological problems.
Abstract:
While protein microarray technology has demonstrated its usefulness for large-scale high-throughput proteome profiling, the performance of antibody/antigen microarrays has been only moderately productive. Immobilization of either the capture antibodies or the protein samples on solid supports has severe drawbacks: denaturation of the immobilized proteins, as well as inconsistent orientation of antibodies/ligands on the arrays, can lead to erroneous results. This has prompted a number of studies to address these challenges by immobilizing proteins on biocompatible surfaces, which has met with limited success. Our strategy relates to a multiplexed, sensitive, and high-throughput method for the screening and quantification of intracellular signaling proteins from a complex mixture of proteins. Each signaling protein to be monitored has its capture moiety linked to a specific oligo 'tag'. The array involves oligonucleotide hybridization-directed localization and identification of different signaling proteins simultaneously, in a rapid and easy manner. Antibodies have been used as the capture moieties for specific identification of each signaling protein. The method involves covalently partnering each antibody/protein molecule with a unique DNA or DNA-derivative oligonucleotide tag that directs the antibody to a unique site on the microarray through specific hybridization with a complementary tag-probe on the array. Particular surface modifications and optimal conditions allowed a high signal-to-noise ratio, which is essential to the success of this approach.
Abstract:
Experimental and epidemiological studies demonstrate that fetal growth restriction and low birth weight enhance the risk of chronic diseases in adulthood. Derangements in tissue-specific epigenetic programming of fetal and placental tissues are a suggested mechanism, of which DNA methylation is best understood. DNA methylation profiling in human tissue is mostly performed on DNA from white blood cells. The objective of this study was to assess DNA methylation profiles of the IGF2 DMR and H19 in DNA derived from four tissues of the newborn. From 6 newborns we obtained DNA from fetal placental tissue (n = 5), umbilical cord CD34+ hematopoietic stem cells (HSC) and CD34- mononuclear cells (MNC) (n = 6), and umbilical cord Wharton's jelly (n = 5). HSC were isolated using magnetic-activated cell separation. DNA methylation of the imprinted fetal growth genes IGF2 DMR and H19 was measured in all tissues using quantitative mass spectrometry. ANOVA testing showed tissue-specific differences in DNA methylation of the IGF2 DMR (p = 0.002) and H19 (p = 0.001), mainly due to higher methylation of the IGF2 DMR in Wharton's jelly (mean 0.65, SD 0.14) and lower methylation of H19 in placental tissue (mean 0.25, SD 0.02) compared with the other tissues. This study demonstrates the feasibility of assessing differential tissue-specific DNA methylation. Although the results must be confirmed in larger samples, our approach opens opportunities to investigate epigenetic profiles as an underlying mechanism of associations between pregnancy exposures, outcomes, and disease risk in later life.
Abstract:
Glucose uptake and its conversion to lactate play a fundamental role in tumor metabolism, regardless of the oxygen concentration present in the tissue (the Warburg effect). However, this uptake varies from one tumor type to another, and within the same tumor, a situation that may depend on tumor microenvironmental characteristics (oxygen fluctuations, the presence of other cell types) and on stress factors associated with treatments. We studied the effect of hypoxia-reoxygenation (HR) and ionizing radiation (IR) on glucose uptake in cultures of the tumor cell lines MCF-7 and HT-29, grown in isolation or in coculture with the EAhy296 cell line. We found that glucose uptake under HR differs from that described under permanent hypoxia and that it is modified in coculture. Cell populations with high and low glucose uptake were identified within the same cell line, which would imply a metabolic symbiosis of the cell as an adaptive response to tumor conditions. The expression of NRF2 and the nuclear translocation of NRF2 and HIF1a were evaluated as response pathways to cellular stress and hypoxia. The nuclear translocation of the evaluated proteins would explain the metabolic behavior of breast tumor cells, but not of colon tumor cells, so other metabolic pathways must be involved. The differences in the behavior of tumor cells under HR compared with hypoxia will allow more dynamic dosimetric planning that constantly reevaluates tumor oxygenation conditions.
Abstract:
Tuna species of the genus Thunnus, such as the bluefin tunas, are some of the most important and yet most endangered trade fish in the world. Identification of these species in traded forms, however, may be difficult depending on the presentation of the products, which can hamper conservation efforts on trade control. In this paper, we validated a genetic methodology that can fully distinguish among the eight Thunnus species in any kind of processed tissue. Methodology: After testing several genetic markers, complete discrimination of the eight tuna species was achieved using Forensically Informative Nucleotide Sequencing, based primarily on the sequence variability of the hypervariable mitochondrial DNA control region (mtDNA CR), followed, in some specific cases, by a second validation with a nuclear marker, the rDNA first internal transcribed spacer (ITS1). This methodology was able to distinguish all tuna species, including those belonging to the subgenus Neothunnus, which are very closely related and consequently cannot be differentiated with less variable genetic markers. The methodology also took into consideration the introgression reported in past studies between T. thynnus, T. orientalis and T. alalunga. Finally, we applied the methodology to cross-check the species identity of 26 processed tuna samples. Conclusions: The combination of two genetic markers, one mitochondrial and one nuclear, allows full discrimination among all eight tuna species. Unexpectedly, the genetic marker traditionally used for DNA barcoding, cytochrome oxidase 1, could not differentiate all the species, so its use as a genetic marker for tuna species identification is questionable.
Abstract:
The presence of pathogenic microorganisms in food is one of the essential problems in public health, and the diseases they cause are among the most important causes of illness. The application of microbiological controls within quality assurance programs is therefore a prerequisite for minimizing consumers' risk of infection. Classical microbiological methods generally require non-selective pre-enrichment, selective enrichment, isolation on selective media, and subsequent confirmation using tests based on the morphology, biochemistry, and serology of each target microorganism. These methods are therefore laborious, require a long process to obtain definitive results, and cannot always be performed. To overcome these drawbacks, various alternative methodologies have been developed for the detection, identification, and quantification of foodborne pathogenic microorganisms, most notably immunological and molecular methods. In the latter category, the technique based on the polymerase chain reaction (PCR) has become the most popular diagnostic technique in microbiology, and recently the introduction of an improvement on it, real-time PCR, has produced a second revolution in molecular diagnostic methodology, as can be seen from the growing number of scientific publications and the continual appearance of new commercial kits. Real-time PCR is a highly sensitive technique (detecting down to a single molecule) that allows exact quantification of DNA sequences specific to foodborne pathogenic microorganisms.
In addition, other advantages that favor its potential adoption in food analysis laboratories are its speed, its simplicity, and its closed-tube format, which can prevent post-PCR contamination and favors automation and high throughput. In this work, sensitive and reliable molecular techniques (PCR and NASBA) were developed for the detection, identification, and quantification of foodborne pathogenic bacteria (Listeria spp., Mycobacterium avium subsp. paratuberculosis, and Salmonella spp.). Specifically, methods based on real-time PCR were designed and optimized for each of these agents: L. monocytogenes, L. innocua, Listeria spp., and M. avium subsp. paratuberculosis; in addition, a previously developed method for Salmonella spp. was optimized and evaluated in different centers. A method based on the NASBA technique was also designed and optimized for the specific detection of M. avium subsp. paratuberculosis, and the potential application of NASBA for the specific detection of viable forms of this microorganism was evaluated. All the methods showed 100% specificity, with sensitivity adequate for potential application to real food samples. Sample preparation procedures were also developed and evaluated for meat products, fishery products, milk, and water. In this way, fully specific and highly sensitive real-time PCR methods were developed for the quantitative determination of L. monocytogenes in meat products and in salmon and derived products such as smoked salmon, and of M. avium subsp. paratuberculosis in water and milk samples. The latter method has also been applied to assess the presence of this microorganism in the intestine of patients with Crohn's disease, using colonoscopy biopsies obtained from affected volunteers.
In conclusion, this study presents selective and sensitive molecular assays for the detection of foodborne pathogens (Listeria spp., Mycobacterium avium subsp. paratuberculosis) and for the rapid and unambiguous identification of Salmonella spp. The relative accuracy of the assays has been excellent compared with the reference microbiological methods, and they can be used for the quantification of both genomic DNA and cell suspensions. Moreover, combining them with pre-amplification treatments proved highly efficient for the analysis of the target bacteria. They may therefore constitute a useful strategy for the rapid and sensitive detection of pathogens in food, and should be an additional tool in the range of diagnostic tools available for the study of foodborne pathogens.
Abstract:
Visual exploration of scientific data in the life sciences is a growing research field due to the large amount of available data. The Kohonen Self-Organizing Map (SOM) is a widely used tool for visualization of multidimensional data. In this paper we present a fast learning algorithm for SOMs that uses a simulated annealing method to adapt the learning parameters. The algorithm has been adopted in a data analysis framework for the generation of similarity maps. Such maps provide an effective tool for the visual exploration of large, multidimensional input spaces. The approach has been applied to data generated during high-throughput screening of molecular compounds; the generated maps allow visual exploration of molecules with similar topological properties. Experimental analysis on real-world data from the National Cancer Institute shows the speed-up of the proposed SOM training process compared with a traditional approach. The resulting visual landscape groups molecules with similar chemical properties into densely connected regions.
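The core idea, letting an annealing-style temperature drive the SOM's learning rate and neighbourhood radius, can be sketched compactly. The exponential cooling schedule, grid size, and parameter values below are illustrative assumptions; the paper's actual annealing method is not specified in the abstract.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, t0=1.0, cooling=0.9, seed=0):
    """Train a small SOM whose learning rate and neighbourhood radius
    decay on an exponential (annealing-style) temperature schedule."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    temp = t0
    for _ in range(epochs):
        lr = 0.5 * temp                    # learning rate tied to temperature
        radius = max(1.0, (h / 2) * temp)  # shrinking neighbourhood
        for x in data:
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Gaussian neighbourhood around the BMU on the map grid.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
            influence = np.exp(-grid_d2 / (2 * radius**2))
            weights += lr * influence[..., None] * (x - weights)
        temp *= cooling                    # anneal the learning parameters
    return weights
```

Fast cooling shortens training at the risk of a coarser map, which is exactly the trade-off an adaptive schedule tries to manage.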
Abstract:
The authors present a systolic design for a simple GA mechanism that provides high throughput and unidirectional pipelining by exploiting the inherent parallelism in the genetic operators. The design computes in O(N + G) time steps using O(N^2) cells, where N is the population size and G is the chromosome length. The area of the device is independent of the chromosome length and so can easily be scaled by replicating the arrays or by employing fine-grain migration. The array is generic in the sense that it does not rely on the fitness function and can be used as an accelerator for any GA application using uniform crossover between pairs of chromosomes. The design can also be used in hybrid systems as an add-on to complement existing designs and methods for fitness function acceleration and island-style population management.
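The per-pair operation that the systolic cells pipeline is uniform crossover: each gene is exchanged between the two parents independently with some probability. A minimal software sketch of that operator (the hardware cell structure and dataflow are not reproduced here, and the 0.5 swap probability is the conventional default rather than a value from the paper):

```python
import random

def uniform_crossover(parent_a, parent_b, p_swap=0.5, rng=None):
    """Uniform crossover between a pair of chromosomes: each gene is
    independently swapped with probability p_swap."""
    rng = rng or random.Random()
    child_a, child_b = list(parent_a), list(parent_b)
    for i in range(len(child_a)):
        if rng.random() < p_swap:          # per-locus coin flip
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b
```

Because every locus is processed independently, the operator maps naturally onto a row of identical cells, one per population slot, which is the parallelism the array exploits.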
Abstract:
The development of high-throughput techniques ('chip' technology) for measuring gene expression and gene polymorphisms (genomics), together with techniques for measuring global protein expression (proteomics) and metabolite profiles (metabolomics), is revolutionising life science research, including research in human nutrition. In particular, the ability to undertake large-scale genotyping and to identify gene polymorphisms that determine the risk of chronic disease (candidate genes) could enable definition of an individual's risk at an early age. However, the search for candidate genes has proven more complex, and their identification more elusive, than previously thought. This is largely because much of the variability in risk results from interactions between the genome and environmental exposures. Whilst the former is now very well defined via the Human Genome Project, the latter (e.g. diet, toxins, physical activity) are poorly characterised, resulting in an inability to account for their confounding effects in most large-scale candidate gene studies. The polygenic nature of most chronic diseases adds further complexity, requiring very large studies to disentangle the relatively weak impacts of large numbers of potential 'risk' genes. The efficacy of diet as a preventative strategy could also be considerably increased by better information concerning the gene polymorphisms that determine variability in responsiveness to specific diet and nutrient changes. Much of the limited available data are based on retrospective genotyping using stored samples from previously conducted intervention trials. Prospective studies are now needed to provide data that can be used as the basis for individualised dietary advice and for the development of food products that optimise disease prevention. 
Application of the new technologies in nutrition research offers considerable potential for development of new knowledge and could greatly advance the role of diet as a preventative disease strategy in the 21st century. Given the potential economic and social benefits offered, funding for research in this area needs greater recognition, and a stronger strategic focus, than is presently the case. Application of genomics in human health offers considerable ethical and societal as well as scientific challenges. Economic determinants of health care provision are more likely to resolve such issues than scientific developments or altruistic concerns for human health.
Abstract:
Uncertainties associated with the representation of various physical processes in global climate models (GCMs) mean that, when projections from GCMs are used in climate change impact studies, the uncertainty propagates through to the impact estimates. A complete treatment of this ‘climate model structural uncertainty’ is necessary so that decision-makers are presented with an uncertainty range around the impact estimates. This uncertainty is often underexplored owing to the human and computer processing time required to perform the numerous simulations. Here, we present a 189-member ensemble of global river runoff and water resource stress simulations that adequately address this uncertainty. Following several adaptations and modifications, the ensemble creation time has been reduced from 750 h on a typical single-processor personal computer to 9 h of high-throughput computing on the University of Reading Campus Grid. Here, we outline the changes that had to be made to the hydrological impacts model and to the Campus Grid, and present the main results. We show that, although there is considerable uncertainty in both the magnitude and the sign of regional runoff changes across different GCMs with climate change, there is much less uncertainty in runoff changes for regions that experience large runoff increases (e.g. the high northern latitudes and Central Asia) and large runoff decreases (e.g. the Mediterranean). Furthermore, there is consensus that the percentage of the global population at risk to water resource stress will increase with climate change.
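The 189 ensemble members are mutually independent, which is what makes the problem a good fit for high-throughput computing: each GCM-driven simulation can run on a separate machine. A minimal Python sketch of that task-farm pattern follows; the `run_member` body is a hypothetical stand-in, since the actual hydrological impacts model and the Campus Grid middleware are not reproduced here.

```python
from concurrent.futures import ThreadPoolExecutor

def run_member(member_id):
    """Stand-in for one ensemble member's runoff simulation; the real
    hydrological model is of course far more expensive.  The returned
    'runoff_change' value is a placeholder, not model output."""
    return {"member": member_id, "runoff_change": (member_id % 7) - 3}

def run_ensemble(n_members=189, workers=8):
    """Farm the independent ensemble members out to a worker pool, the
    same embarrassingly parallel pattern a campus grid exploits."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_member, range(n_members)))
```

With the reported 750 h of serial computation reduced to 9 h on the grid, the speed-up is roughly 750/9 ≈ 83×.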
Abstract:
We present a highly parallel design for a simple genetic algorithm using a pipeline of systolic arrays. The systolic design provides high throughput and unidirectional pipelining by exploiting the implicit parallelism in the genetic operators. The design is significant because, unlike other hardware genetic algorithms, it is independent of both the fitness function and the particular chromosome length used in a problem. We have designed and simulated a version of the mutation array using Xilinx FPGA tools to investigate the feasibility of hardware implementation. A simple 5-chromosome mutation array occupies 195 CLBs and is capable of performing more than one million mutations per second. I. Introduction: Genetic algorithms (GAs) are established search and optimization techniques which have been applied to a range of engineering and applied problems with considerable success [1]. They operate by maintaining a population of trial solutions encoded using a suitable encoding scheme.
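The operation performed by the mutation array is per-bit mutation over a binary chromosome. A software reference for that operator, against which a hardware simulation could be checked, is only a few lines; the mutation rate below is an illustrative parameter, and the FPGA cell structure itself is not modelled.

```python
import random

def mutate(chromosome, rate=0.01, rng=None):
    """Bit-flip mutation over a binary chromosome: each bit flips
    independently with probability `rate`.  This is the per-bit
    operation the systolic mutation array performs in hardware."""
    rng = rng or random.Random()
    return [bit ^ 1 if rng.random() < rate else bit for bit in chromosome]
```

Because each bit is handled independently, the operator needs no inter-bit communication, which is why it pipelines cleanly through an array of identical cells.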
Abstract:
BACKGROUND: The serum peptidome may be a valuable source of diagnostic cancer biomarkers. Previous mass spectrometry (MS) studies have suggested that groups of related peptides discriminatory for different cancer types are generated ex vivo from abundant serum proteins by tumor-specific exopeptidases. We tested 2 complementary serum profiling strategies to see whether similar peptides could be found that discriminate ovarian cancer from benign cases and healthy controls. METHODS: We subjected identically collected and processed serum samples from healthy volunteers and patients to automated polypeptide extraction on octadecylsilane-coated magnetic beads and separately on ZipTips before MALDI-TOF MS profiling at 2 centers. The 2 platforms were compared, and the case-control profiling data were analyzed to find altered MS peak intensities. For both methods, we tested models built from training datasets for their ability to classify a blinded test set. RESULTS: Both profiling platforms had CVs of approximately 15% and could be applied for high-throughput analysis of clinical samples. The 2 methods generated overlapping peptide profiles, with some differences in peak intensity in different mass regions. In cross-validation, models from the training data gave diagnostic accuracies up to 87% for discriminating malignant ovarian cancer from healthy controls and up to 81% for discriminating malignant from benign samples. Diagnostic accuracies up to 71% (malignant vs healthy) and up to 65% (malignant vs benign) were obtained when the models were validated on the blinded test set. CONCLUSIONS: For ovarian cancer, altered MALDI-TOF MS peptide profiles alone cannot be used for accurate diagnosis.
Abstract:
Real-time PCR protocols were developed to detect and discriminate 11 anastomosis groups (AGs) of Rhizoctonia solani using ribosomal internal transcribed spacer (ITS) region (AG-1-IA, AG-1-IC, AG-2-1, AG-2-2, AG-4HGI+II, AG-4HGIII, AG-8) or beta-tubulin (AG-3, AG-4HGII, AG-5 and AG-9) sequences. All real-time assays were target-group specific, except the AG-2-2 assay, which showed a weak cross-reaction with AG-2tabac. In addition, methods were developed for the high-throughput extraction of DNA from soil and compost samples. The DNA extraction method was used with the AG-2-1 assay and shown to be quantitative, with a detection threshold of 10⁻⁷ g of R. solani per g of soil. A similar DNA extraction efficiency was observed for samples from three contrasting soil types. The developed methods were then used to investigate the spatial distribution of R. solani AG-2-1 in field soils. Soil from shallow depths of a field planted with Brassica oleracea tested positive for R. solani AG-2-1 more frequently than soil collected from greater depths. Quantification of R. solani inoculum in field samples proved challenging due to the low levels of inoculum in naturally occurring soils. The potential uses of the real-time PCR and DNA extraction protocols to investigate the epidemiology of R. solani are discussed.
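Quantitative real-time PCR of the kind described above typically estimates starting DNA quantity by fitting a standard curve relating the threshold cycle (Ct) to the log of known calibration quantities, then inverting it for unknown samples. A minimal sketch follows; the curve parameters are idealised (a slope near -3.32 corresponds to perfect doubling per cycle) and are assumptions, not the paper's calibration for R. solani AG-2-1.

```python
import math

def fit_standard_curve(quantities, cts):
    """Least-squares fit of Ct = slope * log10(quantity) + intercept
    from calibration standards of known DNA quantity."""
    xs = [math.log10(q) for q in quantities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, cts))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def quantify(ct, slope, intercept):
    """Invert the standard curve to estimate starting quantity
    (e.g. g of target DNA per g of soil) from a sample's Ct."""
    return 10 ** ((ct - intercept) / slope)
```

A sample whose Ct falls beyond the Ct of the most dilute standard lies below the detection threshold, which is how a limit such as 10⁻⁷ g per g of soil is established in practice.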