970 results for Computational methods
Abstract:
The term proteome defines the complete set of proteins expressed in the cells or tissues of an organism at a certain time point, and proteomics, correspondingly, describes the methods used to study such proteomes. These methods include chromatographic and electrophoretic techniques for protein or peptide fractionation, mass spectrometry for their identification, and computational methods to assist the complex data analysis. A primary aim of this Ph.D. thesis was to set up, optimize, and develop proteomics methods for analysing proteins extracted from T-helper (Th) lymphocytes. First, high-throughput LC-MS/MS and ICAT labeling methods were set up and optimized for analysing the microsomal fraction proteins extracted from Th lymphocytes. Later, the iTRAQ method was optimized to study cytokine-regulated protein expression in the nuclei of Th lymphocytes. High-throughput LC-MS/MS analyses, such as ICAT and iTRAQ, produce large quantities of data, so robust software and data analysis pipelines are needed. Therefore, different software programs used for analysing such data were evaluated. Moreover, a pre-filtering algorithm was developed to classify good-quality and bad-quality spectra prior to the database searches. Th lymphocytes can differentiate into Th1 or Th2 cells based on surrounding antigens, co-stimulatory molecules, and cytokines. Both subsets have individual cytokine secretion profiles and specific functions. Th1 cells participate in the cellular immunity against intracellular pathogens, while Th2 cells have an important role in the humoral immunity against extracellular parasites. An abnormal response of Th1 and Th2 cells and an imbalance between the subsets are characteristic of several diseases: Th1-specific reactions and cytokines have been detected in autoimmune diseases, while a Th2-specific response and cytokine profile are common in allergy and asthma. In this Ph.D. thesis, mass spectrometry-based proteomics was used to study the effects of the Th1- and Th2-promoting cytokines IL-12 and IL-4 on the proteome of Th lymphocytes. Characterization of the microsomal fraction proteome extracted from IL-12-treated lymphoblasts and IL-4-stimulated cord blood CD4+ cells resulted in the identification of cytokine-regulated proteins. Galectin-1 and CD7 were down-regulated in IL-12-treated cells, while IL-4 stimulation decreased the expression of STAT1, MxA, GIMAP1, and GIMAP4. Interestingly, the transcription of both GIMAP genes was up-regulated in Th1-polarized cells and down-regulated under Th2-promoting conditions.
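The pre-filtering step can be illustrated with a minimal sketch: a few simple statistics of each MS/MS spectrum are thresholded to discard poor spectra before the database search. The features and cut-offs below are illustrative assumptions; the thesis's actual classifier is not described in the abstract.

```python
# Minimal sketch of MS/MS spectrum quality pre-filtering (illustrative only).
# Features and thresholds are assumptions for demonstration, not the
# thesis's actual classifier or cut-offs.

def spectrum_features(peaks):
    """peaks: list of (m/z, intensity) tuples from one MS/MS spectrum."""
    intensities = sorted((i for _, i in peaks), reverse=True)
    tic = sum(intensities)                      # total ion current
    n_peaks = len(intensities)
    # Fraction of the signal carried by the 20 most intense peaks:
    top20_frac = sum(intensities[:20]) / tic if tic > 0 else 0.0
    return n_peaks, tic, top20_frac

def is_good_quality(peaks, min_peaks=15, min_tic=1e4, min_top20_frac=0.4):
    """Classify a spectrum as good quality before the database search."""
    n_peaks, tic, top20_frac = spectrum_features(peaks)
    return n_peaks >= min_peaks and tic >= min_tic and top20_frac >= min_top20_frac

# Example: a sparse, low-intensity spectrum is filtered out.
demo = [(300.2, 120.0), (401.3, 80.0), (502.4, 40.0)]
print(is_good_quality(demo))  # False
```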
Abstract:
Healthy nutrition is accepted as a cornerstone of public health strategies for reducing the risk of noncommunicable conditions such as obesity, cardiovascular disease, and related morbidities. However, many research studies continue to focus on a single factor, or at most a few factors, that may elicit a metabolic effect. These reductionist approaches have resulted in: (1) exaggerated claims for nutrition as a cure or prevention of disease; (2) the wide use of empirically based dietary regimens, as if one size fits all; and (3) frequent disappointment of consumers, patients, and healthcare providers about the real impact nutrition can make on medicine and health. Multiple factors, including environment, host and microbiome genetics, social context, the chemical form of a nutrient, its (bio)availability, and chemical and metabolic interactions among nutrients, interact to determine nutrient requirements and health outcomes. Advances in laboratory methodologies, especially in analytical and separation techniques, are making the chemical dissection of foods and their availability in physiological tissues possible in an unprecedented manner. These omics technologies have opened opportunities for extending knowledge of micronutrients and of their metabolic and endocrine roles. While these technologies are crucial, more holistic approaches to the analysis of physiology and environment, novel experimental designs, and more sophisticated computational methods are needed to advance our understanding of how nutrition influences the health of individuals.
Abstract:
A résumé of the evolution of quantum chemistry methodologies is presented. The pioneering contributions of John A. Pople and Walter Kohn, the 1998 Nobel Prize Laureates in Chemistry, to the development of quantum chemistry computational methods for studying the properties of molecules and their interactions are discussed.
Abstract:
The adsorption of H and S2- species on Pd(100) has been studied with ab initio density-functional calculations and electrochemical methods. A cluster of five Pd atoms with a frozen geometry described the surface. The calculations were performed with the GAUSSIAN94 program, using basis functions adapted to a pseudo-potential obtained with the Generator Coordinate Method. The electrochemical results were obtained by cyclic voltammetry with a Model 283 Potentiostat/Galvanostat (EG&G-PAR). The calculated chemisorption geometry has a Pd-H distance of 1.55 Å, and the potential energy surface was calculated using the Becke3P86//(GCM/DFT/SBK) methodology. Comparison between the experimental results and theoretical results at the MP2 level suggests absorption of S2- into the metallic cluster. The resulting Pd-(S2-) system was shown to be very stable under the employed experimental conditions. The paper shows how powerfully computational methods can aid the interpretation of experimental adsorption data.
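As a hedged illustration of how a potential energy surface scan yields a chemisorption geometry, the sketch below fits a Morse potential to single-point energies along the Pd-H coordinate and reads off the equilibrium distance. The energy values are made-up placeholders, not the paper's Becke3P86//(GCM/DFT/SBK) results.

```python
# Hedged sketch: locating an equilibrium Pd-H distance by fitting a Morse
# potential to cluster-model energies. The energies below are hypothetical
# placeholders, not data from the paper.
import numpy as np
from scipy.optimize import curve_fit

def morse(r, d_e, a, r_e, e_0):
    """Morse potential: E(r) = D_e * (1 - exp(-a (r - r_e)))^2 + E_0."""
    return d_e * (1.0 - np.exp(-a * (r - r_e))) ** 2 + e_0

# Hypothetical single-point energies (eV) along the Pd-H coordinate (angstrom):
r = np.array([1.2, 1.4, 1.5, 1.6, 1.8, 2.2, 3.0])
E = np.array([1.10, 0.15, 0.01, 0.05, 0.40, 1.30, 2.20])

popt, _ = curve_fit(morse, r, E, p0=[2.5, 1.5, 1.55, 0.0])
print(f"fitted equilibrium Pd-H distance: {popt[2]:.2f} angstrom")
```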
Abstract:
Computational methods for calculating the dynamical properties of fluids may treat the system as a continuum or as an assembly of molecules. Molecular dynamics (MD) simulation includes molecular resolution, whereas computational fluid dynamics (CFD) considers the fluid as a continuum. This work reviews the hybrid MD/CFD methods recently proposed in the literature. Theoretical foundations, basic approaches of the computational methods, and the dynamical properties typically calculated by MD and CFD are first presented, in order to appreciate the similarities and differences between the two methods. Then, methods for coupling MD and CFD, and applications of hybrid MD/CFD simulations, are presented.
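A minimal sketch of the domain-decomposition idea behind MD/CFD coupling follows: a particle region and a continuum (finite-difference) region exchange velocities through an overlap cell at every coupling step. The scheme and parameters are simplified assumptions for illustration, not a particular method from the review.

```python
# Minimal 1D sketch of a domain-decomposition MD/CFD coupling loop
# (illustrative, not a specific scheme from the review). A continuum region
# and a particle region exchange boundary velocities each coupling step.
import random

dt, nu, dz = 0.01, 1.0, 1.0
u_cfd = [0.0] * 10            # continuum velocity field
u_cfd[-1] = 1.0               # fixed top-wall velocity drives the flow
particles = [random.gauss(0.0, 0.1) for _ in range(200)]  # MD particle velocities

for step in range(1000):
    # MD half: crude thermostatted dynamics relaxing toward the CFD overlap cell
    u_overlap_cfd = u_cfd[1]
    particles = [v + dt * (u_overlap_cfd - v) + random.gauss(0.0, 0.01)
                 for v in particles]
    # Average MD velocity becomes the continuum's lower boundary condition
    u_cfd[0] = sum(particles) / len(particles)
    # CFD half: explicit diffusion update on the interior nodes
    u_new = u_cfd[:]
    for i in range(1, len(u_cfd) - 1):
        u_new[i] = u_cfd[i] + nu * dt / dz**2 * (u_cfd[i+1] - 2*u_cfd[i] + u_cfd[i-1])
    u_cfd = u_new

print([round(u, 2) for u in u_cfd])  # tends toward a linear Couette profile
```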
Abstract:
In a previous study, substances with nematicidal properties were detected in the bark of Cryptocarya aschersoniana. Continuing that study, the methanol extract from this plant underwent fractionation guided by in vitro assays with the plant-parasitic nematode Meloidogyne exigua. Two active compounds were isolated and identified by spectroscopic methods as (E)-6-styrylpyran-2-one and (R)-goniothalamin. The latter compound was also active against Meloidogyne incognita. In silico studies carried out with (R)-goniothalamin and the enzyme fumarate hydratase, which was extracted from the genome of Meloidogyne hapla and modeled using computational methods, suggested that this substance acts against nematodes by binding to a cavity close to the active site of the enzyme.
Abstract:
By alloying metals with other materials, one can modify a metal's characteristics or compose an alloy that has certain desired characteristics no pure metal has. The field is vast and complex, and the phenomena that govern the behaviour of alloys are numerous. Theory alone cannot penetrate such complexity, and the scope of experiments is also limited. This is why the relatively new ab initio computational methods have much to offer in this field. With these methods, one can extend the understanding given by theory, predict how systems might behave, and obtain information that cannot be seen in physical experiments. This thesis seeks to contribute to the collective knowledge of this field through two cases. The first part examines the oxidation of Ag/Cu, namely the adsorption dynamics and the oxygen-induced segregation of the surface. Our results demonstrate that the presence of Ag on the Cu(100) surface layer strongly inhibits dissociative adsorption. Our results also confirmed that surface reconstruction does happen, as experiments predicted. Our studies indicate that 0.25 ML of oxygen is enough for Ag to diffuse towards the bulk, under the copper oxide layer. The other part elucidates the complex interplay of various energy and entropy contributions to the phase stability of paramagnetic duplex steel alloys. We were able to produce a phase stability map from first principles, and it agrees with experiments rather well. Our results also show that entropy contributions play a very important role in defining the phase stability. This is, to the author's knowledge, the first ab initio study of this subject.
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is growing rapidly, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make originally incomplete data complete, and thus easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are examples of the outcome of multiple biological experiments, such as gene microarray studies. Such networks are typically very large and highly connected, so fast algorithms are needed for producing visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within a regular force-directed graph layout algorithm.
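As an illustration of the k-NN imputation that the thesis found adequate on most data sets, the sketch below fills the missing entries of a genes-by-samples matrix with the average of the k most similar complete genes. The distance measure and default k are common conventions assumed here, not necessarily those of the thesis.

```python
# Sketch of k-NN missing value imputation on a gene expression matrix
# (genes x samples). Distances are computed over the observed columns only;
# the missing entries are filled with the mean of the k nearest complete genes.
import numpy as np

def knn_impute(X, k=5):
    X = X.astype(float).copy()
    missing_rows = np.where(np.isnan(X).any(axis=1))[0]
    complete = X[~np.isnan(X).any(axis=1)]
    for i in missing_rows:
        obs = ~np.isnan(X[i])
        # Distance to complete genes over the co-observed columns
        d = np.sqrt(((complete[:, obs] - X[i, obs]) ** 2).sum(axis=1))
        nearest = complete[np.argsort(d)[:k]]
        X[i, ~obs] = nearest[:, ~obs].mean(axis=0)  # average the neighbours
    return X

X = np.array([[1.0, 2.0, 3.0],
              [1.1, np.nan, 2.9],
              [0.9, 2.1, 3.1],
              [1.0, 1.9, np.nan],
              [5.0, 5.0, 5.0]])
print(knn_impute(X, k=2))
```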
Abstract:
Large-scale genome projects have generated a rapidly increasing number of DNA sequences, so the development of computational methods to rapidly analyze these sequences is essential for progress in genomic research. Here we present an automatic annotation system for preliminary analysis of DNA sequences. The gene annotation tool (GATO) is a bioinformatics pipeline designed to facilitate routine functional annotation and easy access to annotated genes. It was designed in view of the frequent need of genomic researchers to access data pertaining to a common set of genes. In the GATO system, annotation is generated by querying some of the Web-accessible resources, and the information is stored in a local database, which keeps a record of all previous annotation results. GATO may be accessed from anywhere over the Internet, or it may be run locally if a large number of sequences are to be annotated. It is implemented in PHP and Perl and may be run on any suitable Web server. Installation and application of annotation systems usually require experience and are time consuming, but GATO is simple and practical, allowing anyone with basic skills in informatics to use it without special training. GATO can be downloaded at [http://mariwork.iq.usp.br/gato/]. The minimum free disk space required is 2 MB.
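The caching idea behind GATO can be sketched in a language-neutral way (the tool itself is implemented in PHP and Perl): results fetched from Web-accessible resources are stored in a local database so that repeated queries for the same gene are answered locally. The function fetch_remote_annotation below is a hypothetical placeholder, not a GATO API.

```python
# Illustrative sketch of the local annotation cache: query remote resources
# once, store the result, and serve repeat requests from the local database.
import sqlite3

def fetch_remote_annotation(seq_id):
    # Hypothetical placeholder for querying a Web-accessible resource.
    return f"functional annotation for {seq_id}"

def annotate(seq_id, db="annotations.sqlite"):
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE IF NOT EXISTS ann (seq_id TEXT PRIMARY KEY, result TEXT)")
    row = con.execute("SELECT result FROM ann WHERE seq_id = ?", (seq_id,)).fetchone()
    if row is None:                       # not annotated before: query and cache
        result = fetch_remote_annotation(seq_id)
        con.execute("INSERT INTO ann VALUES (?, ?)", (seq_id, result))
        con.commit()
    else:
        result = row[0]
    con.close()
    return result

print(annotate("ENSG00000139618"))
```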
Abstract:
This Master's thesis investigates the fatigue strength of welded duplex steel, grade EN 1.4462 (Outokumpu grade 2205). The research methodology follows both experimental and computational methods. The experimental methods comprise laboratory fatigue testing of the welded steel, post-weld treatment of the welds (HiFIT), and metallurgical examinations of the base material and the welds. The fatigue test results are compared with the structure-specific standards confirmed by the International Institute of Welding (IIW) and with results reported in the literature. The computational methods comprise comparative calculations using the effective notch stress (ENS) method, in which the effective notch stresses acting in the joints are determined by the finite element method (FEM). The results confirm that welding and post-weld treatment have a major effect on the service life of a structure. Most of the fatigue test results showed better fatigue resistance values than the structure-specific standards, but joint defects were found to weaken fatigue resistance. The post-weld treatments were found to improve the fatigue resistance of the joints, and the results were found to be usable in design.
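As a hedged illustration of the effective notch stress check, the sketch below compares an FE-derived notch stress range against an S-N curve; FAT 225 with slope m = 3 is the IIW reference for the ENS method on steel, while the stress value is a made-up example, not a result of the thesis.

```python
# Hedged sketch of an S-N life check in the spirit of the ENS method:
# an FE-derived effective notch stress range is compared with an IIW FAT curve.

def cycles_to_failure(delta_sigma, fat=225.0, m=3.0, n_ref=2e6):
    """S-N curve: N = N_ref * (FAT / delta_sigma)^m."""
    return n_ref * (fat / delta_sigma) ** m

delta_sigma_ens = 300.0  # effective notch stress range from FEM (MPa), example
print(f"predicted life: {cycles_to_failure(delta_sigma_ens):.3e} cycles")
```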
Abstract:
Wind power is a rapidly developing, low-emission form of energy production. In Finland, the official objective is to increase wind power capacity from the current 1 005 MW up to 3 500–4 000 MW by 2025. By the end of April 2015, the total capacity of all wind power projects being planned in Finland had surpassed 11 000 MW. As the number of projects in Finland is at a record high, an increasing amount of infrastructure is also being planned and constructed. Traditionally, these planning operations are conducted using manual and labor-intensive work methods that are prone to subjectivity. This study introduces a GIS-based methodology for determining optimal paths to support the planning of onshore wind park infrastructure alignment in Nordanå-Lövböle wind park, located on the island of Kemiönsaari in Southwest Finland. The presented methodology utilizes a least-cost path (LCP) algorithm to search for optimal paths within a high-resolution real-world terrain dataset derived from airborne lidar scannings. In addition, planning data is used to provide a realistic planning framework for the analysis. In order to produce realistic results, the physiographic and planning datasets are standardized and weighted according to qualitative suitability assessments, utilizing methods and practices offered by multi-criteria evaluation (MCE). The results are presented as scenarios corresponding to various planning objectives. Finally, the methodology is documented using tools of Business Process Management (BPM). The results show that the presented methodology can be effectively used to search for and identify extensive, 20 to 35 kilometers long networks of paths that correspond to certain optimization objectives in the study area. The utilization of high-resolution terrain data produces a more objective and more detailed path alignment plan. This study demonstrates that the presented methodology can be practically applied to support a wind power infrastructure alignment planning process. The six-phase structure of the methodology allows straightforward incorporation of different optimization objectives. The methodology responds well to combining quantitative and qualitative data. Additionally, the careful documentation presents an example of how the methodology can be evaluated and developed as a business process. This thesis also shows that more emphasis on the research of algorithm-based, more objective methods for the planning of infrastructure alignment is desirable, as technological development has only recently started to realize the potential of these computational methods.
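The core LCP operation can be illustrated with a toy sketch: a 4-connected Dijkstra search over a small raster cost surface. The implementation and cost values below are illustrative stand-ins for the GIS tools and standardized MCE weights used in the study.

```python
# Minimal sketch of a least-cost path (LCP) search over a raster cost surface.
# Accumulated cost is minimized with Dijkstra's algorithm over 4-neighbours.
import heapq

def least_cost_path(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev, pq = {}, [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                      # stale queue entry
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal                 # walk back from goal to start
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

surface = [[1, 1, 9], [9, 1, 9], [9, 1, 1]]  # toy weighted cost raster
print(least_cost_path(surface, (0, 0), (2, 2)))
```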
Abstract:
Understanding the relationship between genetic diseases and the genes associated with them is an important problem for human health. The vast amount of data created from the large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in one or several biological modules. Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Given a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all of them. All the other genes inside the procured PPI network are then considered candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes that interact strongly with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
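A minimal sketch of the genetic-algorithm idea follows: candidate-gene communities are evolved on a toy PPI network, with fitness measured by connectivity among community members and to the known disease genes. The fitness function, operators, and parameters are simplified assumptions, not the paper's exact formulation.

```python
# Illustrative GA evolving candidate-gene communities on a toy PPI network.
import random

def fitness(community, known, edges):
    """Count edges linking the community internally and to known disease genes."""
    members = community | known
    return sum(1 for u, v in edges if u in members and v in members)

def evolve(candidates, known, edges, size=3, pop=40, gens=100):
    population = [set(random.sample(candidates, size)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: fitness(c, known, edges), reverse=True)
        survivors = population[:pop // 2]            # elitist selection
        children = []
        for _ in range(pop - len(survivors)):
            a, b = random.sample(survivors, 2)
            child = set(random.sample(sorted(a | b), size))   # crossover
            if random.random() < 0.2:                         # mutation
                child.pop()
                child.add(random.choice(candidates))
            children.append(child)
        population = survivors + children
    return max(population, key=lambda c: fitness(c, known, edges))

# Toy PPI network: known disease genes g1, g2; candidates g3..g8.
edges = [("g1", "g3"), ("g2", "g3"), ("g3", "g4"), ("g4", "g5"),
         ("g1", "g5"), ("g6", "g7"), ("g7", "g8")]
print(evolve(["g3", "g4", "g5", "g6", "g7", "g8"], {"g1", "g2"}, edges))
```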
Abstract:
This thesis discusses the design and synthesis of polymers suitable for nonlinear optics. Most of the molecules studied showed good nonlinear optical activity. The second-order nonlinear optical activity of the polymers was measured experimentally by the Kurtz and Perry powder technique. The thesis comprises eight chapters. The theory of the NLO phenomenon and a review of the various nonlinear optical polymers are presented in Chapter 1. The review surveys NLO-active polymeric materials, with a general introduction covering the principles and origin of nonlinear optics, and gives emphasis to polymeric materials for nonlinear optics, including guest-host systems, side-chain polymers, main-chain polymers, crosslinked polymers, chiral polymers, etc. Chapter 2 discusses the stability of metal-incorporated tetrapyrrole molecules: porphyrin, chlorin, and bacteriochlorin. Chapter 3 provides the NLO properties of certain organic molecules obtained by computational tools. The chapter is divided into four parts; the first part describes the nonlinear optical properties of chromophore (D-π-A) and bichromophore (D-π-A-A-π-D) systems, separated by a methylene spacer, making use of DFT and semiempirical calculations. Chapter 4: a series of polyurethanes was prepared from cardanol, a renewable resource and a waste product of the cashew industry, based on bifunctional and multifunctional polymers previously designed using a quantum theoretical approach. Chapter 5: a series of chiral polyurethanes with main-chain bis-azo diol groups in the polymer backbone was designed, and the NLO activity was predicted by ZINDO/CV methods. In Chapter 7, polyurethanes were first designed by computational methods and their NLO properties predicted by the correction vector method. The designed bifunctional and multifunctional polyurethanes were synthesized by varying the chiral-achiral diol compositions.
Abstract:
Interfacing various subjects generates new fields of study and research that help advance human knowledge. One of the latest such fields is neurotechnology, an effective amalgamation of neuroscience, physics, biomedical engineering, and computational methods. Neurotechnology provides a platform for physicists, neurologists, and engineers to interact and to break methodology- and terminology-related barriers. Advancements in computational capability and the wider scope of applications of nonlinear dynamics and chaos in complex systems have enhanced the study of neurodynamics. However, there is a need for an effective dialogue among physicists, neurologists, and engineers. The application of computer-based technology in the field of medicine, through signal and image processing, the creation of clinical databases to help clinicians, etc., is widely acknowledged. Such synergic effects between widely separated disciplines may help enhance the effectiveness of existing diagnostic methods. One of the recent methods in this direction is the analysis of the electroencephalogram with methods from nonlinear dynamics. This thesis is an effort to understand the functional aspects of the human brain by studying the electroencephalogram. The algorithms and other related methods developed in the present work can be interfaced with a digital EEG machine to unfold the information hidden in the signal. Ultimately this can be used as a diagnostic tool.
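As one example of the nonlinear-dynamics measures commonly applied to EEG signals, the sketch below computes the sample entropy of a time series. The parameter choices (m = 2, r = 0.2 times the standard deviation) are conventional defaults assumed here; the abstract does not specify the thesis's exact algorithms.

```python
# Illustrative sketch: sample entropy, a nonlinear regularity measure often
# applied to EEG. Lower values indicate a more regular (predictable) signal.
import math
import random

def sample_entropy(x, m=2, r_factor=0.2):
    n = len(x)
    mu = sum(x) / n
    std = (sum((v - mu) ** 2 for v in x) / n) ** 0.5
    r = r_factor * std    # tolerance for template matching

    def count_matches(length):
        # Same number of templates for lengths m and m + 1 (standard SampEn)
        templates = [x[i:i + length] for i in range(n - m)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r
        )

    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# Example: a lightly noisy sine is fairly regular, giving low sample entropy.
signal = [math.sin(0.2 * t) + 0.05 * random.gauss(0, 1) for t in range(300)]
print(round(sample_entropy(signal), 3))
```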
Abstract:
Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNAs (siRNAs). This gene silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, especially AIDS, neurodegenerative diseases, high cholesterol, and cancer, with studies in mice raising the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand the gene silencing mediated by exogenous siRNA. The design of efficient exogenous siRNA sequences is challenging because of the many issues related to siRNA. When designing efficient siRNA, target mRNAs must be selected such that the corresponding siRNAs are likely to be efficient against the target and unlikely to accidentally silence other transcripts due to sequence similarity. So, before performing gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficiency against the particular target. Hence, designing exogenous siRNA with good knock-down efficiency and target specificity is an area of concern to be addressed. Some methods that consider both the inhibition efficiency and the off-target possibility of an siRNA against a gene have already been developed, but only a few of them achieve good inhibition efficiency, specificity, and sensitivity. The main focus of this thesis is to develop computational methods to optimize the efficiency of siRNA in terms of inhibition capacity and off-target possibility against target mRNAs with improved efficacy, which may be useful in the areas of gene silencing and drug design for tumor development. This study aims to investigate the currently available siRNA prediction approaches and to devise a better computational approach to tackle the problem of siRNA efficacy by inhibition capacity and off-target possibility. The strengths and limitations of the available approaches are investigated and taken into consideration in making an improved solution. The approaches proposed in this study thus extend some of the best-scoring previous state-of-the-art techniques by incorporating machine learning and statistical approaches and thermodynamic features, such as whole stacking energy, to improve the prediction accuracy, inhibition efficiency, sensitivity, and specificity. Here, we propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. In the SVM model, the classification property is used to classify whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, is used for optimizing the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency to degrade target genes with improved sensitivity and specificity, and identifies the off-target knockdown possibility of an siRNA against non-target genes. The models are trained and tested against a large data set of siRNA sequences. The validations are conducted using the Pearson Correlation Coefficient, the Matthews Correlation Coefficient, Receiver Operating Characteristic analysis, prediction accuracy, sensitivity, and specificity. It is found that the OpsiD approach is capable of predicting the inhibition capacity of siRNA against a target mRNA with improved results over the state-of-the-art techniques. We are also able to understand the influence of whole stacking energy on the efficiency of siRNA.
The model is further improved by including the ability to identify the off-target possibility of a predicted siRNA on non-target genes. Thus the proposed model, OpsiD, can predict optimized siRNA by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity, and sensitivity. Since we have taken efforts to optimize siRNA efficacy in terms of inhibition efficiency and off-target possibility, we hope that the risk of off-target effects during gene silencing in various bioinformatics applications can be overcome to a great extent. These findings may provide new insights into cancer diagnosis, prognosis, and therapy by gene silencing. The approach may prove useful for designing exogenous siRNA for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
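A minimal sketch of the SVM classification component follows: siRNA sequences are mapped to simple sequence-derived features and classified as efficient or inefficient. The toy features (GC content, 5' A/U, a crude proxy for stacking energy) and toy training data are illustrative assumptions, not the thesis's feature set or corpus.

```python
# Minimal sketch of the SVM idea: classify siRNAs as efficient (1) or
# inefficient (0) from simple sequence-derived features. Features and
# training data are toy assumptions for illustration only.
from sklearn.svm import SVC

def features(sirna):
    gc = sum(b in "GC" for b in sirna) / len(sirna)
    starts_au = 1.0 if sirna[0] in "AU" else 0.0   # 5' A/U is thought to favour efficiency
    # Crude proxy for whole stacking energy: density of strong G/C dinucleotides
    stack = sum(sirna[i] in "GC" and sirna[i + 1] in "GC"
                for i in range(len(sirna) - 1)) / (len(sirna) - 1)
    return [gc, starts_au, stack]

train = [("UUGCACGAUCGAUCGUAGCUU", 1), ("GCGCGGCGCCGGGCGCCGGCC", 0),
         ("AUGCAUACGAUUAGCAUCGAU", 1), ("GGCGGCCCGGGCCGCGGGCGG", 0)]
X = [features(s) for s, _ in train]
y = [label for _, label in train]

model = SVC(kernel="rbf").fit(X, y)
print(model.predict([features("AUCGAUAGCUAGCUAGCUAGC")]))  # 1 = efficient (toy)
```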