960 results for Computational studies
Abstract:
We present a novel, web-accessible scientific workflow system which makes large-scale comparative studies accessible without programming or excessive configuration requirements. GPFlow allows a workflow defined on single input values to be automatically lifted to operate over collections of input values, and supports the formation and processing of collections of values without the need for explicit iteration constructs. We introduce a new model for collection processing based on key aggregation and slicing which guarantees processing integrity and facilitates automatic association of inputs, allowing scientific users to manage the combinatorial explosion of data values inherent in large-scale comparative studies. The approach is demonstrated using a core task from comparative genomics and builds upon our previous work in supporting combined interactive and batch operation through a lightweight web-based user interface.
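As a rough, hypothetical illustration of the lifting idea described above (this is not GPFlow's actual code; the function and key names are invented for the example), a single-value operation can be mapped over a keyed collection so that each output stays associated with the key of the input it came from:

    # Hypothetical sketch: lift an operation on one value to an operation on a
    # keyed collection, preserving the association between inputs and outputs.
    def lift(op):
        def lifted(collection):
            return {key: op(value) for key, value in collection.items()}
        return lifted

    def reverse_complement(seq):
        # A typical single-value comparative-genomics operation.
        table = str.maketrans("ACGT", "TGCA")
        return seq.translate(table)[::-1]

    sequences = {"geneA": "ATGCC", "geneB": "GGATT"}   # keyed input collection
    results = lift(reverse_complement)(sequences)
    # results == {"geneA": "GGCAT", "geneB": "AATCC"}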
Abstract:
Researchers in the social sciences and humanities have been interested in blogs, online diaries, and online journals for a decade now. Even though the growth rate of the blogosphere has stagnated since the heyday of blogging in the 2000s, blogs remain one of the most significant genres of internet-based communication. Indeed, after the mass migration to Facebook, Twitter, and other more recently emerged communication tools, what remains is a somewhat smaller but all the more firmly established blogosphere of committed and dedicated participants. Blogs are now accepted as part of institutional, personal, and group communication strategies. In style and content they sit between the more static information on conventional websites and the constantly updated Facebook and Twitter news feeds. Blogs allow their authors (and their commenters) to think through particular topics at a length of a few hundred to a few thousand words, to go into detail in shorter posts, and, where appropriate, to publish more thoroughly developed texts elsewhere. They are also a very flexible medium: images, audio, video, and other materials can be embedded effortlessly, as can, of course, the fundamental instrument of blogging: hyperlinks.
Abstract:
Currently, finite element analyses are usually done by means of commercial software tools. Accuracy of analysis and computational time are two important factors in the efficiency of these tools. This paper studies the parameters that affect the computational time and accuracy of finite element analyses performed with ANSYS, and provides guidelines for users of this software whenever they use it to study the deformation of orthopedic bone plates or similar cases. It is not a fundamental scientific study and only shares the authors' findings about structural analysis by means of ANSYS Workbench. It gives readers an idea of how to improve the performance of the software and avoid its pitfalls. The solutions provided in this paper are not the only possible solutions to the problems, and in similar cases there are other solutions not given here. The parameters of solution method, material model, geometric model, mesh configuration, number of analysis steps, program-controlled parameters and computer settings are discussed thoroughly in this paper.
Abstract:
Molecular biology is a scientific discipline which has changed fundamentally in character over the past decade to rely on large-scale datasets, both public and locally generated, and their computational analysis and annotation. Undergraduate education of biologists must increasingly couple this domain context with a data-driven computational scientific method. Yet modern programming and scripting languages and rich computational environments such as R and MATLAB present significant barriers to those with limited exposure to computer science, and may require substantial tutorial assistance over an extended period if progress is to be made. In this paper we report our experience of undergraduate bioinformatics education using the familiar, ubiquitous spreadsheet environment of Microsoft Excel. We describe a configurable extension called QUT.Bio.Excel, a custom ribbon supporting a rich set of data sources, external tools and interactive processing within the spreadsheet, and a range of problems that demonstrate its utility and success in addressing the needs of students over their studies.
Abstract:
In this chapter, we draw out the relevant themes from a range of critical scholarship within the small body of digital media and software studies work that has focused on the politics of Twitter data and the sociotechnical means by which access is regulated. We highlight in particular the contested relationships between social media research (in both academic and non-academic contexts) and the data wholesale, retail, and analytics industries that feed on it. In the second major section of the chapter we discuss in detail the pragmatic edge of these politics in terms of what kinds of scientific research are and are not possible in the current political economy of Twitter data access. Finally, at the end of the chapter we return to the much broader implications of these issues for the politics of knowledge, demonstrating how the apparently microscopic level at which the Twitter API mediates access to Twitter data actually inscribes and influences the macro level of the global political economy of science itself, by re-inscribing institutional and traditional disciplinary privilege. We conclude with some speculations about future developments in data rights and data philanthropy that may at least mitigate some of these negative impacts.
Abstract:
Provides an accessible foundation to Bayesian analysis using real-world models. This book aims to present an introduction to Bayesian modelling and computation by considering real case studies drawn from diverse fields spanning ecology, health, genetics and finance. Each chapter comprises a description of the problem, the corresponding model, the computational method, results and inferences, as well as the issues that arise in the implementation of these approaches. Case Studies in Bayesian Statistical Modelling and Analysis:
• Illustrates how to do Bayesian analysis in a clear and concise manner using real-world problems.
• Each chapter focuses on a real-world problem and describes the way in which the problem may be analysed using Bayesian methods.
• Features approaches that can be used in a wide area of application, such as health, the environment, genetics, information science, medicine, biology, industry and remote sensing.
Case Studies in Bayesian Statistical Modelling and Analysis is aimed at statisticians, researchers and practitioners who have some expertise in statistical modelling and analysis, and some understanding of the basics of Bayesian statistics, but little experience in its application. Graduate students of statistics and biostatistics will also find this book beneficial.
Abstract:
The photocatalytic ability of cubic Bi1.5ZnNb1.5O7 (BZN) pyrochlore for the decolorization of an acid orange 7 (AO7) azo dye in aqueous solution under ultraviolet (UV) irradiation has been investigated for the first time. BZN catalyst powders prepared using low-temperature sol-gel and higher-temperature solid-state methods have been evaluated and their reaction rates have been compared. The experimental band gap energy has been estimated from the optical absorption edge and has been used as a reference for theoretical calculations. The electronic band structure of BZN has been investigated using first-principles density functional theory (DFT) calculations for random, completely and partially ordered solid solutions of Zn cations in both the A and B sites of the pyrochlore structure. The nature of the orbitals in the valence band (VB) and the conduction band (CB) has been identified and the theoretical band gap energy has been discussed in terms of the DFT model approximations.
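For context, an experimental band gap of this kind is usually extracted from the absorption edge with a Tauc-type analysis; the abstract does not state the exact procedure used, so the relation below is only the standard textbook form, not necessarily the authors' method:

\[
(\alpha h\nu)^{1/n} = A\,(h\nu - E_g),
\]

where \(\alpha\) is the absorption coefficient, \(h\nu\) the photon energy, \(E_g\) the optical band gap, \(A\) a constant, and \(n = 1/2\) or \(n = 2\) for direct and indirect allowed transitions, respectively.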
Abstract:
Bi1.5ZnTa1.5O7 (BZT) has been synthesized using an alkoxide-based sol-gel reaction route. The evolution of the phases produced from the alkoxide precursors and their properties have been characterized as a function of temperature using a combination of thermogravimetric analysis (TGA) coupled with mass spectrometry (MS), infrared emission spectrometry (IES), X-ray diffraction (XRD), ultraviolet and visible (UV-Vis) spectroscopy, Raman spectroscopy, and N2 adsorption/desorption isotherms. The lowest sintering temperature (600 °C) needed to obtain phase-pure BZT powders with a high surface area (14.5 m2/g) has been determined from the thermal decomposition and phase analyses. The photocatalytic activity of the BZT powders has been tested for the decolorization of an organic azo dye, and the powders were found to be photoactive under UV irradiation. The electronic band structure of BZT has been investigated using density functional theory (DFT) calculations to determine the band gap energy (3.12 eV) and to compare it with the experimental band gap (3.02 eV at 800 °C) from optical absorption measurements. An excellent match is obtained under the assumption of Zn cation substitutions at specifically ordered sites in the BZT structure.
Abstract:
This paper presents experimental and computational results for an oxy-fuel burner operating in classical flame and flameless modes at a heat release rate of 26 kW/m3. The uniqueness of the burner arises from a slight asymmetric injection of oxygen at near-sonic velocities. Measurements of temperature, species, total heat flux, radiative heat flux and NOx emission were carried out inside the furnace, and the flow field was computationally analyzed. The flame studies were carried out for coaxial flow of oxygen and fuel jets with similar inlet velocities. This configuration results in slow mixing between fuel and oxygen, so the flame develops at a distance away from the burner and is bright/white in colour. In the flameless mode, a slight asymmetric injection of the high-velocity oxygen jet leads to a large asymmetric recirculation pattern with a recirculation ratio of 25, and the resulting flame is weakly bluish in colour with little soot and acetylene formation. The classical flame, in comparison, is characterised by soot and acetylene formation, higher NOx and noise generation. The distribution of temperature and heat flux in the furnace is more uniform in flameless mode than in flame mode.
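The recirculation ratio quoted above is not defined in the abstract; in the flameless (MILD) combustion literature it is commonly taken as the ratio of internally recirculated exhaust gas to the incoming fuel and oxidant flows, so a value of 25 would correspond, under that assumed definition, to

\[
K_v = \frac{\dot{m}_{\mathrm{rec}}}{\dot{m}_{\mathrm{fuel}} + \dot{m}_{\mathrm{oxidant}}} = 25,
\]

where \(\dot{m}_{\mathrm{rec}}\) is the mass flow rate of recirculated flue gas entrained into the jets.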
Abstract:
The molecular-level structure of mixtures of water and alcohols is very complicated and has been under intense research in the recent past. Both experimental and computational methods have been used in these studies. One method for studying the intra- and intermolecular bindings in the mixtures is the use of so-called difference Compton profiles, which are a way to obtain information about changes in the electron wave functions. In the process of Compton scattering a photon scatters inelastically from an electron. The Compton profile obtained from the electron wave functions is directly proportional to the probability of a photon scattering at a given energy into a given solid angle. In this work we develop a method to compute Compton profiles numerically for mixtures of liquids. In order to obtain the electronic wave functions necessary to calculate the Compton profiles we need some statistical information about atomic coordinates. Acquiring this using ab initio molecular dynamics is beyond our computational capabilities, and therefore we use classical molecular dynamics to model the movement of atoms in the mixture. We discuss the validity of the chosen method in view of the results obtained from the simulations. There are some difficulties in using classical molecular dynamics for the quantum mechanical calculations, but these can possibly be overcome by parameter tuning. According to the calculations, clear differences can be seen in the Compton profiles of different mixtures. This prediction needs to be tested in experiments in order to find out whether the approximations made are valid.
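For reference, within the commonly used impulse approximation (the abstract does not state the formalism, so this is only the standard definition rather than the authors' own derivation), the Compton profile is the electron momentum density integrated over the momentum components perpendicular to the scattering vector, and the difference profile compares the mixture with an ideal, non-interacting combination of its components:

\[
J(p_z) = \iint n(\mathbf{p})\, dp_x\, dp_y,
\qquad
\Delta J(p_z) = J_{\mathrm{mixture}}(p_z) - \sum_i x_i\, J_i(p_z),
\]

where \(n(\mathbf{p})\) is the electron momentum density obtained from the wave functions, \(p_z\) the momentum component along the scattering vector, and \(x_i\) the fractions of the pure components.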
Abstract:
In the present work the methods of relativistic quantum chemistry have been applied to a number of small systems containing heavy elements, for which relativistic effects are important. First, a thorough introduction to the methods used is presented. This includes some of the general methods of computational chemistry and a special section dealing with how to include the effects of relativity in quantum chemical calculations. Second, after this introduction the results obtained are presented. Investigations on high-valent mercury compounds are presented and new ways to synthesise such compounds are proposed. The methods described were applied to certain systems containing short Pt-Tl contacts, and it was possible to explain the interesting bonding situation in these compounds. One of the most common actinide compounds, uranium hexafluoride, was investigated and a new picture of the bonding was presented. Furthermore, the rareness of uranium-cyanide compounds was discussed. In a foray into the chemistry of gold, well known for its strong relativistic effects, investigations on different gold systems were performed. Analogies between Au$^+$ and platinum on the one hand and oxygen on the other were found. New systems with multiple bonds to gold were proposed to experimentalists; one of the proposed systems was spectroscopically observed shortly afterwards. A very interesting molecule, which was theoretically predicted a few years ago, is WAu$_{12}$. Some of its properties were calculated and the bonding situation was discussed. In a further study on gold compounds it was possible to explain the substitution pattern in bis[phosphane-gold(I)] thiocyanate complexes. This is of some help to experimentalists, as the systems could not be crystallised and the structure was therefore unknown. Finally, computations on one of the heaviest elements in the periodic table were performed. Calculations on compounds containing element 110, darmstadtium, showed that it behaves similarly to its lighter homologue platinum. The extreme importance of relativistic effects for these systems was also shown.
Abstract:
We explore here the acceleration of convergence of iterative methods for the solution of a class of quasilinear and linear algebraic equations. The specific systems are the finite difference form of the Navier-Stokes equations and the energy equation for recirculating flows. The acceleration procedures considered are: the successive over-relaxation scheme; several implicit methods; and a second-order procedure. A new implicit method, the alternating direction line iterative method, is proposed in this paper. The method combines the advantages of the line successive over-relaxation and alternating direction implicit methods. The various methods are tested for their computational economy and accuracy on a typical recirculating flow situation. The numerical experiments show that the alternating direction line iterative method is the most economical method of solving the Navier-Stokes equations for all Reynolds numbers in the laminar regime. The usual ADI method is shown to be less attractive for large Reynolds numbers because of the loss of diagonal dominance; this loss can, however, be restored by a suitable choice of the relaxation parameter, but at the cost of accuracy. The accuracy of the new procedure is comparable to that of the well-tested successive over-relaxation method and to the available results in the literature. The second-order procedure turns out to be the most efficient method for the solution of the linear energy equation.
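As a minimal sketch of the simplest baseline scheme mentioned above (point successive over-relaxation applied to a model 2-D Poisson problem; the paper's alternating direction line iterative method is not reproduced here, and the grid, source term and relaxation factor below are illustrative), an SOR sweep on a uniform grid can be written as:

    import numpy as np

    def sor_poisson(f, h, omega=1.5, tol=1e-6, max_iter=10000):
        """Point SOR for -laplacian(u) = f with u = 0 on the boundary.

        f     : 2-D array of source values on a uniform (n x n) grid
        h     : grid spacing
        omega : relaxation factor, 1 < omega < 2 for over-relaxation
        """
        u = np.zeros_like(f, dtype=float)
        n = f.shape[0]
        for _ in range(max_iter):
            max_change = 0.0
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    # Gauss-Seidel value from the 5-point stencil ...
                    gs = 0.25 * (u[i + 1, j] + u[i - 1, j]
                                 + u[i, j + 1] + u[i, j - 1] + h * h * f[i, j])
                    # ... relaxed towards by the factor omega.
                    change = omega * (gs - u[i, j])
                    u[i, j] += change
                    max_change = max(max_change, abs(change))
            if max_change < tol:
                break
        return u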
Abstract:
Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction and the study of the evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise in terms of complexity and feasibility between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to closely correspond to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it for real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in a web-based software, ReMatch, intended for the reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
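As a hedged sketch of the connectivity idea behind gapless modeling (the thesis' actual algorithms and data structures are not reproduced; the reaction and metabolite names below are purely illustrative), one can iteratively mark metabolites as producible from a set of seed nutrients and flag reactions whose substrates can never all be produced:

    # Illustrative only: mark metabolites reachable from seed nutrients and
    # report reactions that stay blocked ("gaps"); not the thesis' algorithm.
    def find_blocked_reactions(reactions, seeds):
        # reactions: dict name -> (substrates, products); seeds: set of metabolites
        producible = set(seeds)
        changed = True
        while changed:
            changed = False
            for substrates, products in reactions.values():
                if set(substrates) <= producible and not set(products) <= producible:
                    producible |= set(products)
                    changed = True
        blocked = [name for name, (substrates, _) in reactions.items()
                   if not set(substrates) <= producible]
        return producible, blocked

    reactions = {
        "r1": (["glucose"], ["g6p"]),
        "r2": (["g6p"], ["pyruvate"]),
        "r3": (["missing_cofactor", "pyruvate"], ["acetyl_coa"]),
    }
    producible, blocked = find_blocked_reactions(reactions, {"glucose"})
    # blocked == ["r3"] because "missing_cofactor" is never produced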
Abstract:
This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population. Thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has comparable performance to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the fact that the sequences are related or just due to chance. Similarity of sequences is measured by their best local alignment score, and from that a p-value is computed. This p-value is the probability of picking two sequences from the null model that have as good or better a best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
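For context, the classical approximation that such significance frameworks are usually compared against (the standard Karlin-Altschul / Gumbel form for ungapped local alignment, not the upper bound developed in the thesis) expresses the p-value of a best local alignment score as

\[
P(S \ge x) \approx 1 - \exp\!\left(-K m n\, e^{-\lambda x}\right),
\]

where \(m\) and \(n\) are the sequence lengths and \(K\) and \(\lambda\) are constants determined by the scoring scheme and the background letter frequencies.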