991 results for Computational sciences
Abstract:
Bottleneck operations in a complex coal rail system cost mining companies millions of dollars. To address this issue, this paper investigates a real-world coal rail system and aims to optimise coal railing operations under constraints of limited resources (e.g., a limited number of locomotives and wagons). In the literature, most studies consider the train scheduling problem on a single-track railway network to be strongly NP-hard and thus develop metaheuristics as the main solution methods. In this paper, a new mathematical programming model is formulated and implemented in an optimization programming language based on a constraint programming (CP) approach. A new depth-first-search technique is developed and embedded inside the CP model to obtain an optimised coal railing timetable efficiently. Computational experiments demonstrate that high-quality solutions are obtainable in industry-scale applications. To support decision making, sensitivity analysis is conducted across different scenarios and specific criteria.
Keywords: Train scheduling · Rail transportation · Coal mining · Constraint programming
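As a rough illustration of the kind of depth-first search the abstract describes, the sketch below assigns departure times to a handful of hypothetical trains on a shared track under a locomotive limit, pruning branches that violate the resource constraint or cannot beat the incumbent. All names, durations, and limits are invented for illustration; the paper's actual OPL/CP model is far richer.

```python
# Toy depth-first search for a single-track coal train timetable.
# Hypothetical data: 3 identical trips, at most 2 locomotives en route at once.
TRAINS = ["T1", "T2", "T3"]   # hypothetical coal trains
TRAVEL = 4                    # trip duration in time units
HORIZON = 12                  # latest allowed departure slot
MAX_LOCOS = 2                 # locomotive availability constraint

best = {"makespan": float("inf"), "plan": None}

def feasible(plan):
    # Resource check: no more than MAX_LOCOS trains en route at any time.
    for t in range(HORIZON + TRAVEL):
        if sum(1 for d in plan.values() if d <= t < d + TRAVEL) > MAX_LOCOS:
            return False
    return True

def dfs(plan):
    if len(plan) == len(TRAINS):                        # complete timetable
        makespan = max(d + TRAVEL for d in plan.values())
        if makespan < best["makespan"]:
            best.update(makespan=makespan, plan=dict(plan))
        return
    train = TRAINS[len(plan)]
    for dep in range(HORIZON + 1):                      # branch on departure time
        plan[train] = dep
        # Prune infeasible branches and those that cannot beat the incumbent.
        if feasible(plan) and dep + TRAVEL < best["makespan"]:
            dfs(plan)
        del plan[train]

dfs({})
print(best)   # best makespan here is 8: two trains depart at 0, one at 4
```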
Abstract:
A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. Flexible and efficient data analysis on such a typically huge collection becomes feasible using suffix trees. However, a suffix tree occupies O(N log N) bits, which very soon inhibits in-memory analyses. Recent advances in full-text self-indexing reduce the space of the suffix tree to O(N log σ) bits, where σ is the alphabet size. In practice, the space reduction is more than 10-fold, for example for the suffix tree of the human genome. However, this reduction factor remains constant when more sequences are added to the collection. We develop a new family of self-indexes suited for the repetitive sequence collection setting. Their expected space requirement depends only on the length n of the base sequence and the number s of variations in its repeated copies. That is, the space reduction factor is no longer constant, but depends on N / n. We believe the structures developed in this work will provide a fundamental basis for storage and retrieval of individual genomes as they become available through rapid progress in sequencing technologies.
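The storage idea can be illustrated with a toy: keep one base sequence plus a small edit list per copy, so the stored size tracks n and s rather than N. The Python sketch below is purely conceptual; the paper's self-indexes additionally support full-text search functionality, which this toy does not.

```python
# Toy illustration: one base sequence of length n plus small edit lists for
# each repeated copy, so space grows with n and the number of edits s,
# not with total collection length N. All data below is hypothetical.

BASE = "ACGTACGTACGT"                     # hypothetical base sequence (length n)

# Each copy is described by a list of (position, new_character) substitutions.
EDITS = [
    [],                                   # copy 0: identical to base
    [(3, "G")],                           # copy 1: one substitution
    [(0, "T"), (7, "C")],                 # copy 2: two substitutions
]

def materialize(copy_id):
    """Reconstruct one copy of the collection on demand."""
    seq = list(BASE)
    for pos, ch in EDITS[copy_id]:
        seq[pos] = ch
    return "".join(seq)

N = len(BASE) * len(EDITS)                        # total collection length
stored = len(BASE) + sum(len(e) for e in EDITS)   # characters + edits kept
print(materialize(2))                             # "TCGTACGCACGT"
print(f"collection length N={N}, stored units={stored}")
```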
Abstract:
Most of the world’s languages lack electronic word form dictionaries. The linguists who gather such dictionaries could be helped with an efficient morphology workbench that adapts to different environments and uses. A widely usable workbench could be characterized, ideally, as generally applicable, extensible, and freely available (GEA). It seems that such a solution could be implemented in the framework of finite-state methods. The current work defines the GEA desiderata and starts a series of articles concerning these desiderata in finite-state morphology. Subsequent parts will review the state of the art and present an action plan toward creating a widely usable finite-state morphology workbench.
Abstract:
According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. During the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim for the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science. Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.
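To make the vector-of-substates shape concrete, here is a minimal Python sketch of a combinatorial-state-automaton-style transition: the global state is a tuple of substates, and each component of the next state is computed from the current state vector and the input vector. The particular update rule is hypothetical and is not the thesis's formal definition.

```python
def csa_step(state, inputs):
    """One CSA-style transition: every substate's next value depends on the
    current state vector and the input vector (the rule here is hypothetical)."""
    n = len(state)
    # Example componentwise rule: XOR each substate with its right neighbour
    # and the corresponding input bit.
    return tuple(
        state[i] ^ state[(i + 1) % n] ^ inputs[i % len(inputs)]
        for i in range(n)
    )

state = (0, 1, 1, 0)                      # vector of binary substates
for inp in [(1, 0), (0, 1), (1, 1)]:      # input vectors over three steps
    state = csa_step(state, inp)
    print(state)
```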
Abstract:
The matched filter method for detecting a periodic structure on a surface hidden behind randomness is known to detect up to (r_0/Λ) ≥ 0.11, where r_0 is the coherence length of light on scattering from the rough part and Λ is the wavelength of the periodic part of the surface, the above limit being much lower than what is allowed by conventional detection methods. The primary goal of this technique is the detection and characterization of the periodic structure hidden behind randomness without the use of any complicated experimental or computational procedures. This paper examines this detection procedure for various values of the amplitude a of the periodic part, beginning from a = 0 to small finite values of a. We thus address the importance of the following quantities in determining the detectability of the intensity peaks: (a/λ), which scales the amplitude of the periodic part with the wavelength of light, and (r_0/Λ).
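A loose numerical analogue of the setting: a weak sinusoid of amplitude a and period Λ buried in correlated roughness reveals itself as a spectral peak at spatial frequency 1/Λ. The numpy sketch below is not the paper's optical matched-filter procedure; all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
L, dx = 2048, 0.1
x = np.arange(L) * dx

a, Lam = 0.2, 4.0        # periodic part: a*sin(2*pi*x/Lam) (values invented)
r0 = 1.0                 # roughness correlation length (invented)

# Correlated roughness: white noise smoothed by a Gaussian kernel of width r0.
kernel = np.exp(-0.5 * (np.arange(-50, 51) * dx / r0) ** 2)
rough = np.convolve(rng.standard_normal(L), kernel / kernel.sum(), mode="same")

surface = a * np.sin(2 * np.pi * x / Lam) + rough

# The periodic component shows up as an intensity peak at frequency 1/Lam.
spec = np.abs(np.fft.rfft(surface - surface.mean())) ** 2
freqs = np.fft.rfftfreq(L, d=dx)
peak = freqs[np.argmax(spec[1:]) + 1]    # skip the zero-frequency bin
print(f"detected frequency {peak:.3f}, expected {1 / Lam:.3f}")
```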
Abstract:
Mass spectrometry (MS) has become a standard tool for identifying metabolites in biological tissues, and metabolomics is slowly being acknowledged as a legitimate research discipline for characterizing biological conditions. The computational analysis of metabolomics, however, lags behind the rapid advances in analytical aspects for two reasons. The first is the lack of a standardized data repository for mass spectra: each research institution is flooded with gigabytes of mass-spectral data from its own analytical groups and cannot host a world-class repository for mass spectra. The second reason is the lack of informatics experts who are fully experienced with spectral analyses. These two barriers must be overcome to establish a publicly free data server for MS analysis in metabolomics, as GenBank does in genomics and UniProt in proteomics. The workshop brought together bioinformaticians working on mass-spectral analyses in Finland and Japan with the goal of establishing a consortium to freely exchange and publicize mass spectra of metabolites measured on various platforms, computational tools to analyze spectra, and spectral knowledge computationally predicted from standardized data. This book contains the abstracts of the presentations given at the workshop. The programme of the workshop consisted of oral presentations from Japan and Finland, invited lectures from Steffen Neumann (Leibniz Institute of Plant Biochemistry), Matej Oresic (VTT), Merja Penttila (VTT) and Nicola Zamboni (ETH Zurich), as well as free-form discussion among the participants. The event was funded by the Academy of Finland (grants 139203 and 118653), the Japan Society for the Promotion of Science (JSPS Japan-Finland Bilateral Seminar Program 2010) and the Department of Computer Science, University of Helsinki. We would like to thank all the people contributing to the technical programme and the sponsors for making the workshop possible. Helsinki, October 2010. Masanori Arita, Markus Heinonen and Juho Rousu
Abstract:
We have presented an overview of the FSIG approach and related FSIG grammars to issues of very low complexity and parsing strategy. We arrived at an optimistic conclusion: most FSIG grammars could likely be decomposed in a reasonable way and then processed efficiently.
Abstract:
The swelling pressure of soil depends upon various soil parameters such as mineralogy, clay content, Atterberg's limits, dry density, moisture content, initial degree of saturation, etc., along with structural and environmental factors. It is very difficult to model and analyze swelling pressure effectively while taking all the above aspects into consideration. Various statistical/empirical methods have been attempted to predict the swelling pressure based on index properties of soil. In this paper, the computational intelligence techniques of artificial neural networks and support vector machines have been used to develop models, based on the set of available experimental results, that predict swelling pressure from the inputs: natural moisture content, dry density, liquid limit, plasticity index, and clay fraction. The generalization of the models to data beyond the training set, which is required for the successful application of a model, is discussed. A detailed study of the relative performance of the computational intelligence techniques has been carried out based on different statistical performance criteria.
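For a sense of the two model families compared, the following scikit-learn sketch fits a small neural network and a support vector regressor to a synthetic stand-in dataset with the abstract's five inputs. The data, architectures, and hyperparameters are placeholders, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in data: columns are moisture content (%), dry density
# (g/cm^3), liquid limit (%), plasticity index (%), clay fraction (%).
rng = np.random.default_rng(1)
X = rng.uniform([5, 1.2, 20, 5, 10], [40, 2.0, 90, 50, 70], size=(200, 5))
# Invented target: swelling pressure rising with plasticity and clay content.
y = 0.05 * X[:, 3] * X[:, 4] / X[:, 1] + rng.normal(0, 5, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(16,),
                                      max_iter=5000, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVR(C=100.0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Generalization is judged on held-out data, as the abstract emphasizes.
    print(name, "test R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```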
Abstract:
In recent times, CFD tools have become increasingly useful in engineering design studies, especially in the area of aerospace vehicles. This is largely due to the advent of high-speed computing platforms, in addition to the development of new, efficient algorithms. Algorithms based on kinetic schemes have been shown to be very robust, and meshless methods offer certain advantages over other methods. Preliminary investigations of blood flow visualization through an artery using a CFD tool have shown encouraging results, which need further verification and validation.
Abstract:
The static response of thin, wrinkled membranes is studied using both a tension field approximation based on plane stress conditions and a 3D nonlinear elasticity formulation, discretized through 8-noded Cosserat point elements. While the tension field approach only obtains the wrinkled/slack regions and at best a measure of the extent of wrinkliness, the 3D elasticity solution provides, in principle, the deformed shape of a wrinkled/slack membrane. However, since membranes barely resist compression, the discretized and linearized system equations via both approaches are ill-conditioned, and solutions could thus be sensitive to discretization errors as well as other sources of noise/imperfections. We propose a regularized, pseudo-dynamical recursion scheme that provides a sequence of updates which are almost insensitive to the regularizing term as well as to the time step size used for integrating the pseudo-dynamical form. This is borne out through several numerical examples wherein the relative performance of the proposed recursion scheme vis-a-vis a regularized Newton strategy is compared. The pseudo-time marching strategy, when implemented using 3D Cosserat point elements, also provides a computationally cheaper, numerically accurate and simpler alternative to geometrically exact shell theories for computing large deformations of membranes in the presence of wrinkles. (C) 2010 Elsevier Ltd. All rights reserved.
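The flavour of pseudo-time marching on an ill-conditioned system can be sketched generically: treat the unknown as evolving in artificial time with a small regularizing term and march to a steady state. The toy below applies implicit Euler updates to a deliberately ill-conditioned linear system; it is an assumption-laden illustration of the general idea, not the paper's membrane formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.logspace(0, -8, n)              # eigenvalues spanning 8 decades
K = U @ np.diag(eigs) @ U.T               # symmetric, badly ill-conditioned
f = K @ rng.standard_normal(n)            # right-hand side with known pre-image

eps, dt = 1e-6, 10.0                      # regularizing term, pseudo-time step
A = np.eye(n) / dt + K + eps * np.eye(n)  # implicit Euler update matrix
x = np.zeros(n)
for _ in range(1000):
    # One implicit Euler step of dx/dt = f - K x - eps * x.
    x = np.linalg.solve(A, f + x / dt)

print("residual norm:", np.linalg.norm(K @ x - f))
```

At steady state the iterate satisfies (K + eps*I) x = f, so the residual is bounded by the small regularizing term, consistent with the near-insensitivity to eps and dt that the abstract reports for the full scheme.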
Abstract:
The three-phase equilibrium between alloy, spinel solid solution and α-Al2O3 in the Fe-Co-Al-O system at 1873 K was fully characterized as a function of alloy composition using both experimental and computational methods. The equilibrium oxygen content of the liquid alloy was measured by suction sampling and inert gas fusion analysis. The oxygen potential corresponding to the three-phase equilibrium was determined by emf measurements on a solid-state galvanic cell incorporating (Y2O3)ThO2 as the solid electrolyte and Cr + Cr2O3 as the reference electrode. The equilibrium composition of the spinel phase formed at the interface between the alloy and the alumina crucible was measured by electron probe microanalysis (EPMA). The experimental results were compared with values computed using a thermodynamic model. The model used values for the standard Gibbs energies of formation of pure end-member spinels and the Gibbs energies of solution of gaseous oxygen in liquid Fe and Co available in the literature. The activity-composition relationship in the spinel solid solution was computed using a cation distribution model. The variation of the activity coefficient of oxygen with alloy composition in the Fe-Co-O system was estimated using both the quasichemical model of Jacob and Alcock and Wagner's model, along with the correlations of Chiang and Chang and of Kuo and Chang. The computed results for spinel composition and oxygen potential are in excellent agreement with the experimental data.
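As background for the emf measurement, the oxygen potential follows from the cell voltage through the Nernst relation for an oxygen-ion-conducting electrolyte, mu_O2(sample) = mu_O2(reference) - 4*F*E, with the sign depending on cell polarity. The sketch below shows the arithmetic only; the emf value and the reference potential are hypothetical placeholders, not the paper's data.

```python
import math

F = 96485.0             # Faraday constant, C/mol
R = 8.314               # gas constant, J/(mol K)
T = 1873.0              # experimental temperature, K

E = 0.150               # hypothetical measured emf, V (placeholder)
mu_O2_ref = -530000.0   # hypothetical reference oxygen potential, J/mol

# Four electrons are transferred per O2 molecule, hence the factor 4F.
mu_O2 = mu_O2_ref - 4 * F * E        # oxygen potential at the alloy electrode
p_O2 = math.exp(mu_O2 / (R * T))     # equivalent O2 partial pressure (atm)
print(f"mu_O2 = {mu_O2 / 1000:.1f} kJ/mol, p_O2 = {p_O2:.2e} atm")
```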
Abstract:
Background: Bacterial non-coding small RNAs (sRNAs) have attracted considerable attention due to their ubiquitous nature and contribution to numerous cellular processes including survival, adaptation and pathogenesis. Existing computational approaches for identifying bacterial sRNAs demonstrate varying levels of success, and there remains considerable room for improvement. Methodology/Principal Findings: Here we have proposed a transcriptional signal-based computational method to identify intergenic sRNA transcriptional units (TUs) in completely sequenced bacterial genomes. Our sRNAscanner tool uses position weight matrices derived from experimentally defined E. coli K-12 MG1655 sRNA promoter and rho-independent terminator signals to identify intergenic sRNA TUs through sliding-window based genome scans. Analysis of genomes representative of twelve species suggested that sRNAscanner demonstrates sensitivity equivalent to that of sRNAPredict2, the best-performing bioinformatics tool presently available. However, each algorithm yielded substantial numbers of known and uncharacterized hits that were unique to one tool or the other. sRNAscanner identified 118 novel putative intergenic sRNA genes in Salmonella enterica Typhimurium LT2, none of which were flagged by sRNAPredict2. Candidate sRNA locations were compared with available deep sequencing libraries derived from Hfq-co-immunoprecipitated RNA purified from a second Typhimurium strain (Sittka et al. (2008) PLoS Genetics 4: e1000163). Sixteen potential novel sRNAs computationally predicted and detected in deep sequencing libraries were selected for experimental validation by Northern analysis using total RNA isolated from bacteria grown under eleven different growth conditions. RNA bands of expected sizes were detected in Northern blots for six of the examined candidates. Furthermore, the 5'-ends of these six Northern-supported sRNA candidates were successfully mapped using 5'-RACE analysis. Conclusions/Significance: We have developed, computationally examined and experimentally validated the sRNAscanner algorithm. Data derived from this study have successfully identified six novel S. Typhimurium sRNA genes. In addition, the computational specificity analysis we have undertaken suggests that approximately 40% of sRNAscanner hits with a high cumulative sum of scores represent genuine, undiscovered sRNA genes. Collectively, these data strongly support the utility of sRNAscanner and offer a glimpse of its potential to reveal large numbers of sRNA genes that have to date defied identification. sRNAscanner is available from: http://bicmku.in:8081/sRNAscanner or http://cluster.physics.iisc.ernet.in/sRNAscanner/.
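The core operation of a PWM-based sliding-window scan of the kind sRNAscanner performs can be sketched in a few lines: score each window of the genome against a log-odds matrix and report windows above a threshold. The matrix, sequence, and threshold below are toy values, not the tool's actual promoter/terminator matrices.

```python
# Toy log-odds position weight matrix for a motif of length 4 (keys: base,
# values: per-position scores). All numbers are invented for illustration.
PWM = {
    "A": [1.2, -0.5, -1.0, 0.8],
    "C": [-0.7, 1.0, -0.5, -1.2],
    "G": [-1.0, -0.3, 1.5, -0.8],
    "T": [0.3, -1.1, -0.9, 1.1],
}
MOTIF_LEN = 4

def scan(sequence, threshold=2.0):
    """Slide the PWM along the sequence; report windows scoring above threshold."""
    hits = []
    for i in range(len(sequence) - MOTIF_LEN + 1):
        window = sequence[i:i + MOTIF_LEN]
        score = sum(PWM[base][j] for j, base in enumerate(window))
        if score >= threshold:
            hits.append((i, window, round(score, 2)))
    return hits

print(scan("TTACGAAGGTACGT"))   # [(2, 'ACGA', 4.5), (6, 'AGGT', 3.5), (10, 'ACGT', 4.8)]
```

The real tool combines separate promoter and terminator matrices, scans both strands, and ranks candidates by a cumulative sum of scores; this sketch shows only the single-matrix scoring step.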
Abstract:
Processes in complex chemical systems, such as macromolecules, electrolytes, interfaces, micelles and enzymes, can span several orders of magnitude in length and time scales. The length and time scales of processes occurring over this broad time and space window are frequently coupled to give rise to the control necessary to ensure specificity and the uniqueness of the chemical phenomena. A combination of experimental, theoretical and computational techniques that can address a multiplicity of length and time scales is required in order to understand and predict structure and dynamics in such complex systems. This review highlights recent experimental developments that allow one to probe structure and dynamics at increasingly smaller length and time scales. The key theoretical approaches and computational strategies for integrating information across time-scales are discussed. The application of these ideas to understand phenomena in various areas, ranging from materials science to biology, is illustrated in the context of current developments in the areas of liquids and solvation, protein folding and aggregation and phase transitions, nucleation and self-assembly.