2 results for Simplex. CPLEXR. Parallel Efficiency. Parallel Scalability. Linear Programming
in Helda - Digital Repository of University of Helsinki
Abstract:
Gene mapping is a systematic search for genes that affect observable characteristics of an organism. In this thesis we offer computational tools to improve the efficiency of (disease) gene-mapping efforts. In the first part of the thesis we propose an efficient simulation procedure for generating realistic genetic data from isolated populations. Simulated data is useful for evaluating hypothesised gene-mapping study designs and computational analysis tools. As an example of such evaluation, we demonstrate how a population-based study design can be a powerful alternative to traditional family-based designs in association-based gene-mapping projects. In the second part of the thesis we consider the prioritisation of a (typically large) set of putative disease-associated genes acquired from an initial gene-mapping analysis. Prioritisation is necessary in order to focus on the most promising candidates. We show how to harness current biomedical knowledge for the prioritisation task by integrating various publicly available biological databases into a weighted biological graph. We then demonstrate how to find and evaluate connections between entities, such as genes and diseases, in this unified schema using graph mining techniques. Finally, in the last part of the thesis, we define the concept of a reliable subgraph and the corresponding subgraph extraction problem. Reliable subgraphs concisely describe strong and independent connections between two given vertices in a random graph, and hence they are especially useful for visualising such connections. We propose novel algorithms for extracting reliable subgraphs from large random graphs. The efficiency and scalability of the proposed graph mining methods are backed by extensive experiments on real data. While our application focus is on genetics, the concepts and algorithms can be applied to other domains as well.
We demonstrate this generality by considering coauthor graphs in addition to biological graphs in the experiments.
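The two-terminal reliability that underlies the reliable-subgraph concept can be illustrated with a small Monte Carlo sketch. This is not the thesis's own algorithm; the graph, edge probabilities, and function names below are invented for illustration. Each edge of a random graph is assumed to survive independently with a given probability, and we estimate how often the two query vertices remain connected.

```python
import random
from collections import deque

def connected(present_edges, n, s, t):
    """Breadth-first search: is t reachable from s over the surviving edges?"""
    adj = {v: [] for v in range(n)}
    for u, v in present_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

def reliability(n, edges, s, t, trials=20000, seed=0):
    """Monte Carlo estimate of P(s and t connected) when each edge
    (u, v, p) is present independently with probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        present = [(u, v) for u, v, p in edges if rng.random() < p]
        if connected(present, n, s, t):
            hits += 1
    return hits / trials

# Toy random graph: two independent two-edge paths from vertex 0 to vertex 3.
edges = [(0, 1, 0.9), (1, 3, 0.9), (0, 2, 0.8), (2, 3, 0.8)]
est = reliability(4, edges, 0, 3)
```

For this toy graph the exact reliability is 1 - (1 - 0.9*0.9)(1 - 0.8*0.8) = 0.9316, which the estimate should approach as the number of trials grows; a reliable subgraph would aim to preserve exactly such independent, high-probability paths.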
Abstract:
A linear optimization model was used to calculate seven wood procurement scenarios for the years 1990, 2000 and 2010. Productivity and cost functions for seven cutting methods, five terrain transport methods, three long-distance transport methods, and various work supervision and scaling methods were derived from available work study reports. All methods are based on the Nordic cut-to-length system. Finland was divided into three parts to describe the harvesting conditions, and twenty imaginary wood processing points with their wood procurement areas were created for these areas. The procurement systems, which consist of the harvesting conditions and work productivity functions, were described as a simulation model. In the LP model, the wood procurement system has to fulfil the volume and wood assortment requirements of the processing points while minimizing the procurement cost. The model consists of 862 variables and 560 constraints. The results show that it is economical to increase the share of mechanized work in harvesting, and that cost increment alternatives have only a small effect on the profitability of manual work. The areas of later thinnings and of seed tree and shelterwood cuttings increase at the expense of first thinnings. In mechanized work, one method, the 10-tonne single-grip harvester with a forwarder, is gaining an advantage over the other methods; the working hours of the forwarder are decreasing, in contrast to those of the harvester. There is only a small need to increase the number of harvesters and trucks, or their drivers, from today's level. Quite large fluctuations in the procurement level and cost can be handled with a constant number of machines by varying the number of seasonal workers and by running the machines in two shifts. This is possible provided that some environmental problems of large-scale summertime harvesting can be solved.
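The cost-minimizing LP described above (fulfil processing-point volume requirements at minimum procurement cost) can be sketched on a deliberately tiny instance. The real model has 862 variables and 560 constraints; the two-area, one-mill example below, with invented unit costs and capacities, only shows the structure, using SciPy's `linprog`.

```python
from scipy.optimize import linprog

# Hypothetical toy instance, not data from the study:
# two harvesting areas supply one processing point (mill).
cost = [12.0, 15.0]     # procurement cost per m3 from areas 1 and 2
demand = 100.0          # mill volume requirement, m3

# Demand constraint x1 + x2 >= 100 rewritten for linprog's
# A_ub @ x <= b_ub convention as -x1 - x2 <= -100.
res = linprog(
    c=cost,
    A_ub=[[-1.0, -1.0]],
    b_ub=[-demand],
    bounds=[(0.0, 60.0), (0.0, 80.0)],  # per-area supply capacities
    method="highs",
)
# The cheaper area is used to capacity first; the remainder
# comes from area 2: x = [60, 40], objective = 12*60 + 15*40 = 1320.
```

The full model adds wood assortment balances, transport and supervision variables, and one demand block per processing point, but each constraint keeps this same linear supply/demand form.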