947 results for Transport network optimization
Abstract:
In the near future it may be economically profitable to invest in the use of wood-based biomass in steel production. A prerequisite is that the fuel properties of the biomass are improved through a form of charring, specifically slow pyrolysis. Forest residues and similar wood waste are suitable biomass sources. The benefit of using biomass is that it partly replaces the fossil fuels traditionally used, reducing fossil CO2 emissions and thereby the contribution to global warming; this requires, however, that replanting is carried out to recapture the CO2 released during the harvesting, transport, processing and combustion of the biomass. An investment in a pyrolysis unit integrated into a steel plant can be profitable if the emission tax exceeds 20 € per tonne of CO2, assuming a biomass cost of 40 € per tonne of dry matter. This is not the case at present, and in Finland these charges depend on political decisions at the European level. It could nevertheless be of political interest to support the use of biomass in the steel industry, since this would create new jobs in the steel industry itself as well as in the forest industry, and possibly also in the chemical industry, depending on how the resulting pyrolysis products (charcoal, gas and bio-oils) are utilized. Given that nearly one fifth of all industrial CO2 emissions come from the steel industry, it is clear that more environmentally friendly alternatives will be required in the future.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and to gather all the information relevant to their research. In response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged, striving to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale, scanning the whole publicly available biomedical literature to extract and aggregate the information found within it, while automatically normalizing the variability of natural language statements. Among the various tasks, biomedical event extraction has recently received much attention within the BioNLP community. Biomedical event extraction is the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009–2013 series of BioNLP Shared Tasks on Event Extraction gave rise to a number of event extraction systems, several of which have been applied at large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on machine-learning approaches and are trained on narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events that are spotted by end-users. This thesis proposes a novel post-processing approach, combining supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing the overall credibility of those databases. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes and gene products. We cast the hypothesis generation problem as supervised network topology prediction, i.e., predicting new edges in the network, as well as the types and directions of these edges, using a set of features that can be extracted from large biomedical event networks. Routine machine-learning evaluation results, as well as manual evaluation, suggest that the problem is indeed learnable. This work won the Best Paper Award at the 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
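As a concrete (purely illustrative) reading of the edge-prediction formulation, the following Python sketch builds a toy interaction graph, derives simple topological features for candidate gene pairs, and trains an off-the-shelf classifier. The graph, feature set and names are assumptions for illustration, not the thesis's actual pipeline.

```python
# Hypothetical sketch: hypothesis generation as supervised edge prediction.
# The graph, features and labels are toy stand-ins for large event networks.
import networkx as nx
from sklearn.linear_model import LogisticRegression

G = nx.Graph()
G.add_edges_from([("geneA", "geneB"), ("geneB", "geneC"),
                  ("geneA", "geneD"), ("geneD", "geneC")])

def pair_features(g, u, v):
    """Simple topological features for a candidate edge (u, v)."""
    common = len(list(nx.common_neighbors(g, u, v)))
    return [common, g.degree(u), g.degree(v)]

# Toy training set: known interacting pairs (1) and a non-interacting pair (0).
pairs  = [("geneA", "geneB"), ("geneB", "geneC"), ("geneA", "geneD"), ("geneB", "geneD")]
labels = [1, 1, 1, 0]
X = [pair_features(G, u, v) for u, v in pairs]

clf = LogisticRegression().fit(X, labels)
# Score an unseen pair: probability that an interaction edge exists.
print(clf.predict_proba([pair_features(G, "geneA", "geneC")])[0][1])
```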
Abstract:
This thesis is a contribution to the modeling, planning and optimization of transport for supplying primary processing industries with forest wood. In this field, climatic hazards (windthrow of timber by storms), sanitary hazards (bacterial and fungal attacks on wood) and commercial hazards (variability and growing demands of markets) push the various actors of the sector (forest contractors and operators, hauliers) to rethink the organization of the supply logistics chain, in order to improve the quality of service (matching supply and demand) and to reduce costs. The main objective of this thesis was to propose a management model that improves the performance of forest transport while respecting the constraints and practices of the sector. The results establish a hierarchical planning approach for transport activities with two decision levels, tactical and operational. At the tactical level, a multi-period optimization fulfills orders while minimizing the overall transport activity, under an aggregate capacity constraint on the available transport resources. This level makes it possible to implement load-smoothing policies and to organize subcontracting or partnerships between transport actors. At the operational level, the tactical plans allocated to each haulier are disaggregated to allow optimization of fleet routing, under the physical capacity constraints of these fleets. The optimization models at each level are formulated as mixed-integer linear programs with binary variables. The applicability of the models was tested on an industrial data set from the Aquitaine region and showed significant improvements in the utilization of transport capacity compared with current practice. The decision models were designed to adapt to any organizational context, with or without partnerships: the production of the tactical plan is generic and makes no assumption about the organization, which is taken into account in a second step, at the level of the operational optimization of each actor's transport plan.
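To make the tactical level concrete, here is a minimal Python sketch (using the PuLP library) of a multi-period plan minimizing transport activity under aggregate capacity. The data and the purely linear formulation are illustrative simplifications; the thesis's actual models also include binary variables.

```python
# Illustrative sketch (not the thesis's actual model): a tactical multi-period
# transport plan minimizing total transport cost subject to mill demand,
# aggregate carrier capacity, and site supply. All numbers are invented.
import pulp

periods, sites, mills = range(2), ["s1", "s2"], ["m1"]
cost     = {("s1", "m1"): 3.0, ("s2", "m1"): 5.0}   # cost per tonne hauled
supply   = {"s1": 40, "s2": 60}                      # tonnes available per site
demand   = {("m1", 0): 30, ("m1", 1): 50}            # tonnes ordered per period
capacity = 60                                        # aggregate tonnes per period

prob = pulp.LpProblem("tactical_plan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("flow", (sites, mills, periods), lowBound=0)

prob += pulp.lpSum(cost[s, m] * x[s][m][t]
                   for s in sites for m in mills for t in periods)
for m in mills:                                      # meet each period's orders
    for t in periods:
        prob += pulp.lpSum(x[s][m][t] for s in sites) >= demand[m, t]
for t in periods:                                    # aggregate fleet capacity
    prob += pulp.lpSum(x[s][m][t] for s in sites for m in mills) <= capacity
for s in sites:                                      # wood available at sites
    prob += pulp.lpSum(x[s][m][t] for m in mills for t in periods) <= supply[s]

prob.solve()
print({(s, m, t): x[s][m][t].value()
       for s in sites for m in mills for t in periods})
```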
Abstract:
Two ideas taken from Bayesian optimization and classifier systems are presented for personnel scheduling based on choosing a suitable scheduling rule from a set for each person's assignment. Unlike our previous work using genetic algorithms, in which learning is implicit, the learning in both approaches is explicit, i.e., we are able to identify building blocks directly. To achieve this, the Bayesian optimization algorithm builds a Bayesian network of the joint probability distribution of the rules used to construct solutions, while the adapted classifier system assigns each rule a strength value that is constantly updated according to its usefulness in the current situation. Computational results from 52 real data instances of nurse scheduling demonstrate the success of both approaches. It is also suggested that the learning mechanisms in the proposed approaches might be suitable for other scheduling problems.
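A minimal sketch of the classifier-system half of the idea, assuming a toy set of rule names and a generic reinforcement-style strength update (both are illustrative, not taken from the paper):

```python
# Hedged sketch of the classifier-system idea: each scheduling rule carries a
# strength that is nudged toward its observed usefulness whenever it is chosen.
import random

rules = {"cheapest_shift": 1.0, "cover_gaps_first": 1.0, "respect_preferences": 1.0}

def pick_rule():
    # Roulette-wheel selection proportional to current strength.
    total = sum(rules.values())
    r, acc = random.uniform(0, total), 0.0
    for name, strength in rules.items():
        acc += strength
        if r <= acc:
            return name
    return name

def update_strength(rule, usefulness, rate=0.1):
    # Move the rule's strength toward the usefulness it just demonstrated.
    rules[rule] += rate * (usefulness - rules[rule])

for person in range(5):                    # one rule choice per assignment
    rule = pick_rule()
    usefulness = random.random()           # stand-in for schedule-quality feedback
    update_strength(rule, usefulness)
print(rules)
```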
Abstract:
Permeability of a rock is a dynamic property that varies spatially and temporally. Fractures provide the most efficient channels for fluid flow and thus directly contribute to the permeability of the system. Fractures usually form as a result of a combination of tectonic stresses, gravity (i.e. lithostatic pressure) and fluid pressures. High pressure gradients alone can cause fracturing, a process termed hydrofracturing, which can determine caprock (seal) stability or reservoir integrity. Fluids also transport mass and heat, and are responsible for the formation of veins by precipitating minerals within open fractures. Veining (healing) thus directly influences the rock's permeability. Upon deformation these closed fractures (veins) can refracture, and the cycle starts again. This fracturing-healing-refracturing cycle is a fundamental part of studying the deformation dynamics and permeability evolution of rock systems. Such study is generally accompanied by fracture network characterization focusing on the network topology that determines network connectivity. Fracture characterization allows quantitative and qualitative data on fractures to be acquired and forms an important part of reservoir modeling. This thesis highlights the importance of fracture healing and of veins' mechanical properties for the deformation dynamics. It shows that permeability varies spatially and temporally, and that healed systems (veined rocks) should not be treated as fractured systems (rocks without veins). Field observations also demonstrate the influence of contrasting mechanical properties, in addition to the complexities of vein microstructures that can form in low-porosity, low-permeability layered sequences. The thesis also presents graph theory as a characterization method for obtaining statistical measures of evolving network connectivity, and proposes which properties a good reservoir should have in order to exhibit potentially large permeability and robustness against healing. The results presented in the thesis have applications in hydrocarbon and geothermal reservoir exploration, the mining industry, underground waste disposal, CO2 injection and groundwater modeling.
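As an illustration of the graph-theoretic characterization mentioned above, the following sketch (assuming networkx; the network itself is a toy) classifies fracture-intersection nodes by degree — isolated tips (I), abutments (Y), crossings (X) — and reports simple connectivity measures:

```python
# Illustrative sketch of graph-based fracture-network characterization:
# intersections become nodes, fracture segments become edges, and node
# degrees separate isolated tips (I), abutments (Y) and crossings (X).
import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (2, 4), (4, 5), (4, 6), (4, 7), (8, 9)])

degrees = [d for _, d in G.degree()]
i_nodes = sum(1 for d in degrees if d == 1)   # fracture tips
y_nodes = sum(1 for d in degrees if d == 3)   # abutting intersections
x_nodes = sum(1 for d in degrees if d == 4)   # crossing intersections

print("I/Y/X counts:", i_nodes, y_nodes, x_nodes)
print("connected components:", nx.number_connected_components(G))
print("largest cluster fraction:",
      max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes())
```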
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink.
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
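The pruning process relies on the density measure defined above. A toy sketch, assuming networkx and an invented weighted graph, showing how the density of a candidate subgraph compares with the input graph's:

```python
# Toy illustration of the density measure behind the pruning process:
# density = total edge cost / number of vertices. A pruning step prefers
# subgraphs whose density is comparable to (or better than) the input's.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2.0), ("b", "c", 1.0),
                           ("c", "a", 2.0), ("c", "d", 4.0)])

def density(g):
    return g.size(weight="weight") / g.number_of_nodes()

sub = G.subgraph(["a", "b", "c"])       # a candidate 2-connected piece
print(density(G), density(sub))         # 2.25 vs ~1.67: the subgraph is denser-priced per node
```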
Abstract:
Three types of phospholipases, phospholipase D, secreted phospholipase A2, and patatin-related phospholipase A (pPLA), have functions in auxin signal transduction. Their potential linkage to the auxin receptors ABP1 or TIR1, their rapid activation or post-translational activation mechanisms, and the downstream functions they regulate are reviewed and discussed. Only for pPLA are all aspects known in at least some detail. Evidence is gathered that all these signal reactions are located in the cytosol and seem to converge on the regulation of PIN-catalyzed auxin efflux transport proteins. As a consequence, the auxin concentration in the nucleus is also affected, and this regulates the E3 activity of the auxin receptor TIR1. We showed that ABP1, PIN2, and pPLA, all outside the nucleus, have an impact on the regulation of auxin-induced genes within 30 min. We propose that regulation of PIN protein activities and of auxin efflux transport is the means by which ABP1 and TIR1 activity are coordinated, and that no physical contact between components of the ABP1-triggered cytosolic signaling pathways and the TIR1-triggered nuclear pathways is necessary for this.
Abstract:
Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem that chooses suitable scheduling rules from a set for each nurse's assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network from the set of promising solutions and samples this network to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.
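A much-simplified sketch of the sampling loop: a full BOA learns a Bayesian network over rule choices, whereas the toy below keeps only per-position marginal frequencies (a UMDA-style simplification); rule names and the fitness function are illustrative.

```python
# Simplified estimation-of-distribution sketch: estimate rule frequencies from
# promising solutions, then sample new candidate schedules from the model.
import random

RULES = ["r1", "r2", "r3"]
N_NURSES = 4

def quality(solution):
    return solution.count("r2")            # toy fitness: prefer rule r2

population = [[random.choice(RULES) for _ in range(N_NURSES)] for _ in range(20)]
for generation in range(10):
    promising = sorted(population, key=quality, reverse=True)[:10]
    # Per nurse position, count how often each rule appears among the best
    # (+1 is Laplace smoothing so no rule's probability collapses to zero).
    model = [{r: sum(s[i] == r for s in promising) + 1 for r in RULES}
             for i in range(N_NURSES)]
    # Sample new candidate solutions from the model.
    population = [[random.choices(RULES, weights=[m[r] for r in RULES])[0]
                   for m in model] for _ in range(20)]

print(max(population, key=quality))
```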
Abstract:
Over the last decade, the success of social networks has significantly reshaped how people consume information. Recommendation of content based on user profiles is well received. However, as users become predominantly mobile, little has been done to consider the impact of the wireless environment, especially its capacity constraints and changing channel conditions. In this dissertation, we investigate a centralized wireless content delivery system that aims to optimize overall user experience given the capacity constraints of the wireless networks, by deciding which contents to deliver, when and how. We propose a scheduling framework that incorporates content-based reward and deliverability. Our approach exploits the broadcast nature of wireless communication and the social nature of content through multicasting and precaching. Results indicate that this novel joint optimization approach outperforms existing layered systems that separate recommendation and delivery, especially when the wireless network is operating at maximum capacity. By utilizing a limited number of transmission modes, we significantly reduce the complexity of the optimization. We also introduce the design of a hybrid system that handles transmissions both for system-recommended contents ('push') and for active user requests ('pull'). Further, we extend the joint optimization framework to a wireless infrastructure with multiple base stations. The problem becomes much harder in that there are many more system configurations, including but not limited to power allocation and how resources are shared among the base stations ('out-of-band', in which base stations transmit on dedicated spectrum resources and thus do not interfere; and 'in-band', in which they share the spectrum and need to mitigate interference). We propose a scalable two-phase scheduling framework: 1) each base station obtains delivery decisions and resource allocations individually; 2) the system consolidates these decisions and allocations, reducing redundant transmissions. Additionally, if social network applications could provide predictions of how social contents disseminate, the wireless networks could schedule transmissions accordingly and significantly improve dissemination performance by reducing delivery delay. We propose a novel method utilizing: 1) hybrid systems to handle active disseminating requests; and 2) predictions of dissemination dynamics from the social network applications. This method can mitigate the performance degradation of content dissemination caused by wireless delivery delay. Results indicate that our proposed system design is both efficient and easy to implement.
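At its core, the single-cell delivery decision resembles a reward-versus-capacity selection problem. The following toy sketch (contents, costs and rewards are invented, and an exhaustive knapsack search stands in for the dissertation's actual optimization) illustrates that trade-off:

```python
# Hedged sketch of the core trade-off: pick contents to deliver under a
# capacity budget, where a multicast item earns the summed reward of all
# interested users at one transmission cost. Numbers are illustrative.
contents = {            # content -> (transmission cost, reward summed over users)
    "newsclip": (3, 9.0),
    "podcast":  (5, 7.5),
    "trailer":  (2, 4.0),
    "lecture":  (6, 8.0),
}
CAPACITY = 8

def best_schedule(items, capacity):
    """Exact 0/1 knapsack over content items (fine for a small instance)."""
    names = list(items)
    best, best_set = 0.0, []
    for mask in range(1 << len(names)):
        chosen = [names[i] for i in range(len(names)) if mask >> i & 1]
        cost = sum(items[c][0] for c in chosen)
        reward = sum(items[c][1] for c in chosen)
        if cost <= capacity and reward > best:
            best, best_set = reward, chosen
    return best, best_set

print(best_schedule(contents, CAPACITY))   # (16.5, ['newsclip', 'podcast'])
```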
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of their correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into the usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
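The dynamic programming step referred to above solves a logsum Bellman equation; in the standard recursive-logit form (given here as background, not quoted from the thesis), the value function V(k) of state k and the resulting choice probabilities are:

```latex
% Bellman (logsum) equation solved when computing choice probabilities in a
% dynamic discrete choice / recursive logit model (standard textbook form):
% V(k) is the expected maximum utility from state k onward, A(k) the actions
% available at k, v(a|k) the instantaneous utility, and \mu the scale.
V(k) = \mu \ln \sum_{a \in A(k)} \exp\!\Big( \tfrac{1}{\mu}\big( v(a \mid k) + V(a) \big) \Big),
\qquad
P(a \mid k) = \frac{\exp\!\big( \tfrac{1}{\mu}( v(a \mid k) + V(a) )\big)}
                   {\sum_{a' \in A(k)} \exp\!\big( \tfrac{1}{\mu}( v(a' \mid k) + V(a') )\big)}.
```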
Abstract:
Activity of 7-ethoxyresorufin-O-deethylase (EROD) in fish is certainly the best-studied biomarker of exposure applied in the field to evaluate biological effects of contamination in the marine environment. Since 1991, a feasibility study for a monitoring network using this biomarker of exposure has been conducted along the French coasts. Using data obtained during several cruises, this study aims to determine the number of fish required to detect a given difference between two mean EROD activities, i.e. to achieve an a priori fixed statistical power (1-beta) for a given significance level (alpha), given variance estimates and a projected ratio of unequal sample sizes (k). Mean EROD activity and its standard error were estimated at each of 82 sampling stations. The inter-individual variance component was dominant in estimating the variance of mean EROD activity. The influences of alpha, beta, k and variability on sample sizes are illustrated and discussed in terms of costs. In particular, sample sizes do not have to be equal, especially if such a requirement would lead to a significant cost in sampling extra material. Finally, the feasibility of long-term monitoring is discussed.
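The sample-size question can be illustrated with the standard two-sample normal-approximation formula for unequal group sizes (a textbook formula, with invented inputs; scipy is assumed):

```python
# Hedged sketch: smallest group sizes n1 and n2 = k * n1 needed to detect a
# mean difference `delta` between two groups with common standard deviation
# `sigma`, at significance level alpha and power 1 - beta. The inputs below
# are illustrative, not values from the study.
import math
from scipy.stats import norm

def sample_sizes(delta, sigma, alpha=0.05, power=0.80, k=1.0):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n1 = (1 + 1 / k) * (z * sigma / delta) ** 2
    return math.ceil(n1), math.ceil(k * n1)

# e.g. detect a 0.5-sd difference with 80% power and groups in a 1:2 ratio
print(sample_sizes(delta=0.5, sigma=1.0, k=2.0))   # -> (48, 95)
```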
Abstract:
The majority of research work carried out in the field of Operations Research uses methods and algorithms to optimize the pick-up and delivery problem. Most studies aim to solve the vehicle routing problem, to accommodate optimum delivery orders, vehicles, etc. This paper focuses on a green logistics approach, in which the existing public transport infrastructure of a city is used for the delivery of small and medium-sized packaged goods, thus helping to reduce urban congestion and greenhouse gas emissions. A study was carried out to investigate the feasibility of the proposed multi-agent-based simulation model with respect to cost, time and energy efficiency. A multimodal Dijkstra shortest-path algorithm and Nested Monte Carlo Search are employed in a two-phase algorithmic approach used to generate a time-based cost matrix. The quality of the tour depends on the efficiency of the search algorithm implemented for plan generation and route planning. The results reveal a definite advantage of using public transportation over existing delivery approaches in terms of energy efficiency.
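The first phase's cost-matrix generation can be sketched with a plain Dijkstra pass over a toy time-weighted multimodal graph (nodes, modes and minutes are invented for illustration):

```python
# Illustrative sketch of the first phase: Dijkstra shortest paths on a
# time-weighted multimodal graph, filling a travel-time cost matrix.
import heapq

graph = {   # node -> list of (neighbour, minutes, mode)
    "depot": [("stopA", 7, "walk"), ("stopB", 4, "bus")],
    "stopA": [("stopC", 3, "tram")],
    "stopB": [("stopC", 6, "bus"), ("stopA", 2, "walk")],
    "stopC": [],
}

def dijkstra(source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w, _mode in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

cost_matrix = {node: dijkstra(node) for node in graph}
print(cost_matrix["depot"])   # {'depot': 0, 'stopA': 6, 'stopB': 4, 'stopC': 9}
```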
Abstract:
With this master's thesis, the aim is to contribute to a better understanding of the complex network of technique and knowledge transfers that took place within the field of civil engineering in the 19th and 20th centuries, namely on the railways. In Portugal, the take-up of railways was the subject of a wide-ranging debate, above all political, coinciding with growing instability on the Portuguese political scene and a phase of economic frailty. It is in this context that the construction of the South and Southeast Line took place (followed, later on, by its extension to Vila Real de Santo António and by the construction of the Portimão branch, which would reach Lagos). This enterprise is, as we intend to prove in this master's thesis, a clear example of the Portuguese reality concerning the implementation of this transport network, enabling us to understand and to get to know those who intervened in the construction process (the engineers and the companies, among others) as well as to determine the influences and technique transfers that took place.