885 results for Fixed-priority scheduling
Abstract:
This work contains a series of studies on the optimization of three real-world scheduling problems: school timetabling, sports scheduling and staff scheduling. These challenging problems are solved to customer satisfaction using the proposed PEAST algorithm; customer satisfaction here refers to the fact that implementations of the algorithm are in industrial use. The PEAST algorithm is a product of long-term research and development. Its first version was introduced in 1998, and this thesis is the result of five years of further development of the algorithm. One of the most valuable characteristics of the algorithm has proven to be its ability to solve a wide range of scheduling problems, and it can likely be tuned to tackle a range of other combinatorial problems as well. The algorithm uses features from numerous different metaheuristics, which is the main reason for its success. In addition, the implementation of the algorithm is fast enough for real-world use.
Abstract:
This study evaluated the start-up procedures of an anaerobic treatment system with three horizontal anaerobic reactors (R1, R2 and R3), installed in series, with a volume of 1.2 L each. R1 had a sludge blanket, while R2 and R3 had support media of bamboo and coconut fiber, respectively. The influent was synthetic wastewater from mechanical pulping of the coffee fruit by the wet method, with a mean total chemical oxygen demand (CODtotal) of 16,003 mg L-1. The hydraulic retention time (HRT) in each reactor was 30 h. The volumetric organic loading (VOL) applied in R1 varied from 8.9 to 25.0 g of CODtotal (L d)-1. The mean removal efficiencies of CODtotal varied from 43 to 97% in the treatment system (R1+R2+R3), stabilizing above 80% after 30 days of operation. The mean methane content of the biogas was 70 to 76%, the mean volumetric production was 1.7 L CH4 (L reactor d)-1 in the system, and the highest conversions were around 0.20 L CH4 (g CODremoved)-1 in R1 and R2. The mean pH values in the effluents ranged from 6.8 to 8.3, and the mean values of total volatile acids remained below 200 mg L-1 in the effluent of R3. The total phenol concentrations of the influent ranged from 45 to 278 mg L-1, and the mean removal efficiency was 52%. The start-up of the anaerobic treatment system occurred after 30 days of operation as a result of inoculation with anaerobic sludge with an active microbiota.
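The reported loading figures can be cross-checked from the stated reactor volume and retention time. The following is a minimal sketch assuming steady flow; the helper name is ours, and only the abstract's own numbers (V = 1.2 L, HRT = 30 h, mean COD = 16,003 mg/L) are used:

```python
# Cross-check of the reported figures, assuming steady flow.
V = 1.2        # reactor working volume, L (from the abstract)
HRT = 30.0     # hydraulic retention time, h (from the abstract)
Q_d = V / HRT * 24.0   # implied flow rate, L per day

def volumetric_organic_loading(cod_mg_per_l, flow_l_per_d, volume_l):
    """Volumetric organic loading, g COD per litre of reactor per day."""
    return cod_mg_per_l / 1000.0 * flow_l_per_d / volume_l

# At the mean influent COD of 16,003 mg/L the implied loading is
# about 12.8 g CODtotal (L d)-1, inside the reported 8.9-25.0 range.
vol = volumetric_organic_loading(16003, Q_d, V)
print(round(Q_d, 2), round(vol, 1))  # 0.96 12.8
```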
Abstract:
Attempting to couple waste treatment with the production of clean and renewable energy, this research evaluated the biological production of hydrogen using wastewater from the cassava starch industry, generated during the extraction and purification of starch. The experiment was carried out in a continuous anaerobic reactor with a working volume of 3 L, with bamboo stems as the support medium. The system was operated at a temperature of 36°C, an initial pH of 6.0 and under variations of organic load. The highest hydrogen production rate, 1.1 L d-1 L-1, was obtained with an organic loading rate of 35 g L-1 d-1 in terms of total sugar content and a hydraulic retention time of 3 h, with a prevalence of butyric and acetic acids as final products of the fermentation process. Low C/N ratios contributed to excessive growth of the biomass, causing a reduction of up to 35% in hydrogen production, low percentages of H2 and high concentrations of CO2 in the biogas.
Abstract:
The maintenance of electric distribution networks is a topical question for distribution system operators because of the increasing significance of failure costs. In this dissertation the maintenance practices of distribution system operators are analyzed, and a theory for scheduling maintenance activities and reinvestments in distribution components is created. The scheduling is based on the deterioration of components and the increasing failure rates due to aging. Dynamic programming is used to solve the maintenance problem caused by the increasing failure rates of the network. Other drivers of network maintenance, such as environmental and regulatory reasons, are outside the scope of this thesis. Likewise, tree trimming of line corridors and major disturbances of the network are not included in the problem optimized here. Four dynamic programming models are presented and tested; the models are implemented in VBA. Two different kinds of test networks are used for testing. Because electric distribution system operators prefer to work with larger component groups, optimal timing for component groups is also analyzed. A maintenance software package is created to apply the presented theories in practice, and an overview of the program is given.
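The age-based trade-off the abstract describes can be illustrated with a small dynamic program: at each year, either keep an aging component (paying a growing expected failure cost) or replace it. This is a sketch of the general technique only, not the thesis's actual VBA models; the cost figures and the quadratic aging law are assumptions:

```python
# Illustrative dynamic program: replace-or-keep over a planning horizon.
# State = component age at the start of a year; failure rate grows with age.

def plan_replacements(horizon, failure_cost, replace_cost, rate):
    """rate(age) -> expected failures per year at a given age.
    Returns the minimal expected cost and a replace/keep decision table."""
    # best[t][age]: minimal cost from year t onward for a component of 'age'
    best = [[0.0] * (horizon + 1) for _ in range(horizon + 1)]
    decision = [[False] * (horizon + 1) for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for age in range(t + 1):
            # keep: pay expected failure cost for one more year of aging
            keep = failure_cost * rate(age + 1) + best[t + 1][age + 1]
            # replace: pay for a new component, which then ages from 1
            replace = replace_cost + failure_cost * rate(1) + best[t + 1][1]
            decision[t][age] = replace < keep
            best[t][age] = min(keep, replace)
    return best[0][0], decision

# Assumed numbers: 500 per failure, 2000 per replacement, quadratic aging.
cost, dec = plan_replacements(
    horizon=20, failure_cost=500.0, replace_cost=2000.0,
    rate=lambda age: 0.02 * age**2)
```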
Abstract:
Formal methods provide a means of reasoning about computer programs in order to prove correctness criteria. One subtype of formal methods is based on the weakest-precondition predicate transformer semantics and uses guarded commands as the basic modelling construct. Examples of such formalisms are Action Systems and Event-B. Guarded commands can intuitively be understood as actions that may be triggered when an associated guard condition holds. Guarded commands whose guards hold are nondeterministically chosen for execution, but no further control flow is present by default. Such a modelling approach is convenient for proving correctness, and the Refinement Calculus allows for a stepwise development method. It also has a parallel interpretation facilitating the development of concurrent software, and it is suitable for describing event-driven scenarios. However, for many application areas, the execution paradigm traditionally used comprises more explicit control flow, which constitutes an obstacle to using the above-mentioned formal methods. In this thesis, we study how guarded-command-based modelling approaches can be conveniently and efficiently scheduled in different scenarios. We first focus on the modelling of trust for transactions in a social networking setting. Due to the event-based nature of the scenario, the use of guarded commands turns out to be relatively straightforward. We continue by studying the modelling of concurrent software, with a particular focus on compute-intensive scenarios. We go from theoretical considerations to the feasibility of implementation by evaluating the performance and scalability of executing a case study model in parallel using automatic scheduling performed by a dedicated scheduler. Finally, we propose a more explicit and non-centralised approach in which the flow of each task is controlled by a schedule of its own.
The schedules are expressed in a dedicated scheduling language, and patterns assist the developer in proving correctness of the scheduled model with respect to the original one.
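The execution model described above (actions with guards, an enabled action chosen nondeterministically, no other control flow) can be sketched in a few lines. This is our own minimal illustration, not Action Systems or Event-B syntax; random choice stands in for nondeterminism:

```python
# Minimal guarded-command interpreter: repeatedly pick any enabled
# action and apply its effect; terminate when no guard holds.
import random

def run(state, actions, seed=0):
    rng = random.Random(seed)
    while True:
        enabled = [effect for guard, effect in actions if guard(state)]
        if not enabled:          # no guard holds: the system terminates
            return state
        rng.choice(enabled)(state)

# Example: Euclid's gcd by repeated subtraction as two guarded commands.
actions = [
    (lambda s: s["x"] > s["y"], lambda s: s.update(x=s["x"] - s["y"])),
    (lambda s: s["y"] > s["x"], lambda s: s.update(y=s["y"] - s["x"])),
]
final = run({"x": 18, "y": 12}, actions)
print(final)  # {'x': 6, 'y': 6}
```

When both coordinates are equal, neither guard holds and execution stops, mirroring the default absence of further control flow in the formalisms mentioned above.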
Abstract:
The diagnosis of Mycoplasma hyopneumoniae infection is often performed through histopathology, immunohistochemistry (IHC) and polymerase chain reaction (PCR), or a combination of these techniques. PCR can be performed on samples preserved by several conservation methods, including swabs, frozen tissue or formalin-fixed and paraffin-embedded (FFPE) tissue. However, the formalin fixation process often inhibits DNA amplification. To evaluate whether M. hyopneumoniae DNA could be recovered from FFPE tissues, 15 lungs with cranioventral consolidation lesions were collected in a slaughterhouse from swine bred in herds with respiratory disease. Bronchial swabs and fresh lung tissue were collected, and a fragment of the corresponding lung section was placed in neutral buffered formalin for 48 hours. A PCR assay was performed to compare FFPE tissue samples with samples that were only refrigerated (bronchial swabs) or frozen (tissue pieces). M. hyopneumoniae was detected by PCR in all 15 samples of the swab and frozen tissue, while it was detected in only 11 of the 15 FFPE samples. Histological features of M. hyopneumoniae infection were present in 11 cases, and 7 of these samples stained positive in IHC. Concordance between the histological features and the detection results was observed in 13 of the FFPE tissue samples. PCR was the most sensitive technique. The comparison of different sample conservation methods indicated that it is possible to detect M. hyopneumoniae in FFPE tissue. It is important to conduct further research using archived material because the efficiency of PCR could be compromised under these conditions.
Abstract:
This thesis comprises seven peer-reviewed articles and examines systems and applications suitable for increasing Future Force Warrior performance, minimizing collateral damage, improving situational awareness and Common Operational Picture. Based on a literature study, missing functionalities of Future Force Warrior were identified and new ideas, concepts and solutions were created as part of early stages of Systems of Systems creation. These introduced ideas have not yet been implemented or tested in combat and for this reason benefit analyses are excluded. The main results of this thesis include the following: A new networking concept, Wireless Polling Sensor Network, which is a swarm of a few Unmanned Aerial Vehicles forming an ad-hoc network and polling a large number of fixed sensor nodes. The system is more robust in a military environment than traditional Wireless Sensor Networks. A Business Process approach to Service Oriented Architecture in a tactical setting is a concept for scheduling and sharing limited resources. New components to military Service Oriented Architecture have been introduced in the thesis. Other results of the thesis include an investigation of the use of Free Space Optics in tactical communications, a proposal for tracking neutral forces, a system for upgrading simple collaboration tools for command, control and collaboration purposes, a three-level hierarchy of Future Force Warrior, and methods for reducing incidents of fratricide.
Abstract:
This work focuses on the modelling of catalytic gas–liquid reactions carried out in continuous packed beds. Catalytic gas–liquid reactions are among the most typical reactions in the chemical industry; packed-bed reactors are therefore treated here as one of the most popular alternatives when continuous operation is desired. Thanks to a large amount of catalyst per unit volume, they have a compact structure, no catalyst separation is needed, and through careful design the most favourable flow pattern can be maintained in the reactor. Packed-bed reactors are also attractive because of their lower investment and operating costs. Although packed beds are used intensively in industry, they are very challenging to model. This is because three phases coexist and the geometry of the system is complicated. The presence of several reactions makes the mathematical modelling even more demanding, so many simplifications become necessary. The models typically involve several parameters that must be adjusted on the basis of experimental data. In this work five different reaction systems were studied. The systems had been studied experimentally in our laboratory with the aim of reaching high productivity and selectivity through an optimal choice of catalysts and operating conditions. Hydrogenation of citral, decarboxylation of fatty acids, direct synthesis of hydrogen peroxide, and hydrogenation of the sugar monomers glucose and arabinose were used as example systems. Although these systems had much in common, they also had unique features and therefore required tailored mathematical treatment. Citral hydrogenation was a system with one dominant main reaction producing citronellal and citronellol as the main products. The products are used as lemon-scented components in perfumes, soaps and detergents, as well as platform chemicals. Decarboxylation of stearic acid was a special case, for which a reaction route for producing long-chain hydrocarbons from fatty acids was sought.
A particularly high product selectivity was characteristic of this system. Process scale-up was also modelled for the decarboxylation reaction. The aim of the direct synthesis of hydrogen peroxide was to develop a simplified process for producing hydrogen peroxide by letting dissolved hydrogen and oxygen react directly in a suitable solvent on an active solid catalyst. In this system three side reactions occur, which give water as an undesired product. All three of these reactions were modelled mathematically by means of dynamic mass balances. The goal of the hydrogenation of glucose and arabinose is to produce high-value products, namely sugar alcohols, by catalytic hydrogenation. For these two systems the mass and energy balances were solved simultaneously to evaluate effects inside the porous catalyst particles. The momentum balances that determine the flow conditions inside a chemical reactor were, in all modelling studies, replaced by semi-empirical correlations for liquid holdup and pressure drop, and by the axial dispersion model to describe mixing effects. By adjusting the model parameters, the behaviour of the reactor could be described well. All experiments were carried out at laboratory scale. A large number of coupled effects coexisted: reaction kinetics including adsorption, catalyst deactivation, mass and heat transfer, and flow-related effects. Some of these effects could be studied separately (e.g. dispersion effects and side reactions). The influence of certain phenomena could sometimes be minimized through careful planning of the experiments; in this way, simplifications in the models could be better justified. All the systems studied were industrially relevant. The development of new, simplified production technologies for existing or new chemical components is a gigantic undertaking, and the studies presented here focused on one of the first stages of that techno-scientific journey.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
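The core ideas of the abstract above (FIFO queues as the only communication, firing once sufficient inputs are available, and a dynamic scheduler that fires any fireable node) can be sketched in a few lines. This is a hypothetical miniature of the general dataflow model, not RVC-CAL or its scheduler:

```python
# Minimal dataflow network: actors communicate only through FIFO queues.
from collections import deque

class Actor:
    """A node that consumes one token per input queue and produces
    one token on each output queue when it fires."""
    def __init__(self, fn, inputs, outputs):
        self.fn, self.inputs, self.outputs = fn, inputs, outputs

    def fireable(self):
        # Firing rule: every input queue holds at least one token.
        return all(q for q in self.inputs)

    def fire(self):
        result = self.fn(*[q.popleft() for q in self.inputs])
        for q in self.outputs:
            q.append(result)

def schedule(actors):
    """Naive dynamic scheduler: fire any fireable actor until quiescent."""
    fired = True
    while fired:
        fired = False
        for a in actors:
            if a.fireable():
                a.fire()
                fired = True

# A two-node pipeline: square each value, then forward to 'out'.
src, mid, out = deque([1, 2, 3]), deque(), deque()
net = [Actor(lambda x: x * x, [src], [mid]),
       Actor(lambda x: x, [mid], [out])]
schedule(net)
print(list(out))  # [1, 4, 9]
```

Because the queues make all dependencies explicit, the same network could be scheduled statically here; the dynamic loop stands in for the general case the thesis addresses.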
Abstract:
Acid mine drainage is considered one of the most significant environmental pollution problems worldwide because of the extensive formation of acidic leachates containing heavy metals. Adsorption is a widely used method in water treatment due to its easy operation and the availability of a wide variety of low-cost commercial adsorbents. The primary goal of this thesis was to investigate the efficiency of the neutralizing agents CaCO3 and CaSiO3, and of metal-adsorbing materials based on unmodified limestone from the company Nordkalk Oy. In addition, the side materials of limestone mining were tested for iron adsorption from an acidic model solution. This study was carried out at Lappeenranta University of Technology, Finland. The work utilised a fixed-bed adsorption column as the main equipment, together with a large fluidized column. Atomic absorption spectroscopy (AAS) and X-ray diffraction (XRD) were used to determine ferric removal and the composition of the materials, respectively. The results suggest a high potential for the studied materials to be used as low-cost adsorbents in acid mine drainage treatment. Of the two studied adsorbents, the FS material was more suitable than the Gotland material. Based on the findings, it is recommended that further studies include a detailed analysis of the Gotland materials.
Abstract:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool has been developed to allow pulp operations to better control their production processes and improve both production efficiency and stability. Both the work here and earlier research indicate a potential for savings of around 1-5%. All the supporting data is available today, coming from distributed control systems, data historians and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impacts and timely feasibility of various external actions, such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort has been put into understanding both the minimum features needed to satisfy the scheduling requirements in the industry and the overall existence of the market. A qualitative study was constructed to identify the competitive situation and the requirements versus gaps on the market. It becomes clear that there is no such system on the marketplace today, and also that there is room to improve the target market's overall process efficiency through such a planning tool. This thesis also provides the case company with a better overall understanding of the different processes in this particular industry.
Abstract:
Fiber-reinforced composite fixed dental prostheses – Studies of the materials used as pontics. University of Turku, Faculty of Medicine, Institute of Dentistry, Department of Biomaterials Science, Finnish Doctoral Program in Oral Sciences – FINDOS, Annales Universitatis Turkuensis, Turku, Finland 2015. Fiber-reinforced composites (FRC), a non-metallic biomaterial, represent a suitable alternative in prosthetic dentistry when used as a component of fixed dental prostheses (FDPs). Some drawbacks have been identified in the clinical performance of FRC restorations, such as delamination of the veneering material and fracture of the pontic. Therefore, the current series of studies was performed to investigate the possibilities of enhancing the mechanical and physical properties of FRC FDPs by improving the materials used as pontics, and thus extending their longevity. Four experiments showed the importance of pontic design and surface treatment in the performance of FRC FDPs. In the first, the load-bearing capacities of inlay-retained FRC FDPs with pontics of various materials and thicknesses were evaluated. Three different pontic materials were assessed with different vertical positioning of the FRC framework. Thicker pontics showed increased load-bearing capacities, especially ceramic pontics. A second study investigated the influence of chemical conditioning of the ridge-lap surface of acrylic resin denture teeth on their bonding to a composite resin. Increased shear bond strength demonstrated the positive influence of the pretreatment of the acrylic surfaces, indicating dissolution of the denture surfaces and suggesting potential penetration of the monomer systems into the surface of the denture teeth. A third study analyzed the penetration depth of different monomer systems into the acrylic resin denture tooth surfaces.
The possibility of establishing a durable bond between acrylic pontics and FRC frameworks was demonstrated by the ability of monomers to penetrate the surface of acrylic resin denture teeth, measured with a confocal scanning microscope. A fourth study was designed to evaluate the load-bearing capacities of FRC FDPs using the findings of the previous three studies. In this case, the performance of pre-shaped acrylic resin denture teeth used as pontics with different composite resins as filling materials was evaluated. The filling material influenced the load-bearing capacities, providing more durable FRC FDPs. It can be concluded that the mechanical and physical properties of FRC FDPs can be improved, as has been shown in the development of this thesis. The reported improvements might provide long-lasting prosthetic solutions of this kind, positioning them as potentially permanent rehabilitation treatments. Key words: fiber-reinforced composite, fixed dental prostheses, inlay-retained bridges, adhesion, acrylic resin denture teeth, dental material.
Abstract:
Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground, but they still need to be studied carefully. This thesis tries to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed income data is employed in an empirical study that tries to reveal whether a B–L model based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilizes a Vector Autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided into two parts: in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. Results show that the B–L model based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return.
The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation towards riskier assets while the market is turning bullish, but without overweighting investments with high beta. Based on the findings of this thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation is still highly dependent on the quality of the input estimates.
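The forecasting step the abstract describes — a VAR model producing one-step-ahead return forecasts from lagged values, which can then feed the B–L model as views — can be sketched for the simplest VAR(1) case. The coefficient matrix and series below are illustrative assumptions, not the thesis's estimates:

```python
# One-step-ahead VAR(1) forecast: y_t = c + A @ y_{t-1} (pure Python).

def var1_forecast(A, c, y_last):
    """Forecast each series from its own and the others' lagged values."""
    return [c_i + sum(a_ij * y_j for a_ij, y_j in zip(row, y_last))
            for row, c_i in zip(A, c)]

# Assumed coefficients for two series: a bond return and a growth index.
A = [[0.3, 0.1],
     [0.0, 0.8]]          # row i: loadings of series i on lagged values
c = [0.002, 0.001]        # intercepts
y_last = [0.01, 0.05]     # last observed values

forecast = var1_forecast(A, c, y_last)
print(forecast)  # approximately [0.010, 0.041]
```

In a B–L setting, such forecasts would enter as the investor's views, with the view uncertainty typically tied to the VAR residual covariance.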
Abstract:
Operational excellence of individual tramp shipping companies is important in today's market, where competition is intense, freight revenues are modest, capital costs are high due to the global financial crisis, and a tighter regulatory framework is generating additional costs and challenges for the industry. This thesis concentrates on tramp shipping, where a tramp operator in the form of an individual case company, specialized in short-sea shipping activities in the Baltic Sea region, is searching for ways to map its current fleet operations and better understand potential ways to improve overall routing and scheduling decisions. The research problem is related to tramp fleet planning where several cargoes are carried on board at the same time; these are here systematically referred to as part cargoes. The purpose is to determine the pivotal dimensions and characteristics of these part cargo operations in tramp shipping, and to offer both the individual case company and the wider research community a better understanding of the potential risks and benefits related to the utilization of part cargo operations. A mixed-method research approach is utilized in this research, as the objectives are related to complex, real-life business practices in the field of supply chain management and, more specifically, maritime logistics. A quantitative analysis of different voyage scenarios is executed, including alternative voyage legs with varying cost structures and customer involvement. An online questionnaire designed and prepared by the case company's decision group provides the desired data on the predominant attitudes and views of the most important industrial customers regarding part cargo-related operations and the potential future utilization of this business model. The results gained from these quantitative methods are complemented with qualitative data collection tools, along with suitable secondary data sources.
Based on the results and a logical analysis of the different data sources, a framework for characterizing the different aspects of part cargo operations is developed, utilizing both existing research and an empirical investigation of the phenomenon. In conclusion, part cargoes can be part of viable fleet operations and can even increase the flexibility of the fleet to a certain extent. Naturally, several hindrances to this development are recognized as well, such as potential issues with information gathering and sharing, inefficient port activities, and increased transit times.