19 results for Dual priority scheduling

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland



Relevance:

20.00%

Publisher:

Abstract:

The main objective of the study is to assess whether four software alternatives are adequate tools for production scheduling and which of the tools suits the client company. A secondary objective is to describe the current and target states of production scheduling through process modelling, to identify the user needs for the tool, and to define prioritized selection criteria for the tool. The theoretical part of the study examines the logic and challenges of production scheduling. The selection of scheduling software is considered in parallel with process modelling, and the scheduling software alternatives and the methods for eliciting user needs are reviewed. The empirical part clarifies how the study relates to the client company's strategy. User needs are elicited through interviews and analyzed with a QFD matrix. The current and target state processes of the client company's production scheduling are modelled so that the suitability of each software package as a tool supporting the scheduling process can be assessed. The results of the study are prioritized selection criteria for the scheduling tool, i.e. the most important functional features derived from the user needs, an evaluation of the system vendors, and recommendations for further actions and further research.
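
As an illustration of the kind of QFD-style prioritization described above, here is a minimal sketch in Python; the user needs, weights, and relationship scores are invented for the example, not taken from the thesis.

```python
# Minimal sketch of QFD-style prioritization of tool selection criteria.
# All user needs, weights and scores below are invented for illustration;
# the thesis derives its actual criteria from stakeholder interviews.

# User needs with importance weights (e.g. from interviews, scale 1-5).
user_needs = {
    "visual drag-and-drop schedule": 5,
    "capacity check per work center": 4,
    "ERP integration": 3,
}

# How strongly each candidate feature satisfies each need (0, 1, 3, 9
# is the conventional QFD relationship scale).
relationships = {
    "Gantt editor":     {"visual drag-and-drop schedule": 9, "capacity check per work center": 1, "ERP integration": 0},
    "finite capacity":  {"visual drag-and-drop schedule": 1, "capacity check per work center": 9, "ERP integration": 1},
    "import interface": {"visual drag-and-drop schedule": 0, "capacity check per work center": 1, "ERP integration": 9},
}

# Feature priority = sum over needs of (need weight x relationship strength).
scores = {
    feature: sum(user_needs[n] * strength for n, strength in rel.items())
    for feature, rel in relationships.items()
}

for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {score}")
```

Ranking candidate features by such weighted sums is what turns interview data into the "prioritized selection criteria" the abstract mentions.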

Relevance:

20.00%

Publisher:

Abstract:

In modern high-tech industry, Advanced Planning and Scheduling (APS) systems provide the basis for e-business solutions towards suppliers and customers. One objective of this thesis was to clarify modern supply chain management with APS systems, concentrating especially on the area of Collaborative Planning. For Advanced Planning and Scheduling systems to be complete and usable, user interfaces are needed. The current Visual Basic user interfaces have drawn many complaints and arguments from the users as well as from the development team. This thesis analyzes the reasons and causes of the encountered problems and provides ways to overcome them. The decision was made to build the new user interfaces to be Web-enabled. Therefore, another objective of this thesis was to research and find suitable technologies for building Web-based user interfaces for Advanced Planning and Scheduling systems in the Nokia Demand/Supply Planning business area. A comparison between the most suitable technologies is made, and usability issues of Web-enabled user interfaces are also covered. The empirical part of the thesis includes the design and implementation of a Web-based user interface, with the chosen technology, for a particular APS module that enables Collaborative Planning with suppliers.

Relevance:

20.00%

Publisher:

Abstract:

In this thesis I argue that the psychological study of concepts and categorisation, and the philosophical study of reference are deeply intertwined. I propose that semantic intuitions are a variety of categorisation judgements, determined by concepts, and that because of this, concepts determine reference. I defend a dual theory of natural kind concepts, according to which natural kind concepts have distinct semantic cores and non-semantic identification procedures. Drawing on psychological essentialism, I suggest that the cores consist of externalistic placeholder essence beliefs. The identification procedures, in turn, consist of prototypes, sets of exemplars, or possibly also theory-structured beliefs. I argue that the dual theory is motivated both by experimental data and theoretical considerations. The thesis consists of three interrelated articles. Article I examines philosophical causal and description theories of natural kind term reference, and argues that they involve, or need to involve, certain psychological elements. I propose a unified theory of natural kind term reference, built on the psychology of concepts. Article II presents two semantic adaptations of psychological essentialism, one of which is a strict externalistic Kripkean-Putnamian theory, while the other is a hybrid account, according to which natural kind terms are ambiguous between internalistic and externalistic senses. We present two experiments, the results of which support the strict externalistic theory. Article III examines Fodor’s influential atomistic theory of concepts, according to which no psychological capacities associated with concepts constitute them, or are necessary for reference. I argue, contra Fodor, that the psychological mechanisms are necessary for reference.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this thesis is to study how to manage the front end of the offering planning process. This includes actual process development and methods to gather and analyze information to achieve the best outcome in customer-oriented product offering. The study is carried out in two parts: a theoretical part and a company-related part. The theoretical framework is created by introducing different types of approaches to managing product planning processes. Products are seen as platforms and are broken down into subsystems to show the different parts of the development. With the help of matrix-based approaches, product platform related information is gathered and analyzed. In this kind of analysis, business/market drivers and customer/competitor information are connected with product subsystems. This makes it possible to study product gaps/needs and possible future ideas/scenarios in different customer segments, as sketched below. The company-related part consists of offering planning process development in a real company environment. The process formation includes documents and tools that guide planning from information gathering to prioritization and decision making.
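
As a toy illustration of such a matrix-based analysis, the sketch below connects business/market drivers to product subsystems and flags gaps where an important driver has no supporting subsystem; all names, weights, and ratings are invented for the example.

```python
# Toy sketch of a matrix-based gap analysis: rows are business/market
# drivers (with importance weights), columns are product subsystems, and
# cells state how strongly a subsystem supports a driver (0-3).
# All names and numbers are invented for illustration.

drivers = {"energy efficiency": 5, "remote monitoring": 4, "low unit cost": 3}
subsystems = ["power unit", "control software", "enclosure"]

support = {
    ("energy efficiency", "power unit"): 3,
    ("energy efficiency", "enclosure"): 1,
    ("low unit cost", "enclosure"): 2,
    # note: no subsystem currently supports "remote monitoring"
}

for driver, weight in sorted(drivers.items(), key=lambda kv: -kv[1]):
    row = [support.get((driver, s), 0) for s in subsystems]
    if max(row) == 0:
        print(f"GAP: '{driver}' (weight {weight}) has no supporting subsystem")
    else:
        best = subsystems[row.index(max(row))]
        print(f"'{driver}' is mainly covered by '{best}' (strength {max(row)})")
```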

Relevance:

20.00%

Publisher:

Abstract:

The goal of this study was to find a new approach to chemically modifying the properties of paper by improving fiber quality. This Master's thesis covers multiple-polymer treatment in general and the measurement methods with which the formation of multilayers and complexes can be observed. Treatment with an oppositely charged dual polymer system is a good approach to increasing paper strength. In this work, starch, a cationic polymer, and carboxymethyl cellulose (CMC), an anionic polymer, were applied step by step to improve paper strength. The adsorption of cationic starch and CMC on cellulose fibers was analyzed via polyelectrolyte titration. The results showed that paper strength was enhanced slightly with a layer-by-layer assembly of the polymers. However, if the washing stage required for layer-by-layer assembly was eliminated, the starch/CMC complex was deposited on the fibers more efficiently, and the paper strength was improved more significantly.

Relevance:

20.00%

Publisher:

Abstract:

The objective of this thesis is to research the Manufacturing Planning and Control (MPC) system and Master Scheduling (MS) in a manufacturing firm. The study was conducted at Ensto Finland Corporation, which operates in the field of electrical systems and supplies. The paper consists of theoretical and empirical parts. The empirical part is based on weekly work at Ensto and includes inter-firm material analysis, learning and meetings. Master Scheduling is an important module of an MPC system, since it helps transform strategic production plans, based on demand forecasting, into operational schedules. Furthermore, capacity planning tools can contribute remarkably to production planning: with a Rough-Cut Capacity Planning (RCCP) tool, an MS plan can be critically analyzed in terms of the available key resources in the real manufacturing environment. Currently, there are remarkable inefficiencies in Ensto's practices: the system is not able to take seasonal demand into consideration or react to market changes in time, which can cause significant lost sales. These inefficiencies could, however, be eliminated through the appropriate utilization of MS and RCCP tools. To utilize MS and RCCP tools in Ensto's production environment, further testing in the real production environment is required. Moreover, data accuracy, appropriate commitment to adopting and learning the new tools, and continuous development of functions closely related to MS, such as sales forecasting, need to be ensured.
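
As a rough illustration of what an RCCP check does, the sketch below compares the load that a master schedule places on a key resource against the capacity available per period; the products, routings, quantities, and capacity figures are invented for the example.

```python
# Minimal sketch of a Rough-Cut Capacity Planning (RCCP) check.
# Bill-of-resources figures, capacities and the master schedule below
# are invented for illustration.

# Hours of key-resource time needed per unit of each product.
bill_of_resources = {"product_A": 0.5, "product_B": 1.2}  # hours/unit

# Master schedule: planned units per product per period (e.g. weeks).
master_schedule = {
    "W1": {"product_A": 400, "product_B": 100},
    "W2": {"product_A": 600, "product_B": 150},
}

capacity_per_period = 350.0  # available hours of the key resource

for period, quantities in master_schedule.items():
    load = sum(bill_of_resources[p] * qty for p, qty in quantities.items())
    status = "OK" if load <= capacity_per_period else "OVERLOAD"
    print(f"{period}: load {load:.1f} h / capacity {capacity_per_period:.1f} h -> {status}")
```

An overloaded period signals that the MS plan must be re-timed or capacity added before the plan is released, which is exactly the kind of early warning the abstract attributes to RCCP.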

Relevance:

20.00%

Publisher:

Abstract:

This work contains a series of studies on the optimization of three real-world scheduling problems: school timetabling, sports scheduling and staff scheduling. These challenging problems are solved to customer satisfaction using the proposed PEAST algorithm; customer satisfaction here refers to the fact that implementations of the algorithm are in industrial use. The PEAST algorithm is a product of long-term research and development. Its first version was introduced in 1998, and this thesis is the result of five years of development of the algorithm. One of the most valuable characteristics of the algorithm has proven to be its ability to solve a wide range of scheduling problems, and it is likely that it can be tuned to tackle a range of other combinatorial problems as well. The algorithm draws on features from numerous different metaheuristics, which is the main reason for its success. In addition, the implementation of the algorithm is fast enough for real-world use.

Relevance:

20.00%

Publisher:

Abstract:

Most advanced tumours face periods of reduced oxygen availability, i.e. hypoxia. During these periods tumour cells undergo adaptive changes enabling their survival under adverse conditions. In cancer, hypoxia-induced cellular changes cause tumour progression, hinder cancer treatment and are indicative of poor prognosis. Within cells, the main regulator of hypoxic responses is the hypoxia-inducible factor (HIF). HIF governs the expression of over a hundred hypoxia-inducible genes that regulate a number of cellular functions such as angiogenesis, glucose metabolism and cell migration. Therefore the activity of HIF must be tightly governed. HIF is regulated by a family of prolyl hydroxylase enzymes, the PHDs, which mark HIF for destruction in normoxia. Under hypoxic conditions PHDs lose much of their enzymatic activity, as they need molecular oxygen as a cofactor. Of the three PHDs (PHD1, 2 and 3), PHD2 has been considered the main HIF-1 regulator under normoxic conditions. PHD3, on the other hand, shows the most robust induction in response to oxygen deprivation and has been implicated as the main HIF-1 regulator under prolonged hypoxia. SQSTM1/p62 (p62) is an adaptor protein that functions through its binding motifs to bring proteins together in order to regulate signal transduction. In non-stressed situations p62 levels are kept low, but its expression has been reported to be upregulated in many cancers. It has a definitive role as an autophagy receptor, and as such it serves a key function in cancer cell survival decisions. In my thesis work I evaluated the significance of PHD3 in cancer cell and tumour biology. My results revealed that PHD3 has a dual role in cancer cell fate. First, I demonstrated that PHD3 forms subcellular protein aggregates in oxygenated carcinoma cells and that this aggregation promotes apoptosis induction in a subset of cancer cells. In these aggregates the adaptor protein SQSTM1/p62 interacts with PHD3 and in so doing regulates PHD3 expression. SQSTM1/p62 expression is needed to keep PHD3 levels low under normoxic conditions. Its levels rapidly decrease in response to hypoxia, allowing PHD3 protein levels to be upregulated and the protein to be diffusely expressed throughout the cell. The interaction between PHD3 and SQSTM1/p62 limits the ability of PHD3 to act on its hydroxylation target protein HIF-1alpha. Second, the results indicate that when PHD3 is upregulated under hypoxia, it protects cancer cells by allowing the cell cycle to proceed from G1 to S phase. My data demonstrate that PHD3 may either cause cell death or protect cells, depending on its expression pattern and the oxygen availability of the tumour.

Relevance:

20.00%

Publisher:

Abstract:

The maintenance of the electric distribution network is a topical question for distribution system operators because of the increasing significance of failure costs. In this dissertation the maintenance practices of distribution system operators are analyzed, and a theory for scheduling maintenance activities and the reinvestment of distribution components is created. The scheduling is based on the deterioration of components and the failure rates that increase with aging. A dynamic programming algorithm is used to solve the maintenance problem caused by the increasing failure rates of the network. Other drivers of network maintenance, such as environmental and regulatory reasons, are outside the scope of this thesis; likewise, tree trimming of the line corridors and major disturbances of the network are not included in the problem optimized here. For the optimization, four dynamic programming models are presented and tested. The models are implemented in VBA, and two different kinds of test networks are used for testing. Because electric distribution system operators want to operate with larger component groups, the optimal timing for component groups is also analyzed. A maintenance software package was created to apply the presented theories in practice, and an overview of the program is presented.
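
A minimal sketch of the kind of dynamic program described above, under the simplifying assumption that a component's failure probability grows with its age and that replacement (or repair after failure) renews it; all costs, rates, and horizons are invented for illustration and do not reproduce the thesis's VBA models.

```python
# Minimal dynamic-programming sketch for timing component replacement.
# Model assumption (invented for illustration): a component of age a fails
# within the next year with probability failure_prob(a), each failure costs
# FAILURE_COST, and a planned replacement costs REPLACE_COST and resets age.

HORIZON = 15          # planning horizon in years
MAX_AGE = 30          # oldest age we track
REPLACE_COST = 10.0   # cost of planned replacement
FAILURE_COST = 25.0   # cost of an in-service failure (repair + outage)

def failure_prob(age: int) -> float:
    """Failure probability increasing with age (illustrative shape)."""
    return min(1.0, 0.01 * (1 + age) ** 1.5)

# value[t][a] = minimal expected cost from year t onward with component age a.
value = [[0.0] * (MAX_AGE + 1) for _ in range(HORIZON + 1)]
policy = [[""] * (MAX_AGE + 1) for _ in range(HORIZON)]

for t in range(HORIZON - 1, -1, -1):
    for age in range(MAX_AGE + 1):
        nxt = min(age + 1, MAX_AGE)
        # Keep: pay expected failure cost; a failure also renews the component.
        p = failure_prob(age)
        keep = p * (FAILURE_COST + value[t + 1][1]) + (1 - p) * value[t + 1][nxt]
        # Replace now: pay replacement cost; the component starts the year new.
        p0 = failure_prob(0)
        replace = REPLACE_COST + p0 * (FAILURE_COST + value[t + 1][1]) \
                  + (1 - p0) * value[t + 1][1]
        value[t][age], policy[t][age] = min((keep, "keep"), (replace, "replace"))

# First age at which it pays to replace in year 0:
threshold = next((a for a in range(MAX_AGE + 1) if policy[0][a] == "replace"), None)
print("replace components older than age:", threshold)
```

The backward recursion yields an age threshold policy per year, which is the shape of result that makes grouping components of similar age into joint maintenance batches natural.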

Relevance:

20.00%

Publisher:

Abstract:

Formal methods provide a means of reasoning about computer programs in order to prove correctness criteria. One subtype of formal methods is based on the weakest-precondition predicate transformer semantics and uses guarded commands as the basic modelling construct. Examples of such formalisms are Action Systems and Event-B. Guarded commands can intuitively be understood as actions that may be triggered when an associated guard condition holds. Guarded commands whose guards hold are nondeterministically chosen for execution, but no further control flow is present by default. Such a modelling approach is convenient for proving correctness, and the Refinement Calculus allows for a stepwise development method. It also has a parallel interpretation facilitating the development of concurrent software, and it is suitable for describing event-driven scenarios. However, for many application areas the execution paradigm traditionally used comprises more explicit control flow, which constitutes an obstacle to using the above-mentioned formal methods. In this thesis, we study how guarded-command-based modelling approaches can be conveniently and efficiently scheduled in different scenarios. We first focus on the modelling of trust for transactions in a social networking setting. Due to the event-based nature of the scenario, the use of guarded commands turns out to be relatively straightforward. We continue by studying the modelling of concurrent software, with particular focus on compute-intensive scenarios. We go from theoretical considerations to the feasibility of implementation by evaluating the performance and scalability of executing a case study model in parallel, using automatic scheduling performed by a dedicated scheduler. Finally, we propose a more explicit and non-centralised approach in which the flow of each task is controlled by a schedule of its own. The schedules are expressed in a dedicated scheduling language, and patterns assist the developer in proving correctness of the scheduled model with respect to the original one.
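
To make the execution model concrete, here is a minimal sketch, in Python, of the nondeterministic guarded-command loop described above. The toy counter model and its guards are invented for the example, and a real Action Systems or Event-B model would of course carry proof obligations that this sketch ignores.

```python
import random

# Minimal sketch of guarded-command execution: each action is a
# (guard, command) pair over a shared state; any enabled action may be
# chosen nondeterministically, and execution stops when none is enabled.
# The toy counter model below is invented for illustration.

state = {"x": 0}

actions = [
    # increment while below 10
    (lambda s: s["x"] < 10, lambda s: s.update(x=s["x"] + 2)),
    # decrement while above 0 (also enabled in between: nondeterminism)
    (lambda s: s["x"] > 0,  lambda s: s.update(x=s["x"] - 1)),
]

for _ in range(50):  # bound the run; the model itself need not terminate
    enabled = [cmd for guard, cmd in actions if guard(state)]
    if not enabled:
        break  # no guard holds: the system has terminated
    random.choice(enabled)(state)  # nondeterministic choice
    print(state["x"])
```

The `random.choice` line is precisely where the thesis's schedulers intervene: replacing the arbitrary choice with a deliberate one is what "scheduling guarded commands" means here.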

Relevance:

20.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes, and thereby also the parallelism, explicit. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field: digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independence of the nodes also implies that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models in which each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized, in the context of design space exploration, by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
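
A minimal sketch of the firing discipline described above, using Python deques for the edge queues; the two-actor pipeline and its firing rules are invented for the example, and a real RVC-CAL network would be compiled and scheduled rather than interpreted like this.

```python
from collections import deque

# Minimal sketch of dataflow execution: edges are FIFO queues, and a node
# (actor) may fire only when its firing rule is met, i.e. enough input
# tokens are available. The two-actor pipeline is invented for illustration.

src_to_double = deque([1, 2, 3, 4, 5, 6])  # edge carrying input tokens
double_to_sum = deque()                     # edge between the two actors
results = deque()

def fire_double():
    """Consumes 1 token, produces 1 token (its doubled value)."""
    double_to_sum.append(2 * src_to_double.popleft())

def fire_pairwise_sum():
    """Consumes 2 tokens, produces 1 token (their sum)."""
    a, b = double_to_sum.popleft(), double_to_sum.popleft()
    results.append(a + b)

# (firing rule, action) pairs; a dynamic scheduler repeatedly picks any
# actor whose rule holds -- this per-firing rule check is the overhead
# that quasi-static scheduling tries to pre-compute away.
actors = [
    (lambda: len(src_to_double) >= 1, fire_double),
    (lambda: len(double_to_sum) >= 2, fire_pairwise_sum),
]

progress = True
while progress:
    progress = False
    for rule, fire in actors:
        if rule():
            fire()
            progress = True

print(list(results))  # [6, 14, 22]
```

For this pipeline the firing order is actually fully predictable (fire `double` twice, then `pairwise_sum` once), which is exactly the kind of static sub-schedule the thesis's model-checking approach aims to extract automatically.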

Relevance:

20.00%

Publisher:

Abstract:

Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is on enhancing the accuracy and repeatability of next-generation robots. However, operators of currently available robots work within the limits of the robot's repeatability and require calibration methods suitable for these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning, and this accuracy requirement poses a tool center point calibration problem if the measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market: manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, however, this would require a great deal of labor. This thesis introduces a calibration method that is suitable for all robots that have two free digital input ports. It builds on the traditional method of using a light barrier to detect the tool in the robot coordinate system, but utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. Rotations about two axes are defined from the center axis, and the last rotation, about the Z-axis, is calculated for tools whose X- and Y-axis widths differ. The results indicate that this method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was obtained. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate a correct calibration. However, future studies should be conducted using a more accurate manipulator, since the method employs the robot itself as a measuring device.
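
As a rough illustration of the geometry involved, the sketch below derives the tool's center-axis direction from two points detected by the parallel light barriers and converts it into the two rotation angles mentioned above. The coordinates and angle convention are invented for the example; the actual procedure in the thesis involves the robot's own motions and digital-input triggering, which this sketch omits.

```python
import math

# Rough geometric sketch: each of the two parallel light barriers yields a
# point on the tool's center axis (in robot base coordinates). From the axis
# direction we derive rotations about two axes; the remaining rotation about
# the tool's own Z-axis must come from the asymmetric-width measurement.
# The coordinates below are invented for illustration.

p1 = (500.0, 200.0, 300.0)  # axis point detected at the lower barrier
p2 = (502.0, 201.0, 400.0)  # axis point detected at the upper barrier

# Unit vector along the tool center axis.
d = tuple(b - a for a, b in zip(p1, p2))
norm = math.sqrt(sum(c * c for c in d))
ux, uy, uz = (c / norm for c in d)

# Express the axis tilt as two rotations (one convention of many):
# a rotation about the base X-axis, then one about the base Y-axis.
rot_x = math.degrees(math.atan2(-uy, uz))
rot_y = math.degrees(math.atan2(ux, math.hypot(uy, uz)))

print(f"axis direction: ({ux:.4f}, {uy:.4f}, {uz:.4f})")
print(f"rotation about X: {rot_x:.2f} deg, about Y: {rot_y:.2f} deg")
```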