22 results for Plant and Equipment-resource Allocation
Abstract:
The strategy process is a method for strategy formulation and implementation, commonly used especially in larger companies, and it is important to link formulation and implementation together. The objective of this thesis was to identify areas for improvement in the case company's strategy process. The theoretical framework, based on the literature, treats the strategy process as a method for strategy formulation and implementation. This framework, several mainly ad hoc interviews, and the author's observations were used to analyze the case company's strategy process. The hierarchy between the various corporate levels provides the foundation for formulating and implementing the strategies, which include the corporate-level and strategic-business-area-level strategies. Recommendations to improve the case company's strategy process were formulated at the corporate and strategic business area levels, based on the research and the experience gained throughout the work. The role of strategic projects in implementing the strategies more efficiently and organizational control over distribution were identified as potential improvement areas, as was prioritizing resource allocation towards the most important strategic projects.
Abstract:
COD discharges from processes have increased in line with rising brightness demands for mechanical pulps and papers. Lignin-like substances account for about 75% of COD discharges on average. In this thesis, a plant dynamic model was created and validated as a means to predict COD loading and discharges from a mill. The trials were carried out in an integrated paper mill producing mechanical printing papers. The objective of the plant dynamic modeling was to predict day averages of COD load and discharges from the mill. Online data, such as 1) the levels of the large pulp and white water storage towers, 2) pulp dosages, 3) production rates, and 4) internal white water flows and discharges, were used to create transients in the balances of solids and white water, referred to as "plant dynamics". A conversion coefficient between TOC and COD was verified and used to convert the predicted TOC flows to the waste water treatment plant into COD. The COD load was modeled with an uncertainty similar to that of the reference TOC sampling. The water balance of the waste water treatment plant was validated against reference COD concentrations, and the deviation of the COD predictions from the references was within the same range as that of the TOC predictions. The modeled yield losses and retention values of TOC in the pulping and bleaching processes, and the modeled fixing of colloidal TOC to solids between the pulping plant and the aeration basin of the waste water treatment plant, were similar to references presented in the literature. The validated water balances of the waste water treatment plant and the reduction model of lignin-like substances together produced a valid prediction of COD discharges from the mill. A 30% increase in the release of lignin-like substances was observed in the pulping and bleaching processes during production problems, and the same increase was observed in the COD discharges from waste water treatment.
In the prediction of annual COD discharge, it was noticed that the reduction of lignin varies widely from year to year and from one mill to another. This made it difficult to compare the COD discharge parameters validated in the plant dynamic simulation with those of another mill producing mechanical printing papers. However, the trend in COD discharges when moving from unbleached towards high-brightness TMP remained valid.
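The TOC-to-COD conversion step described above can be sketched as a simple linear scaling; the coefficient value and the function name below are illustrative assumptions, not figures from the thesis:

```python
# Sketch: converting predicted TOC flows to the waste water treatment plant
# into a COD prediction via a verified conversion coefficient.
TOC_TO_COD = 3.0  # kg COD per kg TOC; an assumed value, not the thesis's

def predict_cod_load(toc_flows_kg_per_day):
    """Day-average COD loads from modeled TOC flows (one value per day)."""
    return [TOC_TO_COD * toc for toc in toc_flows_kg_per_day]

print(predict_cod_load([120.0, 135.5]))  # -> [360.0, 406.5]
```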
Abstract:
Nurses' acceptance and use of information technology in psychiatric hospitals. The use of information technology (IT) has not played a very significant role in psychiatric nursing, even though IT applications have been found to have radically affected healthcare services and the work processes of nursing staff in recent years. The aim of this study is to describe the acceptance and use of information technology among nursing staff working in psychiatric care and to create a recommendation that makes it possible to support these matters in psychiatric hospitals. The study consists of five sub-studies that employ both statistical and qualitative research methods. The research data were collected among the nursing staff of nine acute psychiatric wards during 2003-2006. The Technology Acceptance Model (TAM) was used to structure the research process and to deepen the understanding of the results obtained. The study identified eight key factors that may support the acceptance and utilization of IT applications by nurses working in psychiatric hospitals when these factors are taken into account during the introduction of new applications. The factors fell into two groups: external factors (resource allocation, collaboration, computer skills, IT education, application-use training, the patient-nurse relationship), and ease of use and application usability (usage guidance, ensuring usability). The TAM was found to be useful in interpreting the results. The developed recommendation comprises the measures by which it is possible to support the commitment of both organizational management and nursing staff, and thereby to ensure the acceptance and use of a new application in nursing. The recommendation can be applied in practice when new information systems are implemented in psychiatric hospitals.
Abstract:
This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) are needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources. We approach three different resource constrained problems. The first is the tool switching and loading problem, which occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient, yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out that this is a special case of the MINMAX resource allocation formulation of the apportionment problem, and it can be solved efficiently and optimally. We construct an exact algorithm specialized for nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed.
We give a formal proof that this problem has the same complexity as the turnpike problem (a well-studied geometric optimization problem) and provide a heuristic algorithm for it.
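For the tool switching problem above, a classical baseline policy (not necessarily the heuristic developed in this work) is the Keep-Tool-Needed-Soonest (KTNS) eviction rule: when the magazine is full, remove the loaded tool whose next use lies farthest in the future. A minimal sketch, with illustrative job data:

```python
# Sketch of the Keep-Tool-Needed-Soonest (KTNS) rule for the tool switching
# problem: with a fixed job sequence and a magazine of limited capacity,
# evict the loaded tool whose next use is farthest away (or never comes).
# Assumes every job needs at most `capacity` tools; data are illustrative.

def count_switches(jobs, capacity):
    """jobs: list of sets of required tools.
    Returns the number of tool loads made once the magazine is full."""
    magazine, switches = set(), 0
    for i, needed in enumerate(jobs):
        for tool in needed:
            if tool in magazine:
                continue
            if len(magazine) >= capacity:
                def next_use(t):
                    for j in range(i + 1, len(jobs)):
                        if t in jobs[j]:
                            return j
                    return len(jobs)      # never needed again
                victim = max(magazine - needed, key=next_use)
                magazine.remove(victim)   # one tool switch
                switches += 1
            magazine.add(tool)
    return switches

jobs = [{1, 2}, {2, 3}, {1, 3}, {2, 4}]
print(count_switches(jobs, capacity=3))  # -> 1
```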
Abstract:
The goal of this work was to find resource management solution models, including tools, suited to the operating environment of NCC (Nuclear Competence Center). The aim was to gain a view of what should be done in order to improve project resource management. The work covers resource planning and resource management comprehensively; the main focus, however, is on resource management in a multi-project environment. Project managers and others dealing with resource problems were interviewed to form a picture of the current state of resource management and of how it should be developed. Based on the interviews, proposals were made for implementing resource management. The most important finding is that improving resource management is not easy and will require many changes to working practices. In addition, at present it is probably not possible to implement resource management with an off-the-shelf application, so at least for the time being the most sensible approach would be an Excel-based solution.
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms to ensure cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in an increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm, and on-demand video transcoding that is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications.
The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE, and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of the virtualized application servers.
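A reactive provisioning policy of the kind described above can be sketched as a threshold rule on average server utilization; the thresholds, bounds, and function name below are illustrative assumptions, not the actual ARVUE algorithm:

```python
# Minimal sketch of reactive VM provisioning driven by utilization thresholds.
# Threshold values and VM bounds are illustrative assumptions.

def plan_capacity(vm_count, avg_utilization,
                  scale_up_at=0.8, scale_down_at=0.3,
                  min_vms=1, max_vms=20):
    """Return the target number of VMs for the next control interval."""
    if avg_utilization > scale_up_at:
        # Over-utilized: add a VM to avoid a subpar service.
        return min(vm_count + 1, max_vms)
    if avg_utilization < scale_down_at and vm_count > min_vms:
        # Under-utilized: release a VM to cut pay-per-use cost.
        return vm_count - 1
    return vm_count

print(plan_capacity(4, 0.9))   # -> 5 (scale up)
print(plan_capacity(4, 0.2))   # -> 3 (scale down)
print(plan_capacity(4, 0.5))   # -> 4 (hold)
```

A proactive variant would replace the measured `avg_utilization` with a predicted value for the next interval, which is the hybrid direction the thesis pursues.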
Abstract:
With the development of electronic devices, more and more mobile clients are connected to the Internet, generating massive amounts of data every day. We live in an age of "Big Data", producing data on the scale of hundreds of millions of items daily. By analyzing these data and making predictions, better development plans can be drawn up. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. This thesis first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their advantages and disadvantages. Because the resource management module is the core of YARN, the thesis then studies the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing the allocation. It also introduces and compares the FIFO Scheduler, the Capacity Scheduler, and the Fair Scheduler. The main work of this thesis is the study and analysis of YARN's Dominant Resource Fairness (DRF) algorithm and the proposal of a maximum-resource-utilization algorithm based on it; the thesis also suggests improvements to unreasonable aspects of the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of YARN's DRF algorithm. Because a cluster serves multiple users and offers multiple resource types, each user's resource request is also multi-dimensional. The DRF algorithm divides a user's resources into the dominant resource and normal resources: for a given user, the dominant resource is the one whose share is highest among all requested resources, and the others are normal resources. The DRF algorithm requires the dominant resource shares of all users to be equal.
However, in cases where different users' dominant resource amounts differ greatly, emphasizing "fairness" is not suitable and does not promote the resource utilization of the cluster. By analyzing such cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes "fairness" into consideration, but no longer as the main principle: maximizing resource utilization is its main principle and goal. Comparing the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis sets up a YARN environment and uses the Scheduler Load Simulator (SLS) to simulate a cluster environment.
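The allocation rule described above can be sketched as DRF's progressive filling: repeatedly grant one task to the user whose dominant share is currently lowest. The capacities, per-task demands, and function names below are illustrative assumptions, not the thesis's implementation or YARN's:

```python
# Sketch of Dominant Resource Fairness (DRF) via progressive filling.
# Capacities and demands are illustrative, not taken from the thesis.

def drf_allocate(capacity, demands):
    """capacity: resource -> total amount in the cluster
    demands:  user -> (resource -> amount needed per task)
    Returns user -> number of tasks granted."""
    used = {r: 0.0 for r in capacity}
    tasks = {u: 0 for u in demands}

    def dominant_share(u):
        # Highest share of any single resource this user currently holds.
        return max(tasks[u] * demands[u][r] / capacity[r] for r in capacity)

    active = set(demands)
    while active:
        # Serve the user with the lowest dominant share (ties broken by name).
        u = min(active, key=lambda v: (dominant_share(v), v))
        if all(used[r] + demands[u][r] <= capacity[r] for r in capacity):
            for r in capacity:
                used[r] += demands[u][r]
            tasks[u] += 1
        else:
            active.discard(u)  # cluster cannot fit another task for this user
    return tasks

# Example: 9 CPUs and 18 GB of memory; user A needs <1 CPU, 4 GB> per task,
# user B needs <3 CPUs, 1 GB> per task.
alloc = drf_allocate({"cpu": 9, "mem": 18},
                     {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}})
print(alloc)  # -> {'A': 3, 'B': 2}
```

Equalizing dominant shares gives A three tasks (memory share 12/18) and B two tasks (CPU share 6/9), both 2/3; this is exactly the kind of outcome the thesis's modified algorithm trades away when equal dominant shares would leave cluster capacity idle.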