19 results for unit disk graphs
Abstract:
This study examines alternative forward hedging strategies in a business unit in the forest industry. The purpose of the back-testing is to evaluate how successfully the alternative strategies hedge the case company's cash flows against three evaluation criteria: the variability of individual foreign-currency cash flows; the variability of the total foreign-currency cash flow; and hedging gains and losses. The theoretical framework of the study covers corporate decision-making and the currency risk hedging process, and presents the company's alternative hedging strategies. The empirical material is based on the case company's historical sales figures and was collected from the company's information systems; other data used in the study were collected from various databases. The results show that hedging reduces cash flow variability. The financial outcome of hedging, however, depends strongly on the chosen hedging strategy, which can lead to significant hedging gains but equally to significant losses. Management's views and risk tolerance ultimately determine which strategy the company will follow.
Abstract:
This thesis studies the immunity of telecommunications terminal equipment to radiated RF fields at frequencies above one gigahertz. The device under test is a product development version of a CTU modem manufactured by Tellabs Oy. The theoretical part covers the theory of electromagnetic waves and the mechanisms by which radiated RF fields give rise to electromagnetic interference. The most important properties of the measurement equipment needed for radiated-disturbance EMC measurements are also presented, and the requirements for EMC measurement equipment suitable for frequencies above one gigahertz are discussed. EMC standards currently place no requirements on the RF field immunity of telecommunications equipment above one gigahertz, so the thesis also considers the most likely interference sources in this frequency range. In the measurements, the RF field immunity of the CTU was studied over the frequency range 1–4.2 GHz. The measurements were performed both in an anechoic chamber and in a GTEM cell. The effect of additional metallic shields on the field immunity of the CTU was also studied in the GTEM cell.
Abstract:
The purpose of this thesis was to study the present state of Business Intelligence in the company unit, that is, how efficiently the unit uses the possibilities of modern information management systems. The aim was to determine how the operative information management of the unit's tender process could be improved with modern information technology applications, making tender processes faster and more efficient. At the beginning it was essential to become acquainted with the literature on Business Intelligence. Building on Business Intelligence theory, it was relatively straightforward, though challenging, to examine how the tender business could be improved with Business Intelligence methods. The empirical phase of this study was executed with a qualitative research method and included themed and informal interviews at the company. Problems and challenges of the tender process were clarified as part of the empirical phase, and a set of challenges was identified when studying the information management of the company unit. Based on theory and the interviews, a set of improvements was listed that the company could make in the future when developing its operative processes.
Abstract:
The objective of this study is to develop an improved support unit cost allocation system for a medium-sized technology company, and to examine which options for overhead cost accounting exist. The study begins by presenting the terminology and methods associated with overhead cost accounting and responsibility accounting. The most common challenges and resulting benefits of overhead cost allocation system development are also brought up. As one research method, two case studies were conducted for benchmarking purposes. These external cases are compared with the principal company's cost allocation system and reflected against the theoretical background. In the empirical section, interviews were used as the primary source of information, alongside studying the principal company's old cost allocation method. The interviews revealed the main weaknesses of the old system and proposals for a new one, which were utilized in setting targets for developing the new system. As a result of the development process, an improved support unit cost allocation system was realized for the year 2009. The new system is able to handle support unit costs in more detail, enhancing the transparency and fairness of the resulting cost allocations. Parts of the support unit costs are now seen as business units' own costs rather than group-level overhead. Recommendations for further development are also made after analyzing how well the targets were reached.
Abstract:
The main goal of the thesis was to further develop a harvester head saw device for a Finnish forest machine manufacturer. The work was based on the manufacturer's current production model and an earlier study on the same subject, "Development of chain saw for harvester" (Tero Kaatrasalo, 2004). The work focused on improving the serviceability and reliability of the saw device, but the design also included adding a few predetermined new features to the saw unit in order to give added value to the end customer. The work includes an analysis of the earlier saw devices and ideas for improving the structure.
Abstract:
The effects of pulp processing on softwood fiber properties strongly influence the properties of wet and dry paper webs. Pulp strength delivery studies have provided observations that much of the strength potential of long-fibered pulp is lost during brown stock fiber line operations, where the pulp is merely washed and transferred to the subsequent processing stages. The objective of this work was to study the intrinsic mechanisms which may cause fiber damage in the different unit operations of modern softwood brown stock processing. The work was conducted by studying the effects of industrial machinery on pulp properties, with some actions of unit operations simulated in laboratory-scale devices under controlled conditions. An optical imaging system was created and used to study the orientation of fibers in the internal flows during pulp fluidization in mixers and the passage of fibers through the screen openings during screening. The qualitative changes in fibers were evaluated with existing and standardized techniques. The results showed that each process stage has its characteristic effects on fiber properties: Pulp washing and mat formation in displacement washers introduced fiber deformations, especially if the fibers entering the stage were intact, but did not decrease the pulp strength properties. However, storage chests and pulp transfer after displacement washers contributed to strength deterioration. Pulp screening proved to be quite gentle, having the potential of slightly evening out fiber deformations from very deformed pulps and, vice versa, inflicting a marginal increase in the deformation indices if the fibers were previously intact. Pulp mixing in fluidizing industrial mixers did not have detrimental effects on pulp strength and had the potential of slightly evening out the deformations, provided that the intensity of fluidization was high enough to allow fiber orientation with the flow and that the time of mixing was short.
The chemical and mechanical actions of oxygen delignification had two distinct effects on pulp properties: chemical treatment clearly reduced pulp strength with and without mechanical treatment, and the mechanical actions of process machinery introduced more conformability to pulp fibers but did not clearly contribute to a further decrease in pulp strength. The chemical composition of fibers entering the oxygen stage was also found to affect the susceptibility of fibers to damage during oxygen delignification. Fibers with the smallest content of xylan were found to be more prone to irreversible deformations, accompanied by a lower tensile strength of the pulp. Fibers poor in glucomannan exhibited a lower fiber strength while wet after oxygen delignification as compared to the reference pulp. Pulps with the smallest lignin content, on the other hand, exhibited improved strength properties as compared to the references.
Abstract:
The aim of this master's thesis is to develop an algorithm to calculate the cable network for the heat and power station CHGRES. The algorithm covers the important aspects that influence cable network reliability. Based on the developed algorithm, an optimal solution for modernizing the cable system from an economic and technical point of view was obtained. The condition of the existing cable lines shows that replacement is necessary; otherwise fault situations will occur, in which case the company would lose not only money but also its prestige. As a solution, XLPE single-core cables are more profitable than the other cable types considered in this work. Moreover, the dependence of the short-circuit current on the number of 10/110 kV transformers connected in parallel between the main grid and the 10 kV busbar is presented, together with how it affects the final decision. Furthermore, the company's losses in the power (capacity) market due to fault situations are presented. These losses are comparable to the investment required to replace the existing cable system.
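The abstract notes that the short-circuit current at the 10 kV busbar depends on the number of transformers connected in parallel. A minimal sketch of that relationship, assuming identical transformers whose equivalent impedance halves when two are paralleled (the voltage and impedance values below are illustrative, not taken from the thesis):

```python
import math

def sc_current_ka(u_kv: float, z_transformer_ohm: float, n_parallel: int) -> float:
    """Three-phase short-circuit current (kA) at a busbar fed through
    n identical transformers in parallel. Paralleling transformers
    divides the equivalent source impedance by n."""
    z_eq = z_transformer_ohm / n_parallel
    return u_kv * 1000 / (math.sqrt(3) * z_eq) / 1000

# Illustrative values: 10 kV busbar, 0.5 ohm per-transformer impedance.
i_one = sc_current_ka(10, 0.5, 1)
i_two = sc_current_ka(10, 0.5, 2)
# Adding a second parallel transformer doubles the fault current the
# 10 kV switchgear must withstand, which is why the transformer count
# feeds into the final cable and switchgear decision.
```

This idealized model ignores grid impedance upstream of the transformers; the thesis's actual calculation would include it.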
Abstract:
Among the numerous approaches to food waste treatment, the food waste disposer (FWD) method, as a newcomer, has been accepted only slowly by the general public owing to worries about its impact on the existing sewage system. This paper aims to justify the role of FWDs in the process of urbanization, in order to better prepare a city to take care of the construction of its infrastructure and its solid waste treatment. Both the literature and the case study confirm that FWDs have no negative effects on the wastewater treatment plant (WWTP) and are also environmentally friendly, reducing greenhouse gas emissions. In the case study, the Lappeenranta wastewater treatment plant was selected in order to determine the possible changes to a WWTP following the integration of FWDs. The observations show only minor changes in the WWTP at 25% adoption: BOD up 7%, TSS up 6%, wastewater flow rate up 6%, an additional sludge production of 200 tons per year, and an extra methane yield of up to 10,000 m3 per year. However, when the utilization rate of FWDs exceeds 75%, BOD, TSS, and wastewater flow rate will experience more significant changes, exerting much pressure on the existing WWTP. FWDs can only be used in residential areas or cities equipped with a complete drainage network within the service area of a WWTP; therefore, the relevant authority or government department should regulate the installation rate of FWDs while promoting their appropriate application. Meanwhile, WWTPs should improve their treatment processes in order to expand their capacity for sludge treatment, so as to stay in line with the future development of urban waste management.
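The case-study figures above (e.g. BOD up 7% at 25% FWD adoption) suggest how plant load scales with adoption rate. A small sketch under the assumption, made here purely for illustration, that the load increase scales linearly with the adoption share:

```python
def wwtp_load(base_load: float, rise_at_25pct: float, fwd_share: float) -> float:
    """Projected WWTP influent load at a given FWD adoption share,
    linearly extrapolated from the increase observed at 25% adoption.
    Linearity is an assumption for illustration, not a thesis result."""
    return base_load * (1 + rise_at_25pct * (fwd_share / 0.25))

# Anchored at the abstract's figure: +7% BOD at 25% adoption.
bod_25 = wwtp_load(100.0, 0.07, 0.25)  # 107.0
bod_75 = wwtp_load(100.0, 0.07, 0.75)  # 121.0
```

Even this crude linear view shows why the abstract flags adoption above 75% as a concern: the added load roughly triples relative to the 25% case.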
Abstract:
In this doctoral thesis, a power conversion unit for a 10 kW solid oxide fuel cell is modeled, and a suitable control system is designed. The need for research was identified based on the observation that no information was available about the characteristics of the solid oxide fuel cell from the perspective of power electronics and the control system, and suitable control methods had not previously been studied in the literature. In addition, because of the digital implementation of the control system, the inherent characteristics of the digital system had to be taken into account alongside the characteristics of the solid oxide fuel cell (SOFC). The characteristics of the solid oxide fuel cell, as well as the methods for modeling and controlling the DC/DC converter and the grid converter, are studied through a literature survey. Based on the survey, the characteristics of the SOFC as an electrical power source are identified, and a solution to the interfacing of the SOFC in distributed generation is proposed. A mathematical model of the power conversion unit is provided, and the control design for the DC/DC converter and the grid converter is made based on the proposed interfacing solution. The limit cycling phenomenon is identified as a source of low-frequency current ripple, which is found to be insignificant when connected to a grid-tied converter. A method to mitigate a second harmonic originating from the grid interface is proposed, and practical considerations of the operation with the solid oxide fuel cell plant are presented. At the theoretical level, the thesis discusses and summarizes the methods to successfully derive a model for a DC/DC converter, a grid converter, and a power conversion unit. The results of this doctoral thesis can also be used in other applications, and the models and methods can be adopted in similar applications such as photovoltaic systems.
When comparing the results with the objectives of the doctoral thesis, we may conclude that the objectives set for the work are met. In this doctoral thesis, theoretical and practical guidelines are presented for the successful control design to connect a SOFC-based distributed generation plant to the utility grid.
Abstract:
The purpose of this thesis is to examine what an effective social Intranet solution would be for the Tellabs Mobile Routing business unit in terms of sharing knowledge more openly and effectively, fostering innovation, and improving team spirit and positive employee experience. These aspects are researched from both intra- and inter-organizational points of view. The research is based on previous literature and empirical interviews. Based on these two sources, an eight-point recommendation proposal was created for changing the current Intranet into an effective social Intranet.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit, and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies are defined which are able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
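The dataflow model described in the abstract above (nodes communicating only through queues, firing when sufficient inputs are available) can be sketched minimally as follows. This is a generic illustration of the paradigm, not RVC-CAL itself; the `Node` class and firing rule are simplified assumptions:

```python
from collections import deque

class Node:
    """A minimal dataflow actor: it may fire only when every input
    queue holds at least one token (a simplified firing rule)."""
    def __init__(self, fn, n_inputs):
        self.fn = fn
        self.inputs = [deque() for _ in range(n_inputs)]
        self.output = deque()

    def can_fire(self):
        # The firing rule: all input queues are non-empty.
        return all(self.inputs)

    def fire(self):
        # Consume one token from each input, produce one output token.
        args = [q.popleft() for q in self.inputs]
        self.output.append(self.fn(*args))

# Example: an adder node with two input edges.
add = Node(lambda a, b: a + b, n_inputs=2)
add.inputs[0].extend([1, 2])
add.inputs[1].extend([10, 20])
while add.can_fire():
    add.fire()
# add.output now holds [11, 22]
```

Because `can_fire` depends only on the node's own queues, each node can be evaluated independently, which is exactly the explicit parallelism the abstract describes; the scheduling problem is then deciding the order in which `fire` calls are made across many such nodes.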