36 results for Subset Sum Problem
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The basic goal of this study is to extend old and propose new ways to generate knapsack sets suitable for use in public key cryptography. The knapsack problem and its cryptographic use are reviewed in the introductory chapter. The terminology is based on common cryptographic vocabulary; for example, solving the knapsack problem (which is here a subset sum problem) is termed decipherment. Chapter 1 also reviews the most famous knapsack cryptosystem, the Merkle-Hellman system. It is based on a superincreasing knapsack and uses modular multiplication as a trapdoor transformation. The insecurity caused by these two properties exemplifies the two general categories of attacks against knapsack systems, and these categories provide the motivation for Chapters 2 and 4. Chapter 2 discusses the density of a knapsack and the dangers of having a low density. Chapter 3 briefly interrupts the more abstract treatment by showing examples of small injective knapsacks and extrapolating conjectures on some characteristics of knapsacks of larger size, especially their density and number. The most common trapdoor technique, modular multiplication, is likely to cause insecurity, but as argued in Chapter 4, it is difficult to find other simple trapdoor techniques. This discussion also provides a basis for the introduction of various categories of non-injectivity in Chapter 5. Besides general ideas on the non-injectivity of knapsack systems, Chapter 5 introduces and evaluates several ways to construct such systems, most notably the "exceptional blocks" in superincreasing knapsacks and the use of "too small" a modulus in the modular multiplication used as a trapdoor technique. The author believes that non-injectivity is the most promising direction for the development of knapsack cryptosystems. Chapter 6 modifies two well-known knapsack schemes, the Merkle-Hellman multiplicative trapdoor knapsack and the Graham-Shamir knapsack. The main interest is in aspects other than non-injectivity, although that is also exploited. At the end of the chapter, constructions proposed by Desmedt et al. are presented to serve as a comparison for the developments of the subsequent three chapters. Chapter 7 provides a general framework for the iterative construction of injective knapsacks from smaller knapsacks, together with a simple example, the "three elements" system. In Chapters 8 and 9 the general framework is put into practice in two different ways. Modularly injective small knapsacks are used in Chapter 8 to construct a large knapsack, which is called the congruential knapsack. The addends of a subset sum can be found by decrementing the sum iteratively using each of the small knapsacks and their moduli in turn. The construction is also generalized to the non-injective case, which can lead to especially good results in the density without complicating the deciphering process too much. Chapter 9 presents three related ways to realize the general framework of Chapter 7. The main idea is to iteratively join small knapsacks, each element of which satisfies the superincreasing condition. As a whole, none of these systems need be superincreasing, though the density achieved is no better than that of a superincreasing knapsack. The new knapsack systems are injective, but they can be deciphered with the same searching method as the non-injective knapsacks with the "exceptional blocks" of Chapter 5. The final Chapter 10 first reviews the Chor-Rivest knapsack system, which has withstood all cryptanalytic attacks.
A couple of modifications to the use of this system are presented in order to further increase its security or to make the construction easier. The latter goal is pursued by reducing the size of the Chor-Rivest knapsack embedded in the modified system.
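For illustration, a minimal sketch of the Merkle-Hellman construction reviewed above, with toy parameters that are far too small to be secure: a superincreasing private knapsack is disguised by modular multiplication, and decipherment undoes the multiplication and then solves the superincreasing subset sum greedily.

```python
# Minimal Merkle-Hellman sketch; tiny illustrative parameters, not secure.

def superincreasing_decipher(total, private):
    """Greedy decipherment of a subset sum over a superincreasing knapsack."""
    bits = []
    for a in reversed(private):
        if total >= a:
            bits.append(1)
            total -= a
        else:
            bits.append(0)
    assert total == 0, "not a valid subset sum"
    return bits[::-1]

private = [2, 3, 7, 14, 30]          # superincreasing: each term > sum of predecessors
m, w = 61, 17                        # modulus m > sum(private), gcd(w, m) = 1
public = [(w * a) % m for a in private]

message = [1, 0, 1, 1, 0]
cipher = sum(b * p for b, p in zip(message, public))

w_inv = pow(w, -1, m)                # modular inverse of the trapdoor multiplier
assert superincreasing_decipher((cipher * w_inv) % m, private) == message
```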
Abstract:
This Master's thesis examines the problem field of the saw line optimization system at the Kotka sawmill of Stora Enso Timber Oy Ltd, the commissioner of the study. At the start of the study there was no certainty about the proper functioning of the log rotation, the lateral displacement of the cant, or the side board optimization on the profiling saw line. The focus of the work is side board optimization, whose performance in production runs was highly questionable at the start of the study. The objectives of the work come down to achieving a better raw material utilization rate, which makes long-term profitability more realistic to attain. As a whole, the optimization system of the Kotka saw line has reached a level acceptable for production runs. The accuracy target set for log rotation was achieved: 90 % of the rotation results fall within the ±10° error window, and the sum of the absolute value of the mean error and the standard deviation around it is at most 10°. The lateral displacement of the cant was found in the study to be a secondary optimization system. Its control is based on data measured by the log scanner, so the variance of the log rotation causes inaccuracy in the orientation of the cant. Using the lateral displacement of the cant requires additional measurements so that the proper functioning of the cant orientation optimization can be verified. In developing the side board optimization, the level was reached at which real development work can be done. Test runs and an inspection of the optimization program revealed errors of principle, which were corrected. With a working side board optimization it is possible to manage production control better, so that production, especially of side boards, can be targeted to better match demand and a good utilization rate of the sawing pattern batch. The raw material utilization rate has improved. Based on individual sawing pattern comparisons and example calculations, the profit potential of log rotation and side board optimization is 0.6 to 1.5 MEUR. The indirect profit potential is greater, because the sawing part of the production process is very flexible with respect to changes in market demand. The saw line can easily produce a broad product matrix from a broad log supply, in which side board optimization plays a key role. The production planning function should be developed to respond to the possibilities offered by side board optimization. The product matrix and the corresponding sawing patterns for the incoming log supply should be rebuilt in those respects where side board optimization allows variation.
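As a rough illustration of the rotation accuracy criterion quoted above, a minimal sketch (with invented sample errors) that checks both conditions: at least 90 % of the errors inside the ±10° window, and the absolute mean error plus the standard deviation around it at most 10°.

```python
# Hedged sketch of the log-rotation accuracy criterion; sample data is invented.
import statistics

def rotation_ok(errors_deg, window=10.0, share=0.90):
    within = sum(abs(e) <= window for e in errors_deg) / len(errors_deg)
    spread = abs(statistics.mean(errors_deg)) + statistics.stdev(errors_deg)
    return within >= share and spread <= window

print(rotation_ok([-3.2, 1.5, 8.9, -7.1, 0.4, 4.8, -2.0, 6.3, -9.5, 3.1]))  # True
```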
Abstract:
The aim of this thesis is to examine the peso problem and devaluation expectations in the following Latin American countries: Argentina, Brazil, Costa Rica, Uruguay, and Venezuela. In addition, it is examined whether the peso problem can explain the irregular behavior of interest rates before an actual devaluation takes place. To make this examination possible, the market's expected probability of devaluation is calculated for the countries under study. The expected probability of devaluation is calculated for the period from January 1996 to December 2006 using two different models. According to the interest rate differential model, the market's devaluation expectations can be calculated from the interest rate differential between countries. Second, the Probit model uses several macroeconomic factors as explanatory variables when calculating the expected probability of devaluation. It is also examined how the development of individual macroeconomic variables affects the expected probability of devaluation. The empirical results show that the studied Latin American countries had a peso problem in the period from January 1996 to December 2006. According to the results of the interest rate differential model, a peso problem was found in all the studied countries except Argentina. Correspondingly, according to the Probit model, a peso problem was found in all the studied countries. The results also show that the irregular development of interest rates before an actual devaluation can be explained by the peso problem. The results of the Probit model further show that the development of macroeconomic variables follows no single pattern in how it affects the market's devaluation expectations in Latin America. Rather, the effects appear to be country-specific.
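A minimal sketch of the interest rate differential idea, under the standard uncovered interest parity assumption that the differential equals the expected devaluation; the rates and the assumed devaluation size conditional on devaluation are illustrative, not figures from the thesis.

```python
# Hedged sketch: back out a devaluation probability from an interest differential,
# assuming uncovered interest parity and an assumed conditional devaluation size.

def expected_devaluation_prob(i_domestic, i_foreign, assumed_dev_size):
    """p such that p * assumed_dev_size = i_domestic - i_foreign, clamped to [0, 1]."""
    differential = i_domestic - i_foreign
    return max(0.0, min(1.0, differential / assumed_dev_size))

# e.g. 18 % domestic rate, 5 % foreign rate, 30 % devaluation if it occurs
print(expected_devaluation_prob(0.18, 0.05, 0.30))   # -> ~0.433
```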
Abstract:
In order that the radius, and thus the non-uniform structure of the teeth and of the other electrical and magnetic parts of the machine, may be taken into consideration, the calculation of an axial flux permanent magnet machine is conventionally done by means of 3D FEM methods. This calculation procedure, however, requires a lot of time and computer resources. This study shows that analytical methods can also be applied to perform the calculation successfully. The procedure of the analytical calculation can be summarized in the following steps: first the magnet is divided into slices, then the calculation is carried out for each section individually, and finally the partial results are combined into the final results. It is obvious that using this method can save a lot of design and calculation time. The calculation program is designed to model the magnetic and electrical circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that it takes into account possible magnetic saturation of the iron parts. The result of the calculation is the torque of the motor, including its vibrations. The motor geometry, the materials, and either the torque or the pole angle are defined, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits on the size and number of the pole pairs, nor on many other factors. The calculation steps and the number of different sections of the magnet are selectable, but the calculation time depends strongly on these choices. The results are compared to measurements of real prototypes. The permanent magnet creates part of the flux in the magnetic circuit. The form and amplitude of the flux density in the air gap depend on the geometry and material of the magnetic circuit, on the length of the air gap, and on the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot opening area. The calculation is simple and fast if the magnet is square-shaped and has no skew in relation to the stator slots. With a more complicated magnet shape, the calculation has to be done in several sections. It is clear that as the number of sections increases, the result becomes more accurate. In a radial flux motor, all sections of the magnets create force at the same radius. In an axial flux motor, each radial section creates force at a different radius, and the torque is the sum of these contributions. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air gap, magnet, and slot, is modelled with a reluctance net that takes the saturation of the iron into account. This means that several iterations, in which the permeability is updated, have to be performed in order to obtain the final results. The motor torque is calculated using the instantaneous flux linkage and stator currents. The flux linkage is the part of the flux, created by the permanent magnets and the stator currents, that passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Due to the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, even though semimagnetic slot wedges are used in some cases. At the slot opening faces the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces on the rotor. This phenomenon is called cogging.
The flux in the slot opening area is not equal on the two sides of the opening, nor in different slot openings, so these forces do not compensate for one another. In the calculation it is assumed that the flux entering the left side of the opening is the component to the left of the geometrical centre of the slot. This torque component, together with the torque component calculated using the Lorentz force, makes up the total torque of the motor. It is easy to see that when all the magnet edges, where the derivative of the magnet flux density is at its highest, enter the slot openings at the same time, the result is a considerable cogging torque. To reduce the cogging torque, the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch, and thus the high derivative component will also be spread evenly over the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be made in many different ways. All the magnets may have a different deflection from the symmetrical centre point, or they can, for example, be shifted in pairs. There are some factors that limit the deflection. The first is that the magnets cannot overlap; the magnet shape and its width relative to the pole define the deflection in this case. The other factor is that shifting the poles limits the maximum torque of the motor. If the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the other increases, thus reducing the air-gap magnetization. The asymmetric model needs some assumptions and simplifications in order to limit the size of the model and the calculation time. The reluctance net is made for a symmetric distribution. If the magnets are distributed asymmetrically, the flux in the different pole pairs will not be exactly the same. Therefore, the assumption that the flux flows from the edges of the model to the next pole pairs, in the calculation model from one edge to the other, is not correct. If this fact were to be taken into account in multi-pole-pair machines, all the poles, in other words the whole machine, would have to be modelled in the reluctance net. The error resulting from this incorrect assumption is, nevertheless, negligible.
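A minimal sketch of the radial slicing described above: each radial section contributes force at its own radius, and the shaft torque is the sum of the contributions. The function force_per_slice is a hypothetical stand-in for the reluctance-net and flux-linkage calculation of the thesis; all numbers are illustrative.

```python
# Hedged sketch: torque of an axial flux machine as a sum over radial sections.

def axial_flux_torque(r_inner, r_outer, n_slices, force_per_slice):
    dr = (r_outer - r_inner) / n_slices
    torque = 0.0
    for k in range(n_slices):
        r = r_inner + (k + 0.5) * dr      # mid-radius of slice k
        torque += force_per_slice(r, dr) * r
    return torque

# Illustrative only: tangential force proportional to the swept area of the slice.
# More slices give a more accurate result, as noted in the abstract.
print(axial_flux_torque(0.05, 0.10, 20, lambda r, dr: 500.0 * r * dr))
```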
Abstract:
The patent system was created for the purpose of promoting innovation by granting inventors a legally defined right to exclude others in return for public disclosure. Today, patents are being applied for and granted in greater numbers than ever, particularly in new areas such as biotechnology and information and communications technology (ICT), in which research and development (R&D) investments are also high. At the same time, the patent system has been heavily criticized. It has been claimed that it discourages rather than encourages the introduction of new products and processes, particularly in areas that develop quickly, lack a one-product-one-patent correlation, and in which the emergence of patent thickets is characteristic. A further concern, which is particularly acute in the U.S., is the granting of so-called 'bad patents', i.e. patents that do not factually fulfil the patentability criteria. From the perspective of technology-intensive companies, patents could, irrespective of the above, be described as the most significant intellectual property right (IPR), having the potential to be used to protect products and processes from imitation, to limit competitors' freedom-to-operate, to provide such freedom to the company in question, and to exchange ideas with others. In fact, patents define the boundaries of ownership in relation to certain technologies. They may be sold or licensed on their own, or they may be components of all sorts of technology acquisition and licensing arrangements. Moreover, with the possibility of patenting business-method inventions in the U.S., patents are becoming increasingly important for companies basing their businesses on services. The value of a patent depends on the value of the invention it claims and on how that invention is commercialized. Thus, most patents are worth very little, and most inventions are not worth patenting: it may be possible to protect them in other ways, and the costs of protection may exceed the benefits. Moreover, instead of making all inventions proprietary and seeking to appropriate as high returns on investment as possible through patent enforcement, it is sometimes better to allow some of them to be disseminated freely in order to maximize market penetration. In fact, the ideology of openness is well established in the software sector, which has been the breeding ground for the open-source movement, for instance. Furthermore, industries, such as ICT, that benefit from network effects do not shun the idea of setting open standards or opening up their proprietary interfaces to allow everyone to design products and services that are interoperable with theirs. The problem is that even though patents do not, strictly speaking, prevent access to protected technologies, they have the potential to do so, and conflicts of interest are not rare. The primary aim of this dissertation is to increase understanding of the dynamics and controversies of the U.S. and European patent systems, with a focus on the ICT sector. The study consists of three parts. The first part introduces the research topic and the overall results of the dissertation. The second part comprises a publication in which academic, political, legal and business developments concerning software and business-method patents are investigated, and contentious areas are identified. The third part examines the problems with patents and open standards, both of which carry significant economic weight in the ICT sector. Here, the focus is on so-called submarine patents, i.e.
patents that remain unnoticed during the standardization process and then emerge after the standard has been set. The factors that contribute to the problems are documented, and the practical and juridical options for alleviating them are assessed. In total, the dissertation provides a good overview of the challenges and pressures for change that the patent system is facing, and of how these challenges are reflected in standard setting.
Abstract:
This Master's thesis defines a simulation model of a backup system, i.e. a backup model. The operation of the backup system is optimized with the help of this backup model. The goal of the optimization is to improve the efficiency of the backup system, and the improvement is sought by maximally utilizing the existing resources of the backup system. The backup model is optimized with an evolutionary algorithm. The optimization has several mutually conflicting objectives. The multi-objective optimization problem is converted into a single-objective one by forming the objective function with the weighted sum method. In parallel with this method, Pareto optimization is also used: the search for points on the Pareto-optimal front is guided close to the optimum point of the weighted sum method. The implementation of the evolutionary algorithm exploits problem-specific knowledge of backup systems. The result of the work is a simulation and optimization tool for backup systems. The simulation tool is used to survey the functioning of the current backup system, and optimization is used to make its operation more efficient. The tool can also be used in designing new backup systems and in extending existing ones.
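A minimal sketch of the weighted sum scalarization mentioned above, with invented objective values and weights; the thesis additionally steers a Pareto search toward the optimum of this scalarized objective.

```python
# Hedged sketch: collapse several conflicting objectives into one scalar fitness
# that an evolutionary algorithm can minimize. Objectives and weights are invented.

def weighted_sum(objectives, weights):
    """Scalar fitness: sum_i w_i * f_i(x); weights typically sum to 1."""
    return sum(w * f for w, f in zip(weights, objectives))

# e.g. (backup window length, device load, restore time), each normalized to [0, 1]
print(weighted_sum([0.40, 0.75, 0.20], [0.5, 0.3, 0.2]))   # -> 0.465
```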
Abstract:
The purpose of this work is to bring together the measurement problems of the pulping process and the possible measurement techniques for solving them. The emphasis is on online measurement techniques. The work consists of three parts. The first part is a literature study, which presents the basic measurements and control needs of a modern pulping process. It covers the whole fibre line from wood handling to bleaching, as well as the chemical recovery cycle: the evaporation plant, the recovery boiler, the causticizing plant, and the lime kiln. In the second part, the measurement problems and possible measurement techniques are compiled into a "roadmap". The information was gathered by visiting three Finnish pulp mills and by interviewing experts in equipment and measurement technology. Based on the interviews, there appears to be a need for a better understanding of process chemistry, which is why concentration measurements were chosen as the subject of further study. The last part presents possible measurement techniques for solving the concentration measurement problems. The chosen techniques are near-infrared spectroscopy (NIR), Fourier transform infrared spectroscopy (FTIR), online capillary electrophoresis (CE), and laser-induced plasma emission spectroscopy (LIPS). All of these techniques can be used as online-coupled process development tools. The development costs have been estimated for an online device connected to process control; they range from zero person-years for the FTIR technique to five person-years for the CE device, depending on the maturity of the technique and its readiness for solving a given problem. The last part of the work also assesses the techno-economic feasibility of solving one measurement problem: the washing loss measurement. Lignin content would describe the true washing loss better than the current measurements; at present, either the sodium or the COD washing loss is measured. Lignin content can be measured with UV absorption. A CE device could also be used for washing loss measurement, at least in the process development phase. The economic analysis is based on many simplifications and is not directly suitable for supporting investment decisions. A better measurement and control system could stabilize the operation of the washing plant. An investment in a stabilizing system is profitable if the actual operating point is far enough from the cost minimum, or if the operation of the washer fluctuates, i.e. the standard deviation of the washing loss is large. For a measurement and control system costing €50,000, the payback time in unstable operation is less than 0.5 years if the COD washing loss varies between 5.2 and 11.6 kg/odt around a setpoint of 8.4 kg/odt. The dilution factor then varies between 1.7 and 3.6 m3/odt around a setpoint of 2.5 m3/odt.
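A minimal sketch of the simple payback reasoning behind the last figures; the annual savings value below is an invented placeholder chosen only to reproduce the "less than 0.5 years" order of magnitude for the €50,000 system, whereas the thesis derives its savings from the washing-loss variation.

```python
# Hedged sketch of simple payback time: investment / annual savings, in years.

def payback_years(investment_eur, annual_savings_eur):
    return investment_eur / annual_savings_eur

# Invented annual savings of 110 000 €/a gives ~0.45 years for a 50 000 € system.
print(payback_years(50_000, 110_000))
```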
Abstract:
Mottling is one of the key defects in offset printing. Mottling can be defined as unwanted unevenness of print. In this work, the diameter of a mottle spot is defined to be between 0.5 and 10.0 mm. There are several types of mottling, but the reason behind the problem is still not fully understood. Several commercial machine vision products for the evaluation of print unevenness have been presented. Two of the methods used in these products have been implemented in this thesis: one is the cluster method and the other is the band-pass method. The properties of the human visual system have been taken into account in the implementation of both methods. The index produced by the cluster method is a weighted sum of the number of found spots, and the index produced by the band-pass method is a weighted sum of the coefficients of variation of gray levels for each spatial band. Both methods produce larger indices for visually poor samples, so they can discern good samples from poor ones. The difference between the indices for good and poor samples is slightly larger for the cluster method. However, without samples evaluated by human experts, the goodness of these results remains questionable. This comparison will be left to the next phase of the project.
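A minimal sketch of the band-pass index described above, assuming a difference-of-Gaussians band decomposition and illustrative band weights in place of the human-visual-system weighting used in the thesis; the sigmas, weights, and random test image are all invented.

```python
# Hedged sketch: weighted sum of per-band coefficients of variation of gray levels.
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_mottle_index(gray, sigmas=(1, 2, 4, 8, 16), weights=(1, 2, 2, 1)):
    index = 0.0
    previous = gaussian_filter(gray, sigmas[0])
    for sigma, w in zip(sigmas[1:], weights):
        smoothed = gaussian_filter(gray, sigma)
        band = previous - smoothed                        # detail between two blur scales
        cv = band.std() / max(gray.mean(), 1e-9)          # coefficient of variation
        index += w * cv
        previous = smoothed
    return index

rng = np.random.default_rng(0)
print(bandpass_mottle_index(rng.normal(128.0, 10.0, size=(256, 256))))
```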
Abstract:
In recent years, the vulnerability of the network to natural hazards has been recognized. Moreover, operating at the limits of the network's transmission capabilities has resulted in major outages during the past decade. One of the reasons for operating at these limits is that the network has become outdated. Therefore, new technical solutions are studied that could provide more reliable and more energy-efficient power distribution, and also better profitability for the network owner. It is the development and price of power electronics that have made DC distribution an attractive alternative again. In this doctoral thesis, one type of low-voltage DC distribution system is investigated. More specifically, it is studied which current technological solutions, used at the customer-end, could provide better power quality for the customer than the present system. To study the effect of a DC network on the customer-end power quality, a bipolar DC network model is derived. The model can also be used to identify the supply parameters when the V/kW ratio is approximately known. Although the model describes the average behavior, it is shown that the instantaneous DC voltage ripple should be limited, and guidelines are given for choosing an appropriate capacitance value for the capacitor located at the input DC terminals of the customer-end. The structure of the customer-end is also considered. A comparison between the most common solutions is made based on their cost, energy efficiency, and reliability. In the comparison, special attention is paid to passive filtering solutions, since the filter is considered a crucial element when the lifetime expenses are determined. It is found that the filter topology most commonly used today, namely the LC filter, does not provide an economic advantage over the hybrid filter structure. Finally, some typical control system solutions are introduced and their shortcomings are presented. As a solution to the customer-end voltage regulation problem, an observer-based control scheme is proposed. It is shown how different control system structures affect the performance. Performance meeting the requirements is achieved using only one output measurement when operating in a rigid network; similar performance can be achieved in a weak grid with a DC voltage measurement. A further improvement can be achieved when adaptive gain-scheduling-based control is introduced. In conclusion, the final power quality is determined by the sum of various factors, and the thesis provides guidelines for designing a system that improves the power quality experienced by the customer.
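A minimal sketch of one common capacitor sizing rule of the kind the capacitance guidelines concern (the thesis's own rule is not reproduced in the abstract): for a load power component pulsating at frequency f, the capacitance that keeps the peak-to-peak DC ripple below dv is roughly C = P / (2*pi*f * V_dc * dv). All numbers below are illustrative.

```python
# Hedged sketch of a ripple-based DC-link capacitor sizing rule; values invented.
import math

def dc_link_capacitance(power_w, f_ripple_hz, v_dc, dv_pp):
    """Capacitance (F) limiting peak-to-peak ripple to dv_pp at the given power."""
    return power_w / (2 * math.pi * f_ripple_hz * v_dc * dv_pp)

# e.g. 5 kW customer-end, 100 Hz ripple (single-phase load), 750 V DC, 2 % ripple
print(dc_link_capacitance(5_000, 100, 750, 0.02 * 750))   # -> ~7.1e-4 F (~707 µF)
```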