15 results for Submultiplicative graphs
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, the node can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an as small as possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies are defined which are able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized, in the context of design space exploration, by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
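As a minimal illustration of the actor model described above (a sketch only; the actor names, queue API and scheduler are invented for this example and are not the thesis' RVC-CAL tooling), the following shows nodes that communicate solely through queues, firing rules, and the difference between fully dynamic and quasi-static scheduling:

```python
from collections import deque

class Actor:
    """A dataflow node: fires independently once its firing rule holds."""
    def __init__(self, name, inputs, outputs, rule, action):
        self.name, self.rule, self.action = name, rule, action
        self.inputs, self.outputs = inputs, outputs

    def can_fire(self):
        # Firing rule: each input queue must hold enough tokens.
        return all(len(q) >= n for q, n in zip(self.inputs, self.rule))

    def fire(self):
        # Consume input tokens, compute, push tokens to output queues.
        args = [[q.popleft() for _ in range(n)]
                for q, n in zip(self.inputs, self.rule)]
        for q, tokens in zip(self.outputs, self.action(*args)):
            q.extend(tokens)

# A two-actor pipeline: scale each sample, then sum pairs.
src, mid, sink = deque([1, 2, 3, 4]), deque(), deque()
scale = Actor("scale", [src], [mid], rule=[1],
              action=lambda xs: [[2 * xs[0]]])
pairsum = Actor("pairsum", [mid], [sink], rule=[2],
                action=lambda xs: [[xs[0] + xs[1]]])

# Fully dynamic scheduling: every firing rule is tested on every
# round.  A quasi-static scheduler would instead precompute the
# repeating static sequence (scale, scale, pairsum) and test data
# availability only at the entry of that sequence.
actors = [scale, pairsum]
while any(a.can_fire() for a in actors):
    for a in actors:
        if a.can_fire():
            a.fire()
print(list(sink))  # [6, 14]
```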
Abstract:
The objective of the study was to gather the information needs of the sales personnel of Elcoteq Network Oyj and to make them concrete through reporting. The study focused on the customer- and project-specific information needed at different stages of the customer flow. The aim was to improve reporting to support the management of the customer and the sales case, starting from the new-customer stage of the customer flow and from project evaluation. The information needs were gathered through interviews and by participating in related projects. With the help of the interviews and theory, the key characteristics of project business and the factors affecting customer profitability were collected together and made concrete through proposed reporting improvements. The study showed that, from the sales personnel's point of view, new reports should be created to support the management of the customer and the project. Forecasts can be compared with actuals, and the monitoring of both the customership and the sales case improves with the new reports. Budgets and customer-specific targets can be drawn up more reliably, and an overall picture of the profitability of the customership and the project can be seen from these reports both as graphs and as numerical data.
Abstract:
This Master's thesis deals with the development of the design and testing environment for the user interface software of Nokia Mobile Phones' mobile phones. Two software modules were added to the environment to assist simulation and version control. With the visualization tool, the behaviour of a mobile phone can be traced onto the design diagrams as state transitions, while with the comparison application the differences between diagrams can be seen graphically. The developed applications improve the user interface design process by making error detection, optimization and version control more efficient. The benefits of the visualization tool are significant, because the behaviour of user interface applications can be observed in the design diagrams during real-time simulation, so errors can be located immediately. In addition, the tool can be used when optimizing the diagrams, which reduces the size and memory requirements of the applications. The graphical comparison tool brings an advantage to concurrent software development: the differences between different versions of design diagrams can be seen directly in the diagram instead of through manual comparison. Both tools were successfully taken into use at NMP at the beginning of 2001.
Abstract:
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators, etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines, and efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required, since the differential equations must be solved in real time; unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the stiffness of the model itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits using explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied: the pressure drop approaching zero in the turbulent orifice model, and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative methods for modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent region. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, e.g. when a valve is closed, when an actuator is driven against an end stop, or when an external force makes the actuator switch its direction during operation. This means that, in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur as the pressure drop comes close to zero, since the first derivative of flow with respect to the pressure drop approaches infinity when the pressure drop approaches zero. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed, using a cubic spline function to describe the flow in the laminar and transition regions. The parameters of the cubic spline function are selected such that its first derivative is equal to the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed; this investigation is made for the two-regime flow orifice model.
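As a worked sketch of the two-regime orifice idea (with invented parameter values, not the thesis' exact formulation): the turbulent law Q = K*sqrt(Δp) is kept above a transition pressure, and below it an odd cubic through the origin matches the turbulent branch's value and first derivative, so dQ/dΔp stays finite at Δp = 0.

```python
import numpy as np

def orifice_flow(dp, K=2e-8, p_tr=1e5):
    """Two-regime orifice model (illustrative sketch).

    Turbulent branch Q = sign(dp)*K*sqrt(|dp|) for |dp| >= p_tr;
    below p_tr an odd cubic a*dp + c*dp**3 whose value and first
    derivative match the turbulent branch at |dp| = p_tr, so that
    dQ/d(dp) stays finite as dp -> 0.  K and p_tr are invented
    illustrative values, not thesis parameters.
    """
    # Matching value and slope at p_tr gives:
    #   a*p_tr + c*p_tr**3 = K*sqrt(p_tr)
    #   a + 3*c*p_tr**2    = K/(2*sqrt(p_tr))
    a = 1.25 * K / np.sqrt(p_tr)
    c = -0.25 * K / p_tr ** 2.5
    dp = np.asarray(dp, dtype=float)
    turbulent = np.sign(dp) * K * np.sqrt(np.abs(dp))
    spline = a * dp + c * dp ** 3
    return np.where(np.abs(dp) >= p_tr, turbulent, spline)

# The purely turbulent model has infinite slope at dp = 0; the
# spline caps it at a = 1.25*K/sqrt(p_tr).
print(orifice_flow([0.0, 5e4, 1e5, 4e5]))
```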
Very small volumes exist especially inside many types of valves, as well as between them. The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation, and particularly in real-time simulation these numerical problems are a great weakness. The system stiffness approaches infinity as the fluid volume approaches zero. If fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/B_e of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided; the method is also freely applicable regardless of the integration routine used. A strength of both of the above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled; in this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and shows several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
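The pseudo-dynamic idea can be sketched as follows (a toy example with an invented gain and linearized orifices, not the thesis' solver): rather than integrating the stiff pressure state of a tiny volume, an inner cascade loop relaxes its pressure to the value at which the net flow into the volume vanishes.

```python
def pseudo_dynamic_pressure(q_net, p0, gain=2e7, tol=1e-9, max_iter=200):
    """Steady-state pressure of a tiny fluid volume (toy sketch).

    Instead of integrating dp/dt = (Be/V)*q_net(p), which becomes
    stiff as V -> 0, relax p in an inner cascade loop (pseudo-time
    explicit Euler) until the net flow into the volume vanishes.
    The gain and tolerance are invented and must be chosen so the
    loop converges; this shows the idea, not the thesis' solver.
    """
    p = p0
    for _ in range(max_iter):
        q = q_net(p)
        if abs(q) < tol:
            break
        p += gain * q  # relaxation step of the cascade loop
    return p

# Tiny node volume between two identical linearized orifices:
# supply at 10 MPa, tank at 0 MPa.  The node pressure settles
# where the flows balance, i.e. at half the supply pressure.
k = 2e-8
q_net = lambda p: k * (1e7 - p) - k * p
print(pseudo_dynamic_pressure(q_net, p0=0.0))  # ~5e6 Pa
```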
Abstract:
Visual data mining (VDM) tools employ information visualization techniques in order to represent large amounts of high-dimensional data graphically and to involve the user in exploring data at different levels of detail. The users are looking for outliers, patterns and models – in the form of clusters, classes, trends, and relationships – in different categories of data, e.g., financial or business information. The focus of this thesis is the evaluation of multidimensional visualization techniques, especially from the business user's perspective. We address three research problems. The first problem is the evaluation of projection-based visualizations with respect to their effectiveness in preserving the original distances between data points and the clustering structure of the data. In this respect, we propose the use of existing clustering validity measures. We illustrate their usefulness in evaluating five visualization techniques: Principal Components Analysis (PCA), Sammon's Mapping, the Self-Organizing Map (SOM), Radial Coordinate Visualization and Star Coordinates. The second problem is concerned with evaluating different visualization techniques as to their effectiveness in visual data mining of business data. For this purpose, we propose an inquiry evaluation technique and conduct the evaluation of nine visualization techniques. The visualizations under evaluation are Multiple Line Graphs, Permutation Matrix, Survey Plot, Scatter Plot Matrix, Parallel Coordinates, Treemap, PCA, Sammon's Mapping and the SOM. The third problem is the evaluation of the quality of use of VDM tools. We provide a conceptual framework for evaluating the quality of use of VDM tools and apply it to the evaluation of the SOM. In the evaluation, we use an inquiry technique for which we developed a questionnaire based on the proposed framework. The contributions of the thesis consist of three new evaluation techniques and the results obtained by applying them. The thesis provides a systematic approach to the evaluation of various visualization techniques: first, we performed and described the evaluations in a systematic way, highlighting the evaluation activities and their inputs and outputs; secondly, we integrated the evaluation studies into the broad framework of usability evaluation. The results of the evaluations are intended to help developers and researchers of visualization systems to select appropriate visualization techniques in specific situations. The results also contribute to the understanding of the strengths and limitations of the evaluated visualization techniques and, further, to their improvement.
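A minimal sketch of the first evaluation idea, scoring a projection with an existing clustering validity measure (the silhouette score is used here purely as an illustration; it is not necessarily one of the measures used in the thesis):

```python
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

# Clustered high-dimensional data with known cluster labels.
X, y = make_blobs(n_samples=300, n_features=10, centers=4,
                  random_state=0)

# Validity measure on the original data vs. its 2-D projection:
# a projection that preserves the clustering structure should
# score close to the original.
X2 = PCA(n_components=2).fit_transform(X)
print("original :", silhouette_score(X, y))
print("PCA (2-D):", silhouette_score(X2, y))
```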
Abstract:
The purpose of this thesis was to find ways to ease the determination of target inventory values for new products in the case company. The problem stems from the fact that no demand data is available for new products. Correct target inventory values are important to the case company in order to minimize waste and, especially, to maintain customer satisfaction. The topic was approached through the strategic success factors of the grocery trade and through inventory management. Based on the theoretical part, an empirical part was carried out in which factors explaining the demand of two of the case company's product groups were sought by means of panel regression. First, however, the general demand structure of the product groups was examined with various demand plots and correlation matrices. The factor with the greatest effect on the target inventory value is demand. Brand and price, in turn, emerged as the strongest variables determining demand. When estimating demand, the other variables that increase it must always be considered case by case. For juice soups, demand was increased most by a strong brand and a large package size, whereas for grated cheese a low price and a small package size were the most important factors affecting sales. The other demand-increasing characteristics of the product groups were the same as those of the most popular products in the product group.
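A hedged sketch of the kind of panel regression used in the empirical part (synthetic data, invented variable names, and product fixed effects as a simple stand-in for the actual model specification):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic weekly sales panel: 8 products observed for 52 weeks,
# with a true price effect of -3 and product-specific levels.
rng = np.random.default_rng(0)
rows = []
for product in range(8):
    for week in range(52):
        price = 2.0 + 3.0 * rng.random()
        sales = 50.0 - 3.0 * price + 2.0 * product + rng.normal(0, 2)
        rows.append((product, week, price, sales))
df = pd.DataFrame(rows, columns=["product", "week", "price", "sales"])

# Pooled OLS with product fixed effects, a basic panel specification;
# the thesis' model also involved brand and package-size variables.
fit = smf.ols("sales ~ price + C(product)", data=df).fit()
print(fit.params["price"])  # recovers a value close to -3
```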
Abstract:
This Master's thesis continued the study of the potential for improving the performance of the steam turbines at the Loviisa power plant. The objective was to develop on-line measurements of the steam turbines' performance during operation. The study examined the steady-state TEMPO program (The Thermal Performance Monitoring And Optimisation system) developed by the Norwegian IFE, its user instructions and its operating principles. The thesis gives a broad presentation of the theory of data reconciliation, on which TEMPO's operation is based. The actual expansion process of a turbine was examined, because understanding it plays an important role in monitoring turbine performance. The study also presents possible turbine faults and the processes by which they develop. The functionality of the analysis program for TEMPO's reconciled result files was examined by introducing deliberate deviations into real measurement files. The plots produced with the analysis program were compared with plots of the actual process operating conditions, and it was examined how deviations can be detected from the plots. In the course of the study, development proposals were found for the TEMPO program. With these changes, the program can model the turbine process of the Loviisa power plant more accurately, and the results become more useful.
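The data reconciliation theory underlying TEMPO can be illustrated with the classic linear weighted least-squares formulation (a textbook sketch, not TEMPO's implementation): measurements are adjusted as little as possible so that the balance constraints hold exactly.

```python
import numpy as np

def reconcile(y, sigma, A, b):
    """Linear data reconciliation (textbook sketch).

    Adjusts measurements y (standard deviations sigma) as little as
    possible in the weighted least-squares sense so that the linear
    balance constraints A @ x = b hold exactly:
        x = y - S A^T (A S A^T)^(-1) (A y - b),  S = diag(sigma**2)
    """
    S = np.diag(np.asarray(sigma, float) ** 2)
    r = A @ y - b
    return y - S @ A.T @ np.linalg.solve(A @ S @ A.T, r)

# Mass balance around a splitter: m1 = m2 + m3.  The raw readings
# violate the balance by 0.7 kg/s; reconciliation spreads the
# correction according to each sensor's uncertainty.
y = np.array([10.2, 6.1, 3.4])      # measured flows, kg/s
sigma = np.array([0.5, 0.2, 0.2])   # measurement std. deviations
A = np.array([[1.0, -1.0, -1.0]])   # balance residual: m1 - m2 - m3
x = reconcile(y, sigma, A, b=np.zeros(1))
print(x, A @ x)                     # reconciled flows, ~zero residual
```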
Abstract:
The winter oxygen conditions in North Savo were monitored in 1997-2008. Four smallish lakes were monitored (Iso-Valkeinen, Kevätön, Kolmisoppi and Vehmasjärvi), which differ in depth, trophic level and humus content. These example lakes were intended to give more general indications of how the oxygen situation develops during winter. Based on the early-winter results, an annual bulletin was issued assessing the possibility of oxygen depletion during late winter. The public could also follow the oxygen and temperature results through the web service of the North Savo Regional Environment Centre. The oxygen consumption rate in the most eutrophic of the monitored lakes was twice that of the more oligotrophic ones, and the difference only grew with depth. In the near-bottom water layer, an oxygen concentration of 1 mg/l was consumed in about two weeks in the oligotrophic lakes and in about three days in the eutrophic one. The variation between years was, however, very large, and the variation was large also in the oligotrophic lakes. The timing of freeze-up and the water temperature have a significant effect on how the late-winter oxygen situation turns out. Based on the data from the monitored lakes, it can be roughly estimated that a one-month delay in freeze-up, or the cooling of the water column by one degree before freeze-up, means about one third higher oxygen concentration in late winter. Strong connections were found between the deterioration of the oxygen situation in the water column and the nutrient and iron concentrations. In the data for all lakes, the deterioration of the oxygen situation led most strongly to increases in the ammonium nitrogen, total phosphorus and iron concentrations of the near-bottom water. The concentration changes during winter were largest in the most eutrophic lake, Kevätön, where total phosphorus concentrations increased on average tenfold, phosphate phosphorus concentrations on average 20-fold, and iron concentrations more than sevenfold.
Abstract:
The objective of this Bachelor's thesis was to examine the possibilities of separating the chemical forms of 14C using the sampling equipment existing at the Loviisa power plant. A further aim was to determine which zeolite type, of types 4A, 5A and 13X, is best suited for this purpose. The literature part of the thesis deals with the 14C emissions of a nuclear power plant, focusing mainly on the Loviisa VVER plant. Regarding adsorption, commercially used adsorption materials are presented, and adsorption is discussed as a physical and chemical phenomenon. In addition, the basic principles of two desorption methods are presented. At the end of the literature part, the factors affecting the study are summarized and the previously used sampling equipment is presented. The experimental part presents the equipment used in the work and describes how the measurements were carried out with a liquid scintillation counter. Finally, the processing of the measurement results and the results thus obtained are presented.
Abstract:
Pumping systems account for up to 22% of the energy consumed by electric motors in European industry. Many studies have shown that there is also considerable potential for energy savings in these systems through improvement of the devices, the flow control or the surrounding system. The best method for more energy-efficient pumping has to be found for each system separately. This thesis studies how the energy saving potential in a reservoir pumping system is affected by surrounding variables, such as the static head variation and the friction factor. The objective is to create generally applicable graphs for quickly comparing methods of reducing a pumping system's energy costs. The results are several graphs showing how the chosen variables affect the energy saving potential of the pumping system in one specific case. To judge whether these graphs are generally applicable, more testing with different pumps and environments is required.
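The trade-off studied here can be sketched numerically (all curves and numbers below are invented examples, not the thesis' data): for a system curve H = H_static + k·Q², throttling forces the pump to produce its full pump-curve head while speed control produces only the head the system actually needs.

```python
import numpy as np

# System curve H = H_static + k*Q^2 and a quadratic pump curve;
# all parameter values are invented for illustration.
rho_g = 1000.0 * 9.81               # water density * gravity
H_static, k_fric = 20.0, 5e3
H0, a_pump = 50.0, 1.2e4
eta = 0.7                           # same efficiency assumed for both

for Q in np.linspace(0.02, 0.04, 3):        # flow, m^3/s
    H_sys = H_static + k_fric * Q ** 2      # head the system needs
    H_pump = H0 - a_pump * Q ** 2           # head a throttled pump makes
    P_throttle = rho_g * Q * H_pump / eta   # valve burns the excess head
    P_vsd = rho_g * Q * H_sys / eta         # speed control matches H_sys
    print(f"Q={Q:.3f} m3/s  saving={1 - P_vsd / P_throttle:5.1%}")
```

In this toy case, a larger static head narrows the gap between the two curves and shrinks the saving, which is precisely the kind of dependence the thesis' graphs are meant to chart.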
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is the extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate; examples include the identification of faults from seismic data and the identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
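The ridge definition can be illustrated with a subspace-constrained mean-shift iteration (a simpler stand-in for the thesis' trust region Newton method): a step toward a 1-D ridge of a Gaussian kernel density moves only within the eigenvectors of the density Hessian orthogonal to the ridge direction.

```python
import numpy as np

def scms_step(x, data, h):
    """One subspace-constrained mean-shift step toward a 1-D ridge
    of a Gaussian kernel density estimate (illustrative sketch)."""
    d = data - x                                  # (n, D)
    w = np.exp(-0.5 * np.sum(d ** 2, axis=1) / h ** 2)
    # Hessian of the (unnormalized) kernel density at x.
    hess = (d * w[:, None]).T @ d / h ** 4 \
        - np.sum(w) * np.eye(len(x)) / h ** 2
    ms = (w @ data) / np.sum(w) - x               # mean-shift vector
    # Constrain the step to the eigenvectors orthogonal to the ridge:
    # for a 1-D ridge, drop the eigenvector of the largest eigenvalue.
    _, vecs = np.linalg.eigh(hess)                # ascending order
    V = vecs[:, :-1]
    return x + V @ (V.T @ ms)

# Noisy points along a parabola; project a nearby point onto the ridge.
rng = np.random.default_rng(1)
t = rng.uniform(-2, 2, 400)
data = np.c_[t, t ** 2] + rng.normal(0, 0.1, (400, 2))
x = np.array([0.5, 0.6])
for _ in range(50):
    x = scms_step(x, data, h=0.3)
print(x)  # should land close to the curve x2 = x1**2
```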
Abstract:
The research in this Master's thesis project concerns Big Data transfer over parallel data links, and my main objective was to assist the Saint-Petersburg National Research University ITMO research team in accomplishing this project and in applying Green IT methods to the data transfer system. The goal of the team is to transfer Big Data over parallel data links using the SDN OpenFlow approach. My task as a team member was to compare existing data transfer applications in order to determine which achieves the highest data transfer speed in which situations, and to explain the reasons. In the context of this thesis, a comparison between five different utilities was made: Fast Data Transfer (FDT), BBCP, BBFTP, GridFTP, and FTS3. A number of scripts were developed that create random binary data (incompressible, to give a fair comparison between the utilities), execute the utilities with specified parameters, create log files with the results and system parameters, and plot graphs to compare the results. Transferring such enormous volumes of data can take a long time, and hence the need arises to reduce energy consumption and make the transfers greener. In the context of the Green IT approach, our team used a cloud computing infrastructure called OpenStack: it is more efficient to allocate a specific amount of hardware resources to test different scenarios than to use all the resources of our testbed. Testing our implementation on the OpenStack infrastructure ensures that the virtual channel carries no other traffic and that we can achieve the highest possible throughput. After receiving the final results, we are in a position to identify which utilities produce faster data transfers in different scenarios with specific TCP parameters, so that they can be used on real network data links.
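The benchmarking scripts described above can be sketched roughly as follows (the file path, size and the scp stand-in command are placeholders; the real scripts drove FDT, BBCP, BBFTP, GridFTP and FTS3 with their own site-specific command lines):

```python
import os
import subprocess
import time

def make_test_file(path, size_mb):
    """Random bytes are effectively incompressible, so on-the-fly
    compression cannot flatter any utility in the comparison."""
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(os.urandom(1024 * 1024))

def timed_transfer(cmd, size_mb):
    """Run one transfer command and return throughput in MB/s."""
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True)
    return size_mb / (time.perf_counter() - t0)

make_test_file("payload.bin", size_mb=256)
# scp is a stand-in utility and "host" a placeholder; the compared
# utilities (FDT, BBCP, BBFTP, GridFTP, FTS3) would be invoked here
# with their own parameters, logging each result for plotting.
rate = timed_transfer(["scp", "payload.bin", "host:/tmp/"], 256)
print(f"{rate:.1f} MB/s")
```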
Abstract:
The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on one hand, it sheds light on how nature works and how it processes information and, on the other hand, it provides some guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second one studies reaction systems, a modeling framework whose rationale is built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic in the past 15 years. Two main modelling frameworks were initially proposed at the end of the 1990s to capture the gene assembly process of ciliates, namely the intermolecular model and the intramolecular model. They were followed by other model proposals such as template-based assembly and DNA rearrangement pathways recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs. We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) is another nature-inspired modeling framework studied in this thesis. The rationale of reaction systems is based upon two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems form a complementary modeling framework to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of the facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than on the dynamics of concentrations), makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism, based on a novel concept of dominance graph that captures the competition for resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady state and periodicity, in order to do model checking of reaction system-based models. We prove that the complexity of the decision problems related to these properties varies from P to NP- and coNP-complete to PSPACE-complete. We further focus on the mass conservation relation in an RS, introduce the conservation dependency graph to capture the relation between the species, and propose an algorithm for listing the conserved sets of a given reaction system.
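The reaction system formalism itself is compact enough to sketch directly (the toy rule set below is illustrative only and is not the thesis' heat shock response model):

```python
def rs_step(state, reactions):
    """One step of a reaction system: a reaction (R, I, P) is enabled
    when all reactants R are present and no inhibitor in I is; the
    next state is the union of the products of all enabled reactions.
    There is no permanency: entities not produced simply vanish."""
    nxt = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            nxt |= products
    return nxt

# A toy stress switch: stress diverts the system into producing hsp,
# and it returns to the idle hsf state once the stress is gone.
reactions = [
    ({"hsf"}, {"stress"}, {"hsf"}),               # idle maintenance
    ({"hsf", "stress"}, set(), {"hsf", "hsp"}),   # stress response
    ({"hsp"}, {"stress"}, {"hsf"}),               # recovery
]
state = {"hsf", "stress"}
for _ in range(4):
    state = rs_step(state, reactions)
    print(sorted(state))
```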