Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure, and then incrementally extends the program in steps of adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. With the aid of the tool, the programmer reduces a large verification problem into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses.
Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
Abstract:
The background and inspiration for the present study is earlier research into applications of boundary identification in the metal industry. Efficient boundary identification enables smaller safety margins and longer service intervals for the equipment in industrial high-temperature processes, without increased risk of equipment failure. Ideally, a boundary identification method would be based on monitoring some indirect variable that can be measured routinely or at low cost. One such variable for smelting furnaces is the temperature at various positions in the wall. This can be used as input to a boundary identification method for monitoring the furnace wall thickness. We give the background and motivation for choosing the geometrically one-dimensional dynamic model for boundary identification, which is discussed in the latter part of the work, over a multidimensional geometric description. In the industrial applications at hand, the dynamics and the advantages of a simple model structure are more important than an exact geometric description. Solution methods for the so-called sideways heat equation have much in common with boundary identification. We therefore study the properties of the solutions to this equation, the effect of measurement errors and what is usually called contamination by measurement noise, regularization, and more general consequences of the ill-posedness of the sideways heat equation. We study a set of three different methods for boundary identification, of which the first two were developed from a strictly mathematical starting point and the third from a more applied one. The methods have different properties with specific advantages and drawbacks. The purely mathematically based methods are characterized by good accuracy and low numerical cost, though at the price of low flexibility in the formulation of the partial differential equation describing the model. The third, more applied, method is characterized by lower accuracy, caused by a higher degree of ill-posedness of the more flexible model.
For this method, an error estimate was also attempted, which was later observed to agree with practical computations using the method. The study can be regarded as a good starting point and mathematical basis for the development of industrial applications of boundary identification, in particular towards handling nonlinear and discontinuous material properties and sudden changes caused by wall material "falling off". With the methods treated, it appears possible to achieve a robust, fast and sufficiently accurate boundary identification method of limited complexity.
Abstract:
The dissertation deals with the theme of territorial autonomy from a global perspective. The aim is partly to map the territorial autonomies of the world and partly to show how a new method such as fuzzy-set analysis can be used in the field of comparative politics. The research problem is to identify the background factors that explain the emergence of territorial autonomy as such. Territorial autonomies are seen as special arrangements within states. These regions have been granted a special status in relation to other regions within their respective states, and also in relation to the central government in general. The regions can therefore be seen as exceptions within the unitary, federal, regional or decentralized system of the state in question. A survey shows that there are 65 special regions distributed over 25 states in the world. Most of these are islands. The results show that there are two paths leading to territorial autonomy in general. One path is a combination of ethnic distinctiveness and a small population, while the other consists of the combination of historical causes and geographical distance. Both paths are equally valid, and a democratic environment is a precondition.
Abstract:
The aim of the present dissertation is to investigate the marketing culture of research libraries in Finland and to understand the awareness of the knowledge base of library management concerning modern marketing theories and practices. The study was based on the notion that a leader in an organisation can have a large impact on its culture. Therefore, it was considered important to learn about the market orientation that initiates at the top management and flows throughout the whole organisation, thus resulting in a particular kind of library culture. The study attempts to examine the marketing culture of libraries by analysing the marketing attitudes, knowledge (underlying beliefs, values and assumptions), behaviour (market orientation), operational policies and activities, and their service performance (customer satisfaction). The research was based on the assumption that if the top management of libraries has market-oriented behaviour, then their marketing attitudes, knowledge, operational policies and activities and service performance should also be in accordance. The dissertation attempts to connect all these theoretical threads of marketing culture. It investigates thirty-three academic and special libraries in the south of Finland. The library director and three to ten customers from each library participated as respondents in this study. An integrated methodological approach of qualitative as well as quantitative methods was used to gain knowledge of the pertinent issues lying behind the marketing culture of research libraries. The analysis of the whole dissertation reveals that the concept of marketing has a very varied status in the Finnish research libraries. Based on the entire findings, three kinds of marketing cultures emerged: the strong (the high fliers), the medium (the brisk runners), and the weak (the slow walkers).
The high fliers appeared to be modern marketing believers, as their marketing approach was customer oriented and found to be closer to the emerging notions of contemporary relational marketing. The brisk runners were found to be traditional marketing advocates, as their marketing approach is more 'library centred' than customer defined and is thus in line with 'product orientation', i.e. traditional marketing. 'Let the interested customers come to the library' appeared to be the hallmark of the slow walkers. Application of conscious market orientation is not reflected in the library activities of the slow walkers. Instead, their values, ideology and approach to serving the library customers are more in tune with the 'usual service-oriented Finnish way'. The implication of the research is that it pays to be market oriented, which results in higher customer satisfaction of libraries. Moreover, it is emphasised that the traditional user-based service philosophy of Finnish research libraries should not be abandoned, but needs to be further developed by building a relational marketing system which will help the libraries to become more efficient and effective from the customers' viewpoint. The contribution of the dissertation lies in the framework showing the linkages between the critical components of the marketing culture of a library: antecedents, market orientation, facilitators and consequences. The dissertation delineates the significant underlying dimensions of market-oriented behaviour of libraries, namely customer philosophy, inter-functional coordination, strategic orientation, responsiveness, pricing orientation and competition orientation. The dissertation also showed the extent to which marketing attitudes, behaviour and knowledge were related, and the impact of market orientation on the service performance of libraries. A strong positive association was found to exist between market orientation and marketing attitudes and knowledge.
Moreover, it also shows that a higher market orientation is positively connected with the service performance of libraries, the ultimate result being higher customer satisfaction. The analysis shows that a genuine marketing culture represents a synthesis of certain marketing attitudes, knowledge and selective practices. This finding is particularly significant in the sense that it manifests that marketing culture consists of a certain set of beliefs and knowledge (which form a specific attitude towards marketing) and the implementation of a certain set of activities that actually materialize the attitude of marketing into practice (market orientation), leading to superior service performance of libraries.
Abstract:
The junction temperatures of an inverter's IGBT module cannot be measured directly, so a real-time thermal model is needed to estimate them. The aim of this work is to develop for this purpose a solution implemented in C that is sufficiently accurate and at the same time as computationally efficient as possible. The software implementation must also be suitable for different module types and, when necessary, take into account the mutual heating effect of the other chips in the same module. Based on a literature review, a model based on a thermal impedance matrix is selected from the existing thermal models as the basis for the practical implementation. An s-domain simulation model of the thermal impedance matrix is built in Simulink and used as a reference, among other things, for verifying the accuracy of the implementation. The thermal model needs information about the inverter's losses, so different alternatives for loss calculation are surveyed in the work. The development of the thermal model from the s-domain model into finished C code is described in detail. First, the s-domain model is discretized into the z-domain. The z-domain transfer functions are in turn converted into first-order difference equations. The multi-rate thermal model developed in this work is obtained by distributing the first-order difference equations across different time levels for execution, according to the update rate required by the term each equation describes. At best, such an implementation can consume less than a fifth of the clock cycles of a straightforward single-rate implementation. The accuracy of the implementation is good. The execution times required by the implementation were tested on a Texas Instruments TMS320C6727 processor (300 MHz). Computing the example model was determined to consume only 0.4% of the processor's clock cycles with the inverter operating at a 5 kHz switching frequency. The accuracy of the implementation and its low demand on computing capacity make it possible to use the thermal model for thermal protection and to add it as part of an existing system already running on the processor.
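The core numerical step described above, converting a continuous-time thermal impedance into a first-order difference equation, can be illustrated with a minimal sketch. This is not the thesis's C implementation; it is a Python toy for a single RC pair of a Foster-type thermal network (the model tau * dT/dt + T = R * P), discretized with backward Euler, with parameter values chosen only for illustration:

```python
def foster_step(T_prev, P, R, tau, dt):
    """One backward-Euler step of a single Foster RC pair.

    Continuous model: tau * dT/dt + T = R * P
    Discretized:      T[k] = a * T[k-1] + b * P[k]
    with a = tau / (tau + dt) and b = R * dt / (tau + dt).
    """
    a = tau / (tau + dt)      # decay coefficient of the stored temperature
    b = R * dt / (tau + dt)   # gain applied to the current power loss
    return a * T_prev + b * P

# Example: constant 100 W loss, R = 0.05 K/W, tau = 0.1 s.
# The temperature rise settles toward the steady state R * P = 5.0 K.
T = 0.0
for _ in range(10000):
    T = foster_step(T, 100.0, R=0.05, tau=0.1, dt=0.001)
```

A real IGBT module model sums several such pairs per chip (plus cross-coupling terms for neighbouring chips); the multi-rate idea in the work amounts to stepping the slow pairs (large tau) less often than the fast ones.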
Abstract:
In this Master's thesis, the acquisition of a warehouse management system was carried out by processing information in a controlled manner in order to develop warehouse management. The qualitative case study was conducted from a knowledge management perspective using the user-centred C-CEI method, by interviewing and observing the case company. Ten interviews were conducted. The study surveyed domestic system vendors and put the acquisition out to tender; 15 vendors were identified, of which 6 met the requirements and submitted an offer. At the end of the study, a system acquisition proposal was made. The theoretical framework of the study consisted of knowledge management, warehouse management and software engineering. The main research question was how knowledge management can support the acquisition of a warehouse management system. The study found that acquiring an information system for warehouse management is a multidimensional business development task. An information system acquisition should therefore be seen as an opportunity that drives the business forward. In addition, mapping the end users' requirements should be committed to with care, and the different system alternatives should be viewed critically. Setting the requirement specification and prioritizing the requirements are challenging tasks. During the information management phase it was found that a request for quotation based on the end users' requirements facilitates not only the preparation of offers but also their processing and analysis. Only the right information and its connections matter. Based on the study, it is recommended that an information system acquisition be carried out by managing information systematically, particularly during its early phase. Knowledge management streamlines the progress and scheduling of the project, reduces the end user's costs through an efficient tendering procedure, promotes competitiveness, and creates a good starting point for carrying through the entire information system acquisition and for building a cooperative relationship between the trading partners.
Abstract:
This Master's thesis explores how a global industrial corporation's after-sales service department should arrange its installed base management practices in order to maintain and utilize installed base information effectively. The case company has product-related records, such as a product's lifecycle information, service history information and information about a product's performance. This information is often collected and organized case by case, so the systematic and effective use of installed base information is difficult, and an overview of the installed base is missing. The goal of the thesis study was to find out how the case company can improve its installed base maintenance and management practices and improve the availability and reliability of installed base information. Installed base information management practices were first examined through the literature. The empirical research was conducted through interviews and a questionnaire survey targeted at the case company's service department. The research purpose was to find out the challenges related to the service department's information management practices. The study also identified the installed base information needs and the improvement potential in the availability of information. Based on the empirical research findings, recommendations for improving installed base management practices and information availability were created. On the basis of these recommendations, the following actions are proposed for the case company: developing the service reports, improving the change management process, ensuring the quality of the product documentation in the early stages of the product life cycle, and deciding to improve installed base management practices.
Abstract:
This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) are needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources. We approach three different resource constrained problems. The first is the tool switching and loading problem, and occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient, yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out that this is a specialized formulation of the MINMAX resource allocation formulation of the apportionment problem and it can be solved efficiently and optimally. We construct an exact algorithm specialized for the nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed. 
We provide a formal proof that this problem is of the same complexity as the turnpike problem (a well studied geometric optimization problem), and provide a heuristic algorithm for this problem.
Abstract:
The purpose of this work is to develop the short-term demand forecasting process at VAASAN Oy, where some of the products are manufactured based on demand forecasts. The impossibility of holding stock due to the nature of the manufactured products, the high delivery reliability target, and the large number of forecasts required place great demands on the demand forecasting process. The theoretical part of the work covers the need for demand forecasting, the uses of forecasts, and demand forecasting methods. Demand forecasting alone, however, does not lead to an optimal outcome for the supply chain; comprehensive demand management is needed. This is a process whose goal is to balance the capabilities of the supply chain and the customers' requirements with each other as efficiently as possible. The work examined forecasts produced in the company over a three-month period with the exponential smoothing method, as well as the adjustments the forecasters made to them. Based on the study, the optimal exponential smoothing alpha coefficient is 0.6. The forecasters' adjustments to the statistical forecasts improved forecast accuracy, and they were particularly effective in minimizing delivery shortfalls. In addition, as a result of the work, the forecasters were provided with a number of tools that facilitate and automate daily routines.
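The statistical method referred to above, simple exponential smoothing with alpha = 0.6, can be sketched in a few lines. This is a generic textbook formulation, not the company's actual forecasting system, and the demand figures below are invented for illustration:

```python
def exp_smooth_forecast(history, alpha=0.6):
    """Simple exponential smoothing: the next-period forecast is
    alpha * latest observation + (1 - alpha) * previous forecast."""
    if not history:
        raise ValueError("need at least one observation")
    forecast = history[0]              # initialise with the first observation
    for y in history[1:]:
        forecast = alpha * y + (1 - alpha) * forecast
    return forecast

demand = [120, 135, 128, 140, 150]     # illustrative daily demand
next_forecast = exp_smooth_forecast(demand)
```

A larger alpha (such as the 0.6 found optimal in the study) weights recent demand heavily, which suits short-term forecasting of products that cannot be stocked.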
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how the techniques can be implemented efficiently. The contributions of this thesis are as follows.
First, we develop RankRLS, a computationally efficient kernel method for learning to rank, that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results, Part II consists of the five original research articles that are the main contribution of this thesis.
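The leave-pair-out cross-validation idea discussed above can be illustrated with a minimal, model-agnostic sketch. This is a naive reference implementation (retraining once per held-out pair), not the thesis's computationally efficient shortcut; `train_and_score` is a placeholder name for whatever learner is plugged in:

```python
from itertools import product

def leave_pair_out_auc(X, y, train_and_score):
    """Leave-pair-out cross-validation estimate of AUC.

    For every (positive, negative) pair, hold the pair out, train on the
    remaining examples, and check whether the positive example receives
    the higher score (ties count as 0.5).  `train_and_score(train_X,
    train_y, test_X)` is assumed to return scores for `test_X`.
    """
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    correct = 0.0
    for i, j in product(pos, neg):
        train_idx = [k for k in range(len(y)) if k not in (i, j)]
        scores = train_and_score([X[k] for k in train_idx],
                                 [y[k] for k in train_idx],
                                 [X[i], X[j]])
        if scores[0] > scores[1]:
            correct += 1.0
        elif scores[0] == scores[1]:
            correct += 0.5
    return correct / (len(pos) * len(neg))

# Toy check with a "scorer" that just returns the 1-D feature value,
# ignoring the training fold entirely:
def identity_scorer(train_X, train_y, test_X):
    return list(test_X)

auc = leave_pair_out_auc([3.0, 2.0, 1.0, 0.0], [1, 1, 0, 0], identity_scorer)
```

The naive loop retrains the model once per pair; the point of the matrix-algebra shortcuts developed in the thesis is to avoid exactly this quadratic retraining cost.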
Abstract:
The purpose of this study was to evaluate the uniformity of distribution coefficient (UDC) and the coefficient of variation (CV) of a family-scale drip irrigation kit, classifying it according to the ASAE standard. Uniformity under irrigation and fertigation was determined by two methods: KELLER & KARMELI and DENÍCULI. The two experiments were subjected to varying pressures (12, 14, 16 and 18 kPa) in a completely randomized design of twenty flow samples with three replications. Urea, potassium chloride (KCl) and monoammonium phosphate (MAP) were the elements used for fertigation. The system consisted of a 200 L tank, which supplied another container of 30 L that was moved vertically to control the pressure. The data were statistically compared between treatments for each methodology. Under fertigation the best pressure was 16 kPa, classified as "excellent" for UDC (91.03%) and "marginal" for CV (7.47%). For the irrigation treatment, the best pressure was 16 kPa, rated "excellent" for UDC (91.2%) and "marginal" for CV (7.68%). The DENÍCULI et al. (1980) methodology proved more reliable for the evaluation of drip systems. It was observed that this kit has good uniformity of distribution, but with great variability in flows.
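The two statistics reported above are standard and easy to sketch. The following is an illustrative Python version, assuming the common lowest-quarter definition of distribution uniformity and the sample coefficient of variation; the emitter flows listed are invented, not data from the study:

```python
import statistics

def udc(flows):
    """Uniformity of distribution coefficient, in %: mean of the lowest
    quarter of emitter flows divided by the overall mean flow."""
    ordered = sorted(flows)
    quarter = ordered[:max(1, len(ordered) // 4)]
    return 100.0 * statistics.mean(quarter) / statistics.mean(flows)

def cv(flows):
    """Coefficient of variation of emitter flows, in %:
    sample standard deviation divided by the mean."""
    return 100.0 * statistics.stdev(flows) / statistics.mean(flows)

flows = [1.9, 2.0, 2.1, 2.0, 1.8, 2.2, 2.0, 1.9]  # L/h, illustrative only
```

Higher UDC and lower CV indicate more even water (or fertilizer) delivery across emitters, which is what the ASAE classification grades.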
Abstract:
Due to the lack of maximum rainfall equations for most locations in Mato Grosso do Sul State, hydraulic engineering projects have had to rely on information from the meteorological stations closest to the project site. Alternative methods, such as the 24-hour rainfall disaggregation method applied to daily rain gauge data, can work, owing to the greater availability of stations and longer observation records. Based on this approach, the objective of this study was to estimate maximum rainfall equations for Mato Grosso do Sul State by fitting the 24-hour rainfall disaggregation method to data obtained from the recording rain gauge stations of Dourado and Campo Grande. For this purpose, data from 105 rainfall stations were used, which are available in the database of ANA (the National Water Agency). Based on the results we concluded that the intense rainfall equations obtained by pluviogram analysis showed determination coefficients above 99%, and that the performance of the 24-hour rainfall disaggregation method was classified as excellent, based on the relative average error and the WILLMOTT (1982) concordance index.
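The disaggregation technique referred to above amounts to multiplying a 24-hour maximum depth by fixed duration ratios. The sketch below shows the mechanics only; the ratio values are assumptions for illustration, not the coefficients calibrated in this study, which must be fitted locally:

```python
# Illustrative disaggregation ratios (assumed for this sketch only):
# fraction of the 24 h maximum depth attributed to each shorter duration.
RATIOS = {"1h": 0.42, "6h": 0.72, "12h": 0.85, "24h": 1.0}

def disaggregate(p24h_mm):
    """Split a 24 h maximum rainfall depth (mm) into shorter-duration
    maxima using fixed ratios, rounded to 0.1 mm."""
    return {dur: round(r * p24h_mm, 1) for dur, r in RATIOS.items()}
```

Given such shorter-duration maxima for a range of return periods, an intensity-duration-frequency equation can then be fitted, which is the end product the study evaluates.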
Abstract:
Precision irrigation seeks to establish strategies which achieve an efficient ratio between the volume of water used (reduction in input) and the productivity obtained (increase in production). There are several studies in the literature on strategies for achieving this efficiency, such as those dealing with the method of volumetric water balance (VWB). However, it is also of great practical and economic interest to set up versatile implementations of irrigation strategies that: (i) maintain the performance obtained with other implementations, (ii) rely on few computational resources, (iii) adapt well to field conditions, and (iv) allow easy modification of the irrigation strategy. In this study, such characteristics are achieved when using an Artificial Neural Network (ANN) to determine the period of irrigation for a watermelon crop in the Irrigation Perimeter of the Lower Acaraú, in the state of Ceará, Brazil. The Volumetric Water Balance was taken as the standard for comparing the management carried out with the proposed implementation of ANN. The statistical analysis demonstrates the effectiveness of the proposed management, which is able to replace VWB as a strategy in automation.