9 results for INVARIANT-MANIFOLDS

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

20.00%

Publisher:

Abstract:

Perceiving the world visually is a basic act for humans, but for computers it is still an unsolved problem. The variability present in natural environments is an obstacle for effective computer vision. The goal of invariant object recognition is to recognise objects in a digital image despite variations in, for example, pose, lighting or occlusion. In this study, invariant object recognition is considered from the viewpoint of feature extraction. The differences between local and global features are studied, with emphasis on Hough transform and Gabor filtering based feature extraction. The methods are examined with respect to four capabilities: generality, invariance, stability, and efficiency. Invariant features are presented using both the Hough transform and Gabor filtering. A modified Hough transform technique is also presented in which distortion tolerance is increased by incorporating local information. In addition, methods for decreasing the computational cost of the Hough transform by employing parallel processing and local information are introduced.
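As a minimal sketch of the Gabor-filter side of this kind of feature extraction (not the thesis's exact features; kernel sizes, wavelengths and the sorting trick below are illustrative assumptions only), a small filter bank can be built and summarized with NumPy/SciPy:

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(size=21, sigma=4.0, theta=0.0, wavelength=8.0):
        """Real-valued Gabor kernel: a Gaussian envelope times an oriented cosine."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
        carrier = np.cos(2 * np.pi * xr / wavelength)
        return envelope * carrier

    def gabor_features(image, n_orientations=8):
        """Mean absolute filter response per orientation; sorting the vector gives
        a simple tolerance to global rotations of the input."""
        responses = []
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            r = convolve2d(image, gabor_kernel(theta=theta), mode="same")
            responses.append(np.abs(r).mean())
        return np.sort(np.array(responses))

    # Usage: features = gabor_features(np.random.rand(64, 64))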

Relevance:

20.00%

Publisher:

Abstract:

The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find them difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps of adding code and proving, after each addition, that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem, with the aid of the tool, into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses. Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
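As a loose, purely illustrative analogue of the invariant-first workflow in ordinary Python (Socos itself works on invariant diagrams and discharges the verification conditions with PVS/Yices; runtime assertions below merely stand in for those machine-checked proofs), the invariant of a small sorting loop can be written down first and checked after every step:

    def insertion_sort(xs):
        xs = list(xs)
        for i in range(1, len(xs)):
            # Invariant established so far: the prefix xs[:i] is sorted.
            assert all(xs[k] <= xs[k + 1] for k in range(i - 1))
            j = i
            while j > 0 and xs[j - 1] > xs[j]:    # insert xs[i] into the sorted prefix
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
        # Postcondition: the whole list is sorted.
        assert all(xs[k] <= xs[k + 1] for k in range(len(xs) - 1))
        return xs

    print(insertion_sort([3, 1, 2]))   # [1, 2, 3]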

Relevance:

10.00%

Publisher:

Abstract:


Relevance:

10.00%

Publisher:

Abstract:

In recent years, evolutionary algorithms have proven to be effective methods for solving global optimization problems. Their particular strengths are general applicability and the ability to find a global solution without getting stuck in local optima of the objective function. The goal of this work is to develop a new, normal-distribution-based mutation operation for the differential evolution algorithm, one of the most recent evolution-based optimization algorithms. The new method is expected to further reduce both the risk of premature convergence of the population and the risk of the algorithm's states stagnating, and it can be shown theoretically to converge. This does not hold for the original differential evolution, whose state transitions have been shown to stagnate with a small probability. The behaviour of the new method is examined experimentally using multi-constrained problems as test cases. The constraint functions are handled with a method developed by Jouni Lampinen, based on the principle of Pareto optimality; this also yields further experimental evidence on the behaviour of that method. All the test problems used could be solved both with the original differential evolution and with the version using the new mutation operation. The new method, however, proved more reliable in cases where the original algorithm had difficulties, and most of the problems could be solved reliably with a smaller population size than with the original differential evolution. The new method also better supports control parameter settings that make the search rotationally invariant. Computationally, it is somewhat heavier than the original differential evolution and requires one additional control parameter, but values were determined for the new control parameters that are as generally applicable as possible, allowing a wide range of problems to be solved with them.
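A NumPy sketch of the general shape of such an algorithm is given below; it is not the thesis's operator, only a generic differential evolution loop in which a normal-distribution perturbation (scaled by a random difference vector) stands in for the usual mutation, with placeholder control-parameter values:

    import numpy as np

    def de_gaussian(objective, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        dim = len(lo)
        pop = rng.uniform(lo, hi, size=(pop_size, dim))
        fit = np.array([objective(x) for x in pop])
        for _ in range(iters):
            for i in range(pop_size):
                others = [k for k in range(pop_size) if k != i]
                a, b, c = pop[rng.choice(others, 3, replace=False)]
                sigma = F * np.abs(b - c)                       # scale from a random pair
                mutant = a + rng.normal(0.0, sigma + 1e-12, size=dim)  # Gaussian mutation
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True                 # keep at least one component
                trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
                f = objective(trial)
                if f <= fit[i]:                                  # greedy selection
                    pop[i], fit[i] = trial, f
        best = np.argmin(fit)
        return pop[best], fit[best]

    # Usage: minimise the sphere function in 2-D
    # x, f = de_gaussian(lambda x: np.sum(x**2), bounds=[(-5, 5), (-5, 5)])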

Relevance:

10.00%

Publisher:

Abstract:

Conservation laws in physics are numerical invariants of the dynamics of a system. In cellular automata (CA), a similar concept has already been defined and studied. To each local pattern of cell states a real value is associated, interpreted as the “energy” (or “mass”, or . . . ) of that pattern. The overall “energy” of a configuration is simply the sum of the energies of the local patterns appearing at different positions in the configuration. We have a conservation law for that energy if the total energy of each configuration remains constant during the evolution of the CA. For a given conservation law, it is desirable to find microscopic explanations for the dynamics of the conserved energy in terms of flows of energy from one region toward another. Often the energy values are non-negative integers, interpreted as the number of “particles” distributed on a configuration. In such cases, it is conjectured that one can always provide a microscopic explanation for the conservation laws by prescribing rules for the local movement of the particles. The one-dimensional case has already been solved by Fukś and Pivato. We extend this to two-dimensional cellular automata with radius-0.5 neighborhood on the square lattice. We then consider conservation laws in which the energy values are chosen from a commutative group or semigroup. In this case, the class of all conservation laws for a CA forms a partially ordered hierarchy. We study the structure of this hierarchy and prove some basic facts about it. Although the local properties of this hierarchy (at least in the group-valued case) are tractable, its global properties turn out to be algorithmically inaccessible. In particular, we prove that it is undecidable whether this hierarchy is trivial (i.e., whether the CA has any non-trivial conservation law at all) or unbounded. We point out some interconnections between the structure of this hierarchy and the dynamical properties of the CA. We show that positively expansive CA do not have non-trivial conservation laws. We also investigate a curious relationship between conservation laws and invariant Gibbs measures in reversible and surjective CA. Gibbs measures are known to coincide with the equilibrium states of a lattice system defined in terms of a Hamiltonian. For reversible cellular automata, each conserved quantity may play the role of a Hamiltonian, and provides a Gibbs measure (or a set of Gibbs measures, in case of phase multiplicity) that is invariant. Conversely, every invariant Gibbs measure provides a conservation law for the CA. For surjective CA, the former statement also follows (in a slightly different form) from the variational characterization of the Gibbs measures. For one-dimensional surjective CA, we show that each invariant Gibbs measure provides a conservation law. We also prove that surjective CA almost surely preserve the average information content per cell with respect to any probability measure.
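A small, self-contained example of such a particle-conserving CA (a textbook case, not one of the thesis's results) is elementary rule 184, the “traffic” rule: particles (1-cells) move to the right when the next cell is empty, so their number is invariant under every step:

    import numpy as np

    def step_rule184(config):
        """One synchronous update of elementary rule 184 on a cyclic configuration."""
        left = np.roll(config, 1)
        right = np.roll(config, -1)
        # A cell becomes 1 if a particle moves in from the left (left=1, cell=0)
        # or stays because the cell to its right is occupied (cell=1, right=1).
        return ((left == 1) & (config == 0)) | ((config == 1) & (right == 1))

    rng = np.random.default_rng(0)
    config = rng.integers(0, 2, size=50)
    energy = config.sum()                      # the conserved "energy": particle count
    for _ in range(100):
        config = step_rule184(config).astype(int)
        assert config.sum() == energy          # the conservation law holds at every step
    print("particles conserved:", energy)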

Relevance:

10.00%

Publisher:

Abstract:

This master's thesis addresses the development of production planning and control for small-batch manufacturing in an industrial company. The subject is the Wind Power Generators business unit of ABB Oy, which manufactures standard products on a customer-order-driven basis. The thesis first presents the theory of production and production control, covering, in addition to basics such as definitions, goals and tasks, the production control process and the information technology used in production control. The empirical part that follows presents the means developed in this work for improving production control. The study was carried out as a combination of theoretical and empirical research. The theoretical work consisted of a review of Finnish and international literature; the empirical work was conducted as independent problem solving, comprising the analysis of development targets, the definition of more detailed development needs, and development carried out through experiments. The main objective of the study was to determine how developing production control can improve the productivity and profitability of the business unit in question. On the basis of the main objective, six sub-goals were formed: improving delivery reliability, raising the capacity utilization rate, developing capacity planning, shortening lead times, specifying the requirements for a new ERP system, and defining the production control process. For the first four sub-goals, software applications were built that enable the corresponding planning and control. For these applications, each product was given, for example, its chain of work phases with lead times, its load groups, the capacities of the load groups, the load each product places on them, and the critical tools. The work showed that information technology greatly assists production control: increased transparency, improved flow of information, simulation possibilities and graphical presentation make it easier to prepare various plans and thus improve the quality of decision making. The exploitation of information technology rests on the disciplined upkeep of basic production and transaction data, and for this reason information systems should be kept as simple as possible.
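As a toy illustration of the kind of capacity-loading calculation such planning applications rest on (the product names, load groups, hours and quantities below are invented, not taken from the thesis):

    # Hypothetical weekly capacities (hours) and per-unit loads (hours) per load group.
    weekly_capacity = {"winding": 120.0, "assembly": 200.0, "testing": 80.0}
    load_per_unit = {
        "generator_A": {"winding": 10.0, "assembly": 16.0, "testing": 6.0},
        "generator_B": {"winding": 14.0, "assembly": 20.0, "testing": 8.0},
    }
    orders = {"generator_A": 5, "generator_B": 3}

    load = {group: 0.0 for group in weekly_capacity}
    for product, qty in orders.items():
        for group, hours in load_per_unit[product].items():
            load[group] += qty * hours          # accumulate the load of all orders

    for group, cap in weekly_capacity.items():
        print(f"{group}: {load[group]:.0f} h of {cap:.0f} h "
              f"({100 * load[group] / cap:.0f} % utilisation)")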

Relevance:

10.00%

Publisher:

Abstract:

Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, the focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues: Structured derivations is a logic-based approach to teaching mathematics, where formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind, and has a simple syntax compared to many other popular languages. The aim of using it in instruction is to address algorithms and their implementation in a way that lets the focus be on learning algorithmic thinking and programming rather than on learning a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
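As a short illustration (not taken from the thesis) of the pedagogical point about Python, a simple algorithm can be written with its logical specification stated directly alongside the code, here as pre- and postcondition assertions around a binary search:

    def binary_search(xs, target):
        # Precondition: xs is sorted in non-decreasing order.
        assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))
        lo, hi = 0, len(xs)
        while lo < hi:
            # Invariant: if target occurs in xs, its first index lies in [lo, hi).
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        found = lo < len(xs) and xs[lo] == target
        # Postcondition: found is True exactly when target occurs in xs.
        assert found == (target in xs)
        return lo if found else -1

    print(binary_search([1, 3, 5, 7], 5))   # 2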

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods applied for calculating measures of solution stability and solving multicriteria discrete optimization problems. Despite numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or a set of optimal solutions) of the considered problems. Within the framework of the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of the problem solution can also be described in terms of robustness analysis. In this work the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players where the initial coefficients (costs) of linear payoff functions are subject to perturbations. The investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable for calculating stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems. The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. In order to illustrate the introduced ideas, a decision making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis create a good and relevant foundation for developing more complicated and integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
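A minimal sketch of a Wierzbicki-style achievement scalarizing function of the kind such an interactive procedure is parameterized by (the reference point, weights, candidate objective vectors and the augmentation coefficient rho below are illustrative assumptions, not the thesis's data):

    import numpy as np

    def achievement(f, reference, weights, rho=1e-4):
        """Scalarize an objective vector f against a reference point; smaller is better.
        The rho term is a small augmentation penalising weakly dominated solutions."""
        d = weights * (np.asarray(f) - np.asarray(reference))
        return d.max() + rho * d.sum()

    # Toy usage: pick, from a finite set of candidate objective vectors, the one that
    # best matches the decision maker's current reference point and weights.
    candidates = np.array([[3.0, 5.0], [4.0, 2.0], [6.0, 1.0]])
    ref, w = np.array([2.0, 2.0]), np.array([1.0, 1.0])
    best = min(candidates, key=lambda f: achievement(f, ref, w))
    print(best)   # the candidate whose worst weighted deviation from ref is smallest

Changing ref and w between iterations is what steers the search toward different regions of the Pareto front.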

Relevance:

10.00%

Publisher:

Abstract:

Feature extraction is the part of pattern recognition where the sensor data is transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the next stages of the system, while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used to detect features that are invariant to certain types of illumination change. Finally, classification tries to make decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play the main role in the analysis. In the embedded domain, the pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely affected by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated, in the form of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented. Inspired by some of the principles observed through the feature analysis of the Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which the LBPs are seen as combinations of n-tuples is also presented.
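A minimal software-only sketch of the basic 3x3 LBP operator referred to above (not the focal-plane implementation described in the thesis), together with the histogram descriptor typically built from the codes:

    import numpy as np

    def lbp_3x3(image):
        """Basic 8-bit LBP codes for the interior pixels of a 2-D grey-level image."""
        img = np.asarray(image, dtype=float)
        center = img[1:-1, 1:-1]
        # The eight neighbour offsets, visited in a fixed order around the centre.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros(center.shape, dtype=int)
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            codes += (neighbour >= center).astype(int) << bit   # threshold against centre
        return codes

    # The descriptor normally used for recognition is the histogram of the codes
    # over the image (or over image blocks), not the code image itself.
    image = np.random.randint(0, 256, size=(8, 8))
    histogram = np.bincount(lbp_3x3(image).ravel(), minlength=256)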