15 results for Notation musicale. catalane
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
A WCDMA base station (Node B) is part of the radio network of a UMTS system. Node B is an important network element whose purpose is to connect mobile users to the network. The Telecom software (TCOM SW) is responsible for a large part of Node B's functionality. Considerable resources are spent on testing TCOM SW in order to ensure the correct operation and quality of the software. System component testing is a testing phase in which a part (the system component, in this thesis TCOM SW) of the system (Node B) is tested before it is integrated into the rest of the system. This requires a test tool and the implementation of test cases. The Node B TTCN Tester (the tester) is a tool used for testing Node B software. Test cases are implemented using the TTCN test notation and executed with the tester. For the TCOM SW system component testing phase, new interfaces were added to the tester, making it possible to simulate Node B's ATM software as well as the WPA and WTR units. In this thesis, TTCN test cases were implemented for the new interfaces. The test cases made the TCOM SW system component testing phase independent of Node B's ATM software and of the WPA and WTR units. In addition, testing TCOM SW behaviour at these interfaces can from now on be done automatically. The operation of the test cases was verified using the tester. The results were good: the new test cases and TTCN interfaces worked correctly, increasing the efficiency of testing.
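Although the thesis implements its test cases in TTCN, the idea of decoupling component testing through simulated peers can be sketched in a few lines of Python. The message names, fields and stub behaviour below are purely illustrative assumptions, not the real Node B protocol:

```python
# A minimal Python sketch (the thesis uses TTCN, not Python) of a simulated
# peer: a hypothetical WPA unit stub answers TCOM SW's messages so that the
# component can be tested in isolation. Message names are invented.

def wpa_stub(request: dict) -> dict:
    """Respond to messages the way a real WPA unit would."""
    if request["type"] == "SETUP_REQ":
        return {"type": "SETUP_ACK", "channel": request["channel"]}
    if request["type"] == "RELEASE_REQ":
        return {"type": "RELEASE_ACK", "channel": request["channel"]}
    return {"type": "ERROR", "reason": "unexpected message"}

def test_channel_setup():
    """One test case: request a channel and expect the acknowledgement."""
    reply = wpa_stub({"type": "SETUP_REQ", "channel": 7})
    assert reply == {"type": "SETUP_ACK", "channel": 7}

test_channel_setup()
print("test_channel_setup passed")
```

In the actual setup the tester plays this stub role over the new TTCN interfaces, which is what makes TCOM SW testable without the ATM software or the WPA and WTR units.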
Abstract:
The need for image-data compression has become ever more evident during the last ten years with the growth of applications based on image data. Particular attention is nowadays paid to spectral images, whose storage and transmission require large amounts of disk space and bandwidth. The wavelet transform has proven to be a good solution for lossy data compression. Its implementation in subband coding is based on wavelet filters, and the problem is choosing a suitable wavelet filter for the different kinds of images to be compressed. This thesis presents a survey of compression methods based on the wavelet transform. The emphasis of the work is on determining orthogonal filters through parametrization. The thesis also establishes, by means of algebraic equations, the equivalence of two different approaches. The experimental part contains a set of tests that justify the need for parametrization: different images require different filters, and different compression ratios are achieved with different filters. Finally, compression of spectral images using the wavelet transform is implemented.
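The core of filter parametrization can be conveyed with a short sketch. The fragment below uses one well-known single-angle family of length-4 orthonormal filters; whether the thesis uses this exact family is not stated in the abstract, so treat it as an assumption. The point is that varying the angle yields different valid filters, which is what makes per-image filter selection meaningful:

```python
import numpy as np

# A sketch of filter parametrization: a standard single-angle family of
# length-4 orthonormal (scaling) filters. The exact family used in the
# thesis is an assumption here.

def orthogonal_filter(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([1 + c + s, 1 - c + s, 1 - c - s, 1 + c - s]) / (2 * np.sqrt(2))

for theta in (0.3, 1.0, np.pi / 3):
    h = orthogonal_filter(theta)
    assert abs(h @ h - 1.0) < 1e-12                 # unit norm
    assert abs(h[0] * h[2] + h[1] * h[3]) < 1e-12   # double-shift orthogonality
    print(np.round(h, 4))
```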
Abstract:
This master's thesis belongs to the research area of telecommunication network planning and fundamentally concerns network modelling. Telecommunication network planning is a complex and demanding problem that involves intricate and time-consuming tasks. This thesis introduces a "multilayer network model" whose purpose is to help network planners cope with the complexity of the problem and to reduce the time spent on network planning. The multilayer network model is based on generic objects that are common to all telecommunication networks. This makes the model applicable to arbitrary networks, regardless of network-specific properties or of the technologies used to implement the network. The model defines a precise terminology and uses three concepts: plane separation, layering and partitioning. These concepts are described in detail in this work. The internal structure and behaviour of the multilayer network model are defined using the Unified Modelling Language (UML) notation. This work presents the model's use case, package and class diagrams. The thesis also presents results obtained by comparing the multilayer network model with other network models. The results show that the multilayer network model has advantages over the other models.
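The layering concept in particular lends itself to a small illustration. The Python sketch below shows generic objects in which a link on one layer is realized by a path of links on the layer below; the class names are assumptions based on the abstract, not the thesis's actual UML model:

```python
from dataclasses import dataclass, field

# A rough sketch of generic, technology-independent network objects.

@dataclass
class Node:
    name: str

@dataclass
class Link:
    ends: tuple                                       # (Node, Node)
    realized_by: list = field(default_factory=list)   # path on the layer below

@dataclass
class Layer:
    name: str
    nodes: list
    links: list

# Layering: one IP link is carried by a two-hop path of optical links.
a, b, x = Node("A"), Node("B"), Node("X")
optical = Layer("optical", [a, x, b], [Link((a, x)), Link((x, b))])
ip = Layer("IP", [a, b], [Link((a, b), realized_by=optical.links)])
print(len(ip.links[0].realized_by))  # 2
```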
Abstract:
This master's thesis deals with controlling access to personal information and with describing that information. In the practical part of the work, an XML model for describing personal information was designed. Using personal information makes it possible to offer personalized services and to automate services for the user. Describing personal information is essential so that services can query and understand the information. Personal information is affected by various factors, which must also be taken into account when the information is described. The spread of personal information to different service providers also brings risks: personal information falling into the wrong hands may cause serious problems for its owner. For the safe and reliable use of personal information, it is therefore essential that users can control to whom they disclose which pieces of information.
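As a rough illustration of the kind of description the practical part aims at, the following Python fragment builds a small XML profile in which each piece of personal information carries an access-control attribute. The element and attribute names are illustrative assumptions, not the schema designed in the thesis:

```python
import xml.etree.ElementTree as ET

# A sketch of an XML description of personal information in which every
# item carries an access-control attribute (names are invented).

profile = ET.Element("profile", owner="alice")

address = ET.SubElement(profile, "address", visibility="trusted-services")
ET.SubElement(address, "city").text = "Lappeenranta"

phone = ET.SubElement(profile, "phone", visibility="owner-only")
phone.text = "+358 00 000 0000"

print(ET.tostring(profile, encoding="unicode"))
```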
Abstract:
Electronic auctions are virtual marketplaces located somewhere on the Internet. Electronic auctions are conducted between businesses (B2B), between businesses and consumers (B2C), and among consumers (C2C). In this work, electronic auction refers to the first of these: trade between businesses. The purpose of the work is to study the suitability of a workflow engine as the engine of an electronic auction system. The work examines the open-source ActiveBPEL engine, and the study is carried out by designing, modelling and testing a business process that registers the buyer's and seller's information in the system. The implemented process is one part of an electronic auction, but following the same principle it would be possible to implement an entire auction. This work considers an electronic auction that is based on web services and has a clear coordinator. The coordinator controls the other participating web services and the operations they execute. High-level models are described using the BPMN notation, while the process itself is implemented in the BPEL language. The ActiveBPEL Designer tool is used for modelling and simulating the process. The goal of the work is not only to implement part of the auction, but also to give the reader an understanding of the business environment to which the auction belongs and to shed light on the technologies behind the auction. In particular, web services and the related concepts will become familiar to the reader.
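The coordinator idea can be hinted at with a plain-Python sketch; the real process is written in BPEL and run on ActiveBPEL, and the service and field names below are illustrative assumptions:

```python
# A plain-Python sketch of the coordinator behind the registration process:
# receive a registration request, invoke a validation "service", store the
# result, and reply. Names are invented for illustration.

REGISTRY = []

def validate_party(party: dict) -> bool:
    """Stand-in for a validation web service invoked by the coordinator."""
    return bool(party.get("name")) and bool(party.get("business_id"))

def register(party: dict) -> str:
    """Coordinator logic: orchestrate validation and storage, then reply."""
    if not validate_party(party):
        return "REJECTED"
    REGISTRY.append(party)
    return "REGISTERED"

print(register({"name": "Acme Oy", "business_id": "1234567-8"}))  # REGISTERED
print(register({"name": ""}))                                     # REJECTED
```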
Abstract:
The skill of programming is a key asset for every computer science student. Many studies have shown that this is a hard skill to learn and that the outcomes of programming courses have often been substandard. Thus, a range of methods and tools have been developed to assist students' learning processes. One of the biggest fields in computer science education is the use of visualizations as a learning aid, and many visualization-based tools have been developed to aid the learning process during the last few decades. The studies conducted in this thesis focus on two different visualization-based tools, TRAKLA2 and ViLLE. This thesis includes results from multiple empirical studies about the effects that the introduction and usage of these tools have on students' opinions and performance, and about the implications from a teacher's point of view. The results from the studies in this thesis show that students preferred to do web-based exercises and felt that those exercises contributed to their learning. The usage of the tool motivated students to work harder during their course, which was shown in overall course performance and drop-out statistics. We have also shown that visualization-based tools can be used to enhance the learning process, and that one of the key factors is a higher and active level of engagement (see the Engagement Taxonomy by Naps et al., 2002). The automatic grading, accompanied by immediate feedback, helps students to overcome obstacles during the learning process and to grasp the key element of the learning task. These kinds of tools can help us to cope with the fact that many programming courses are overcrowded and have limited teaching resources. The tools allow us to tackle this problem by utilizing automatic assessment in exercises that are most suitable to be done on the web (like tracing and simulation), since this supports students' independent learning regardless of time and place. In summary, we can use a course's resources more efficiently to increase the quality of the learning experience of the students and the teaching experience of the teacher, and even to increase the performance of the students. There are also methodological results from this thesis which contribute to developing insight into the conduct of empirical evaluations of new tools or techniques. When we evaluate a new tool, especially one accompanied by visualization, we need to give a proper introduction to it and to the graphical notation used by the tool. The standard procedure should also include capturing the screen with audio to confirm that the participants of the experiment are doing what they are supposed to do. By taking such measures in studies of the learning impact of visualization support, we can avoid drawing false conclusions from our experiments. As computer science educators, we face two important challenges. Firstly, we need to start delivering the message, in our own institutions and all over the world, about new, scientifically proven innovations in teaching such as TRAKLA2 and ViLLE. Secondly, we have relevant experience of conducting teaching-related experiments, and thus we can help our colleagues learn the essential know-how of research-based improvement of their teaching. Such work can transform academic teaching into publications, and by utilizing this approach we can significantly increase the adoption of new tools and techniques and the overall knowledge of best practices.
In the future, we need to combine our forces and tackle these universal and common problems together by creating multi-national and multi-institutional research projects. We need to create a community and a platform in which we can share these best practices and at the same time conduct multi-national research projects easily.
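As an illustration of automatic assessment with immediate feedback, the toy Python sketch below grades a student's trace of one bubble-sort pass against a reference simulation. The exercise format is an assumption for illustration; it is not TRAKLA2's or ViLLE's actual implementation:

```python
# A toy sketch of automatic assessment with immediate feedback for a
# tracing exercise: compare the student's states to a reference simulation.

def bubble_pass(xs):
    """Reference simulation: list state after each comparison."""
    xs, states = list(xs), []
    for i in range(len(xs) - 1):
        if xs[i] > xs[i + 1]:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        states.append(list(xs))
    return states

def grade(student_states, data):
    expected = bubble_pass(data)
    for step, (got, want) in enumerate(zip(student_states, expected), 1):
        if got != want:
            return f"Step {step}: expected {want}, got {got}"
    return "Correct!"

print(grade([[3, 5, 1], [5, 1, 3]], [5, 3, 1]))
# Step 2: expected [3, 1, 5], got [5, 1, 3]
```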
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps of adding code, proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools that assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for the total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem, with the aid of the tool, into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula.
Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses. Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
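The flavour of the workflow can be conveyed with a toy sketch. Below, the invariant is written first and the code is checked against it; note that this Python version merely asserts the conditions at run time, whereas Socos derives the corresponding verification conditions and proves them statically with PVS and Yices:

```python
# A toy, runtime-checked rendering of invariant-based programming: the
# invariant comes first, and each addition of code must preserve it.

def sum_of(xs: list) -> int:
    i, total = 0, 0
    while True:
        # Invariant: total == sum of xs[0:i] and 0 <= i <= len(xs)
        assert total == sum(xs[:i]) and 0 <= i <= len(xs)
        if i == len(xs):
            break
        total += xs[i]
        i += 1
    assert total == sum(xs)  # postcondition follows from invariant and exit
    return total

print(sum_of([1, 2, 3]))  # 6
```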
Abstract:
Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland, the high school curriculum does not include CS as a subject; instead, the focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize the application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues. Structured derivations is a logic-based approach to teaching mathematics in which formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation, while at the same time becoming more confident with formalisms. The Python programming language was originally designed with education in mind and has a simple syntax compared to many other popular languages. The aim of using it in instruction is to address algorithms and their implementation in a way that allows the focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
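As a small example of the first approach, a derivation in the spirit of structured derivations makes every step and its justification explicit; the exact format used in the work may differ from this sketch:

```latex
% A derivation in the spirit of structured derivations: each step carries
% an explicit justification in braces.
\begin{align*}
          &\quad 3x + 6 = 0 \\
  \equiv{}&\quad \{\ \text{subtract $6$ from both sides}\ \} \\
          &\quad 3x = -6 \\
  \equiv{}&\quad \{\ \text{divide both sides by $3$}\ \} \\
          &\quad x = -2
\end{align*}
```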
Abstract:
Software systems are expanding and becoming increasingly present in everyday activities. A constantly evolving society demands that they deliver more functionality, are easy to use, and work as expected. All these challenges increase the size and complexity of a system. People may not be aware of the presence of a software system until it malfunctions or even fails to perform. The concept of being able to depend on the software is particularly significant when it comes to critical systems. At this point the quality of a system is regarded as an essential issue, since any deficiencies may lead to considerable financial loss or endangerment of life. Traditional development methods may not ensure a sufficiently high level of quality. Formal methods, on the other hand, allow us to achieve a high level of rigour and can be applied to develop a complete system or only a critical part of it. Such techniques, applied during system development starting at the early design stages, increase the likelihood of obtaining a system that works as required. However, formal methods are sometimes considered difficult to utilise in traditional development. Therefore, it is important to make them more accessible and to reduce the gap between formal and traditional development methods. This thesis explores the usability of rigorous approaches by giving an insight into formal designs with the use of graphical notation. The understandability of formal modelling is increased by a compact representation of the development and the related design decisions. The central objective of the thesis is to investigate the impact that rigorous approaches have on the quality of developments. This means that it is necessary to establish techniques for the evaluation of rigorous developments. Since we study various development settings and methods, specific measurement plans and a set of metrics need to be created for each setting. Our goal is to provide methods for collecting data and recording evidence of the applicability of rigorous approaches. This would support organisations in making decisions about integrating formal methods into their development processes. It is important to control software development, especially in its initial stages. Therefore, we focus on the specification and modelling phases, as well as on related artefacts, e.g. models, since these have a significant influence on the quality of the final system. Since the application of formal methods may increase the complexity of a system, it may affect its maintainability, and thus its quality. Our goal is to leverage the quality of a system via metrics and measurements, as well as via generic refinement patterns, which are applied to a model and a specification. We argue that these can facilitate the process of creating software systems by, for example, controlling complexity and providing modelling guidelines. Moreover, we regard them as additional mechanisms for quality control and improvement, also for rigorous approaches. The main contribution of this thesis is to provide metrics and measurements that help in assessing the impact of rigorous approaches on developments. We establish techniques for the evaluation of certain aspects of quality, based on the structural, syntactical and process-related characteristics of early-stage development artefacts, i.e. specifications and models. The presented approaches are applied to various case studies, and the results of the investigation are juxtaposed with the perception of domain experts.
It is our aspiration to promote measurements as an indispensable part of the quality control process and as a strategy towards quality improvement.
Abstract:
This paper presents the design for a graphical parameter editor for Testing and Test Control Notation 3 (TTCN-3) test suites. This work was done in the context of OpenTTCN IDE, a TTCN-3 development environment built on top of the Eclipse platform. The design presented relies on an additional parameter editing tab added to the launch configurations for test campaigns. This parameter editing tab shows the list of editable parameters and allows opening editing components for the different parameters. Each TTCN-3 primitive type will have a specific editing component providing tools to ease modification of values of that type.
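The per-type editing components map naturally onto a type-to-editor dispatch. The Python sketch below is a schematic rendering of that idea; the class names and registry are assumptions, not OpenTTCN IDE's actual API:

```python
# A schematic rendering of per-type editing components: each TTCN-3
# primitive type is registered with its own editor (names are invented).

class IntegerEditor:
    def edit(self, value: str) -> str:
        return str(int(value))            # validate and normalize an integer

class BooleanEditor:
    def edit(self, value: str) -> str:
        if value not in ("true", "false"):
            raise ValueError("not a TTCN-3 boolean")
        return value

EDITORS = {"integer": IntegerEditor(), "boolean": BooleanEditor()}

def open_editor(param_type: str, value: str) -> str:
    """Open the editing component registered for the parameter's type."""
    return EDITORS[param_type].edit(value)

print(open_editor("integer", "042"))   # 42
print(open_editor("boolean", "true"))  # true
```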
Abstract:
Business functions use several separate information systems. Business processes comprise tasks performed by several different business functions. Ensuring a smooth flow of the information required and produced by these tasks calls for the integration of data and information systems, which has traditionally been implemented with direct connections between systems. This leads to an inflexible IT architecture. Service Oriented Architecture (SOA) promises greater flexibility for the IT architecture as well as cost savings. This work examines the theoretical background of service-oriented architecture and the idea of the service-oriented process description language BPMN. In the empirical part, theme interviews were used to gather the views of the target company and its system vendors on service-oriented architecture and the factors affecting it. In addition, the work examines what answers service-oriented architecture offers to the goals stated in the target company's IT strategy. As a result of the work, a process and service description was analysed following a service-oriented modelling method, and the SOA services supporting them were identified. Building on the outcomes of the method, an implementation solution based on an enterprise service bus was presented. Furthermore, a proposal was drafted for how the target company could begin applying service-oriented architecture.
Abstract:
This dissertation examined skill development in music reading by focusing on the visual processing of music notation in different music-reading tasks. Each of the three experiments of this dissertation addressed one of three types of music reading: (i) sight-reading, i.e. reading and performing completely unknown music, (ii) rehearsed reading, during which the performer is already familiar with the music being played, and (iii) silent reading with no performance requirements. The eye-tracking methodology allowed the readers' eye movements during music reading to be recorded with great precision. Due to the lack of coherence in the small body of prior studies on eye movements in music reading, the dissertation also had a heavy methodological emphasis. The present dissertation thus pursued two major aims: (1) to investigate the eye-movement indicators of skill and skill development in sight-reading, rehearsed reading and silent reading, and (2) to develop and test suitable methods for future studies on the topic. Experiment I focused on the eye-movement behaviour of adults during their first steps of learning to read music notation. The longitudinal experiment spanned a nine-month music-training period, during which 49 participants (university students taking part in a compulsory music course) sight-read and performed a series of simple melodies in three measurement sessions. Participants with no musical background were termed “novices”, whereas “amateurs” had had musical training prior to the experiment. The main interest was in the changes in the novices' eye movements and performances across the measurements, while the amateurs offered a point of reference for assessing the novices' development. The experiment showed that the novices tended to sight-read in a more stepwise fashion than the amateurs, the latter group manifesting more back-and-forth eye movements. The novices' skill development was reflected in the faster identification of note symbols involved in larger melodic intervals. Across the measurements, the novices also began to show sensitivity to the melodies' metrical structure, which the amateurs demonstrated from the very beginning. The stimulus melodies consisted of quarter notes, making the effects of meter and larger melodic intervals distinguishable from effects caused by, say, different rhythmic patterns. Experiment II explored the eye movements of 40 experienced musicians (music education students and music performance students) during temporally controlled rehearsed reading. This cross-sectional experiment focused on the eye-movement effects of one-bar melodic alterations placed within a familiar melody. Synchronizing the performance and eye-movement recordings enabled the investigation of the eye-hand span, i.e. the temporal gap between a performed note and the point of gaze. The eye-hand span was typically found to remain around one second. Music performance students demonstrated increased processing efficiency through their shorter average fixation durations as well as in the two examined eye-hand span measures: these participants used larger eye-hand spans more frequently and inspected more of the musical score during the performance of one metrical beat than the students of music education.
Although all participants produced performances almost indistinguishable in terms of their auditory characteristics, the altered bars indeed affected the reading of the score: the general effects of expertise in terms of the two eye-hand span measures, demonstrated by the music performance students, disappeared in the face of the melodic alterations. Experiment III was a longitudinal experiment designed to examine the differences between adult novice and amateur musicians' silent reading of music notation, as well as the changes the 49 participants manifested during a nine-month music course. From a methodological perspective, a novel contribution to research on eye movements in music reading was the inclusion of a verbal protocol in the research design: after viewing the musical image, the readers were asked to describe what they had seen. A two-way categorization of the verbal descriptions was developed in order to assess the quality of the extracted musical information. A more extensive musical background was related to shorter average fixation durations, more linear scanning of the musical image, and more sophisticated verbal descriptions of the music in question. No apparent effects of skill development were observed for the novice music readers alone, but all participants improved their verbal descriptions towards the last measurement. Apart from the background-related differences between groups of participants, combining the verbal and eye-movement data in a cluster analysis identified three styles of silent reading. This finding demonstrated individual differences in how the freely defined silent-reading task was approached. This dissertation is among the first series of experiments to systematically address the visual processing of music notation in various types of music-reading tasks, focusing especially on the eye-movement indicators of developing music-reading skill. Overall, the experiments demonstrate that music-reading processes are affected not only by “top-down” factors, such as musical background, but also by the “bottom-up” effects of specific features of music notation, such as pitch heights, metrical division, rhythmic patterns and unexpected melodic events. From a methodological perspective, the experiments emphasize the importance of systematic stimulus design, temporal control during performance tasks, and the development of complementary methods to ease the interpretation of the eye-movement data. To conclude, this dissertation suggests that advances in comprehending the cognitive aspects of music reading, the nature of expertise in this musical task, and the development of educational tools can be attained through the systematic application of the eye-tracking methodology in this specific domain.
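The eye-hand span computation that the synchronized recordings enable can be sketched as follows; the data format (one fixation time and one onset time per note) is a simplifying assumption, and real analyses operate on full fixation sequences:

```python
# A sketch of the eye-hand span: for each performed note, the time by which
# the gaze reached the note before the hands played it.

def eye_hand_spans(fixation_times: dict, onset_times: dict) -> dict:
    """Note id -> span in seconds (positive: gaze ahead of the hands)."""
    return {note: round(onset_times[note] - fixation_times[note], 2)
            for note in onset_times if note in fixation_times}

fixations = {"n1": 0.20, "n2": 0.55, "n3": 1.10}  # first fixation on each note
onsets    = {"n1": 1.25, "n2": 1.60, "n3": 2.05}  # performed note onsets
print(eye_hand_spans(fixations, onsets))
# {'n1': 1.05, 'n2': 1.05, 'n3': 0.95} -- around one second, as reported above
```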
Abstract:
A (mainly votive) missal consisting of seven distinct parts, put together in several stages, somewhat haphazardly. Parts II and III are probably the oldest. The final stage in the composition of the book was probably the addition of part VII. Part II belongs to the same liturgical tradition as C.ö.IV.7 (Oripään Missale I), probably that of the Diocese of Linköping. Part III, a votive missal, is an informal copy of a book that would most probably have been used close to a Swedish cathedral (Linköping?). How the present book found its way to Oripää chapel is not known.
Abstract:
Software product metrics aim at measuring the quality of software. Modularity is an essential factor in software quality. In this work, metrics related to modularity, and especially to the cohesion of modules, are considered. The existing metrics are evaluated, and several new alternatives are proposed. The idea behind the cohesion of modules is that a module or a class should consist of related parts. The closely related principle of coupling says that the relationships between modules should be minimized. First, internal cohesion metrics are considered. The relations that are internal to classes are shown to be useless for quality measurement. Second, we consider external relationships for cohesion. A detailed analysis using design patterns and refactorings confirms that external cohesion is a better quality indicator than internal cohesion. Third, motivated by the successes (and problems) of external cohesion metrics, another kind of metric is proposed that represents the quality of the modularity of software. This metric can be applied to refactorings related to classes, resulting in a refactoring suggestion system. To describe the metrics formally, a notation for programs is developed. Because of the recursive nature of programming languages, the properties of programs are most compactly represented using grammars and formal languages. The tools used for metrics calculation are also described.
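As an example of the internal cohesion metrics that the work evaluates (and finds wanting as quality indicators), the classic LCOM metric counts method pairs that share no attributes against pairs that share at least one. A textbook-style sketch, not the thesis's own tooling:

```python
from itertools import combinations

# LCOM (Lack of Cohesion of Methods), a classic internal cohesion metric:
# P = method pairs sharing no attribute, Q = pairs sharing at least one.

def lcom(methods: dict) -> int:
    """methods: method name -> set of attributes used; returns max(P - Q, 0)."""
    p = q = 0
    for m1, m2 in combinations(methods.values(), 2):
        if m1 & m2:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

rectangle = {"area": {"w", "h"}, "scale": {"w", "h"}, "describe": {"name"}}
print(lcom(rectangle))  # 1: two pairs share nothing, one pair shares {w, h}
```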