46 results for visual programming
Abstract:
The problem of understanding how humans perceive the quality of a reproduced image is of interest to researchers in many fields related to vision science and engineering: optics and material physics, image processing (compression and transfer), printing and media technology, and psychology. A measure of visual quality cannot be defined without ambiguity because it is ultimately the subjective opinion of an “end-user” observing the product. The purpose of this thesis is to devise computational methods to estimate the overall visual quality of prints, i.e. a numerical value that combines all the relevant attributes of the perceived image quality. The problem is restricted to the perceived quality of printed photographs from the viewpoint of a consumer, and moreover, the study focuses only on digital printing methods, such as inkjet and electrophotography. The main contributions of this thesis are two novel methods for estimating the overall visual quality of prints. In the first method, the quality is computed as a visible difference between the reproduced image and the original digital (reference) image, which is assumed to have ideal quality. The second method utilises instrumental print quality measures, such as colour densities, measured from printed technical test fields, and connects the instrumental measures to the overall quality via subjective attributes, i.e. attributes that directly contribute to the perceived quality, using a Bayesian network. Both approaches were evaluated and verified with real data and shown to predict the subjective evaluation results well.
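The first method above computes quality as a visible difference between the reproduced print and the digital reference. As a minimal sketch of that idea only — not the thesis's actual vision model — one can compare a scan of the print against the reference in a perceptually more uniform colour space; the CIE76 colour difference and the averaging into one score are illustrative assumptions:

```python
import numpy as np
from skimage import color  # scikit-image

def visible_difference_score(reference_rgb, print_rgb):
    """Crude visible-difference score between a digital reference image
    and a scan of the print (both float RGB arrays in [0, 1]): mean
    per-pixel Delta-E in CIELAB. The thesis's method models human
    vision in far more detail; this only illustrates the comparison."""
    ref_lab = color.rgb2lab(reference_rgb)
    prt_lab = color.rgb2lab(print_rgb)
    delta_e = np.sqrt(((ref_lab - prt_lab) ** 2).sum(axis=-1))  # CIE76
    return float(delta_e.mean())  # lower = closer to the ideal reference
```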
Abstract:
The skill of programming is a key asset for every computer science student. Many studies have shown that this is a hard skill to learn and that the outcomes of programming courses have often been substandard. Thus, a range of methods and tools have been developed to assist students’ learning processes. One of the biggest fields in computer science education is the use of visualizations as a learning aid, and many visualization-based tools have been developed to aid the learning process during the last few decades. The studies conducted in this thesis focus on two different visualization-based tools, TRAKLA2 and ViLLE. This thesis includes results from multiple empirical studies of the effects the introduction and usage of these tools have on students’ opinions and performance, and of the implications from a teacher’s point of view. The results show that students preferred to do web-based exercises and felt that those exercises contributed to their learning. The usage of the tools motivated students to work harder during their course, which was reflected in overall course performance and drop-out statistics. We have also shown that visualization-based tools can be used to enhance the learning process, and that one of the key factors is a higher, active level of engagement (see the Engagement Taxonomy by Naps et al., 2002). Automatic grading accompanied by immediate feedback helps students to overcome obstacles during the learning process and to grasp the key elements of the learning task. Such tools can help us cope with the fact that many programming courses are overcrowded and teaching resources limited: they allow us to tackle this problem by applying automatic assessment to the exercises best suited to the web (such as tracing and simulation), since this supports students’ independent learning regardless of time and place. In summary, we can use a course’s resources more efficiently to increase the quality of the learning experience for the students and of the teaching experience for the teacher, and even to increase the students’ performance. This thesis also presents methodological results that contribute to the conduct of empirical evaluations of new tools or techniques. When we evaluate a new tool, especially one accompanied by visualization, we need to give a proper introduction to it and to the graphical notation the tool uses. The standard procedure should also include capturing the screen together with audio, to confirm that the participants of the experiment are doing what they are supposed to do. By taking such measures in studies of the learning impact of visualization support, we can avoid drawing false conclusions from our experiments. As computer science educators, we face two important challenges. Firstly, we need to start delivering the message, in our own institutions and all over the world, about new, scientifically proven innovations in teaching such as TRAKLA2 and ViLLE. Secondly, we have relevant experience of conducting teaching-related experiments, and thus we can help our colleagues learn the essential know-how of research-based improvement of their teaching. In this way, work on academic teaching can be turned into publications, and by utilizing this approach we can significantly increase the adoption of new tools and techniques and overall increase the knowledge of best practices.
In the future, we need to join forces and tackle these universal and common problems together by creating multi-national and multi-institutional research projects. We need to create a community and a platform in which we can share these best practices and at the same time conduct multi-national research projects easily.
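The automatic assessment of tracing and simulation exercises mentioned above can be pictured as comparing the sequence of states a student submits against a reference execution, with immediate feedback at the first divergence. A minimal sketch under that assumption — the data format and feedback wording are invented for illustration, not taken from TRAKLA2 or ViLLE:

```python
def grade_trace(student_states, reference_states):
    """Compare a student's submitted trace (a list of states, e.g. array
    snapshots during a sorting exercise) to the reference trace step by
    step; return a score in [0, 1] and feedback on the first error."""
    correct = 0
    for step, (got, expected) in enumerate(zip(student_states, reference_states)):
        if got != expected:
            return correct / len(reference_states), \
                   f"Step {step}: expected {expected}, got {got}"
        correct += 1
    return correct / len(reference_states), "All steps correct."

# Example: tracing one pass of bubble sort on [3, 1, 2]
reference = [[1, 3, 2], [1, 2, 3]]
print(grade_trace([[1, 3, 2], [3, 1, 2]], reference))  # (0.5, 'Step 1: ...')
```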
Abstract:
Western societies have been faced with the fact that overweight, impaired glucose regulation and elevated blood pressure are already prevalent in pediatric populations. This will inevitably mean an increase in later manifestations of cardio-metabolic diseases. The dilemma has been suggested to stem from fetal life, and it is surmised that the early nutritional environment plays an important role in the process called programming. The aim of the present study was to characterize the early nutritional determinants associated with cardio-metabolic risk factors in fetuses, infants and children. Further, the study was designed to establish whether dietary counseling initiated in early pregnancy can modify this cascade. Healthy mother-child pairs (n=256) participating in a dietary intervention study were followed from early pregnancy to childhood. The intervention included detailed dietary counseling by a nutritionist targeting saturated fat intake in excess of recommendations and fiber consumption below recommendations. Cardio-metabolic programming was studied by characterizing the offspring’s cardio-metabolic risk factors, such as over-activation of the autonomic nervous system, elevated blood pressure and an adverse metabolic status (e.g. high serum split proinsulin concentration). Fetal cardiac sympathovagal activation was measured during labor. Postnatally, children’s blood pressure was measured at six-month and four-year follow-up visits. Further, infants’ metabolic status was assessed by means of growth and serum biomarkers (32-33 split proinsulin, leptin and adiponectin) at the age of six months. This study showed that fetal cardiac sympathovagal activity was positively associated with maternal pre-pregnancy body mass index, indicating adverse cardio-metabolic programming in the offspring. Further, a reduced risk of high split proinsulin in infancy and lower blood pressure in childhood were found in those offspring whose mothers’ weight gain and amount and type of dietary fats during pregnancy were as recommended. Of note, maternal dietary counseling from early pregnancy onwards could ameliorate the offspring’s metabolic status by reducing the risk of a high split proinsulin concentration, although it had no effect on the other cardio-metabolic markers in the offspring. In the postnatal period, breastfeeding proved to confer benefits in cardio-metabolic programming. Finally, the recommended dietary protein and total fat content in the child’s diet were important nutritional determinants reducing blood pressure at the age of four years. The intrauterine and immediate postnatal periods comprise a window of opportunity for interventions aiming to reduce the risk of cardio-metabolic disorders, and they bring the prospect of achieving health benefits over one generation.
Abstract:
Local features are used in many computer vision tasks, including visual object categorization, content-based image retrieval and object recognition, to mention a few. Local features are points, blobs or regions in images that are extracted using a local feature detector. To make use of the extracted local features, the localized interest points are described using a local feature descriptor. A descriptor histogram vector is a compact representation of an image and can be used for searching and matching images in databases. In this thesis, the performance of local feature detectors and descriptors is evaluated for the object class detection task. Features are extracted from image samples belonging to several object classes. Matching features are then searched for using random image pairs of the same class. The goal of this thesis is to find out which detector and descriptor methods are best for such a task in terms of detector repeatability and descriptor matching rate.
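As a sketch of the descriptor matching-rate part of such an evaluation — using OpenCV's ORB detector/descriptor as a stand-in for the specific methods compared in the thesis, with Lowe's ratio test and its 0.75 threshold as illustrative choices:

```python
import cv2

def matching_rate(img_path_a, img_path_b, ratio=0.75):
    """Detect and describe local features in two images of the same
    object class, match the descriptors, and report the fraction of
    keypoints in the first image with a match passing the ratio test."""
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    good = [p for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / max(len(kp_a), 1)
```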
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure, and then incrementally extends the program in steps of adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula.
Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early, practically oriented programming courses. Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
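Invariant-based programming starts from the invariants rather than the code. Purely as a loose illustration of that mindset — runtime-checked assertions standing in for the invariant diagrams and the PVS/Yices proofs that Socos actually uses — consider a sorting loop whose invariant is written down first and checked at every iteration:

```python
def insertion_sort(a):
    """Sort the list a in place. Invariant: at the start of each outer
    iteration i, the prefix a[0:i] is sorted. Here the invariant is only
    asserted at run time for one input; in invariant-based programming
    it is proved once, for all inputs."""
    for i in range(1, len(a)):
        assert all(a[k] <= a[k + 1] for k in range(i - 1)), "invariant broken"
        x, j = a[i], i
        while j > 0 and a[j - 1] > x:
            a[j] = a[j - 1]
            j -= 1
        a[j] = x
    return a

print(insertion_sort([3, 1, 4, 1, 5, 9, 2, 6]))
```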
Abstract:
Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, the focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize the application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues. Structured derivations is a logic-based approach to teaching mathematics in which formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation, at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind and has a simple syntax compared to many other popular languages. The aim of using it in instruction is to address algorithms and their implementation in a way that allows the focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
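A structured derivation makes every step and its justification explicit. The following small example sketches the general format (the exact notation varies between presentations and is not quoted from the thesis):

```latex
\begin{align*}
       & \; 3(x + 1) = x + 7 \\
\equiv & \quad \{\text{expand the left-hand side}\} \\
       & \; 3x + 3 = x + 7 \\
\equiv & \quad \{\text{subtract } x + 3 \text{ from both sides}\} \\
       & \; 2x = 4 \\
\equiv & \quad \{\text{divide both sides by } 2\} \\
       & \; x = 2
\end{align*}
```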
Abstract:
The use of domain-specific languages (DSLs) has been proposed as an approach to cost-effectively develop families of software systems in a restricted application domain. Domain-specific languages, in combination with the accumulated knowledge and experience of previous implementations, can in turn be used to generate new applications with unique sets of requirements. For this reason, DSLs are considered to be an important approach for software reuse. However, the toolset supporting a particular domain-specific language is also domain-specific and is by definition not reusable. Therefore, creating and maintaining a DSL requires additional resources that could be even larger than the savings associated with using it. As a solution, different tool frameworks have been proposed to simplify and reduce the cost of developing DSLs. Developers of tool support for DSLs need to instantiate, customize or configure the framework for a particular DSL. There are different approaches to this. One approach is to use an application programming interface (API) and to extend the basic framework using an imperative programming language; an example of a tool based on this approach is Eclipse GEF. Another approach is to configure the framework using declarative languages that are independent of the underlying framework implementation. We believe this second approach can bring important benefits, as it puts the focus on specifying what the tool should be like instead of writing a program specifying how the tool achieves this functionality. In this thesis we explore this second approach. We use graph transformation as the basic approach to customize a domain-specific modeling (DSM) tool framework. The contributions of this thesis include a comparison of different approaches for defining, representing and interchanging software modeling languages and models, and a tool architecture for an open domain-specific modeling framework that efficiently integrates several model transformation components and visual editors. We also present several specific algorithms and tool components for the DSM framework. These include an approach for graph queries based on region operators and the star operator, and an approach for reconciling models and diagrams after executing model transformation programs. We exemplify our approach with two case studies, MICAS and EFCO, in which we show how our experimental modeling tool framework has been used to define tool environments for domain-specific languages.
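Customizing a modeling tool by graph transformation, as described above, amounts to matching a pattern in the model graph and rewriting the matched part. A toy sketch with an invented edge-list representation — the thesis's actual framework, query operators and rule notation are far richer:

```python
def apply_rule(edges, pattern_label, replacement_label):
    """One naive graph-transformation step: find the first edge whose
    label matches the pattern and rewrite its label. Edges are
    (source, label, target) triples; returns True if a match was found."""
    for i, (src, label, dst) in enumerate(edges):
        if label == pattern_label:
            edges[i] = (src, replacement_label, dst)
            return True
    return False

# Example: rewrite an 'association' edge in a tiny model graph
model = [("ClassA", "association", "ClassB"),
         ("ClassB", "inheritance", "ClassC")]
apply_rule(model, "association", "composition")
print(model)
```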
Abstract:
Since the discovery of the fatigue phenomenon, fatigue tests have mainly been carried out with constant-amplitude loading. A test situation that better reflects reality can, however, only be reached by using variable-amplitude loading that simulates the real loading of the structure under test. Testing with such loading in practice is considerably more difficult than with traditional constant-amplitude loading, because the variable-amplitude load spectrum must first be derived from somewhere, either through practical measurements or by analysing the service conditions of the structure. Nor is reproducing a known spectrum in practical tests entirely straightforward. This bachelor's thesis aimed to solve these problems by designing and implementing test software that can both generate and, in practice, replay a user-defined load spectrum in laboratory tests. For the latter purpose, an existing program was available that was to be utilised in this work. The task was divided into three parts: generating load spectra, combining load spectra, and finally replaying the spectra in the actual fatigue test. In the first two parts, Matlab was used as the programming environment; the third was based on the existing fatigue test program, and the practical programming was therefore done in ANSI C using Microsoft Visual Studio 6.0 as the compiler. The original fatigue test program required several significant changes before it was suitable for use in this context. The thesis describes the design and implementation phases of the programs at the level of principle. In addition, the thesis is intended to serve as a simple user manual and guide to the use of the whole software package.
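The spectrum-generation step described above can be pictured, in greatly simplified form, as drawing alternating load turning points from a chosen distribution. This sketch is in Python rather than the Matlab used in the work; the Gaussian amplitude model and its parameters are illustrative assumptions, whereas a real spectrum would come from measurements or a usage analysis:

```python
import random

def generate_spectrum(n_points, mean_load, amplitude_std, seed=None):
    """Generate a toy variable-amplitude load spectrum as a list of
    alternating turning points (peaks and valleys) around a mean load."""
    rng = random.Random(seed)
    spectrum = []
    for i in range(n_points):
        amplitude = abs(rng.gauss(0.0, amplitude_std))
        sign = 1 if i % 2 == 0 else -1  # alternate peak / valley
        spectrum.append(mean_load + sign * amplitude)
    return spectrum

print(generate_spectrum(6, mean_load=100.0, amplitude_std=25.0, seed=1))
```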
Abstract:
In this thesis, simple methods have been sought to lower the teacher’s threshold for starting to apply constructive alignment in instruction. Among the phases of the instructional process, aspects that can be improved with little effort by the teacher have been identified. Teachers were interviewed in order to find out what students actually learn in computer science courses. A quantitative analysis of the structured interviews showed that in addition to subject-specific skills and knowledge, students learn many other skills that should be mentioned in the learning outcomes of the course. The students’ background, such as their prior knowledge, learning style and culture, affects how they learn in a course. A survey was conducted to map the learning styles of computer science students and to see whether their cultural background affected their learning style. A statistical analysis of the data indicated that computer science students are different learners than engineering students in general, and that there is a connection between a student’s culture and learning style. In this thesis, a simple self-assessment scale based on Bloom’s revised taxonomy has also been developed. A statistical analysis of the test results indicates that in general the scale is quite reliable, but single students may still slightly overestimate or underestimate their knowledge levels. For students, being able to follow their own progress is motivating, and for a teacher, self-assessment results give information about how the class is proceeding and what the level of the students’ knowledge is.
Abstract:
The large and growing number of digital images is making manual image search laborious. Only a fraction of the images contain metadata that can be used to search for a particular type of image. Thus, the main research question of this thesis is whether it is possible to learn visual object categories directly from images. Computers process images as long lists of pixels that do not have a clear connection to the high-level semantics that could be used in image search. Various methods have been introduced in the literature to extract low-level image features, along with approaches to connect these low-level features with high-level semantics. One of these approaches, called Bag-of-Features, is studied in this thesis. In the Bag-of-Features approach, the images are described using a visual codebook. The codebook is built from the descriptions of image patches using clustering. The images are then described by matching the descriptions of their image patches against the visual codebook and computing the number of matches for each code. In this thesis, unsupervised visual object categorisation using the Bag-of-Features approach is studied. The goal is to find groups of similar images, e.g., images that contain an object from the same category. The standard Bag-of-Features approach is improved by using spatial information and visual saliency. It was found that the performance of visual object categorisation can be improved by using the spatial information of local features to verify the matches. However, this process is computationally heavy, and thus the number of images must be limited in the spatial matching, for example by using the Bag-of-Features method as in this study. Different approaches to saliency detection are studied, and a new method based on the Hessian-Affine local feature detector is proposed. The new method achieves results comparable with the current state of the art. The visual object categorisation performance was improved by using foreground segmentation based on saliency information, especially when the background could be considered clutter.
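The Bag-of-Features pipeline summarised above — cluster patch descriptors into a codebook, then histogram each image's patches against the codes — can be sketched with scikit-learn's KMeans. Descriptor extraction is left abstract here, and the codebook size of 100 is an arbitrary illustrative choice:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(all_patch_descriptors, n_codes=100):
    """Cluster local feature descriptors pooled from the training images;
    the cluster centres form the visual codebook ('visual words')."""
    return KMeans(n_clusters=n_codes, n_init=10).fit(np.vstack(all_patch_descriptors))

def bag_of_features(image_descriptors, codebook):
    """Describe one image: assign each patch descriptor to its nearest
    code and count matches per code, giving a fixed-length histogram."""
    codes = codebook.predict(image_descriptors)
    hist = np.bincount(codes, minlength=codebook.n_clusters)
    return hist / hist.sum()  # normalise so image size does not matter
```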
Abstract:
This work surveys the possibilities for independently merging the bibliographic databases and patron registers of Voyager library systems. The starting assumption was that there are no rights, nor any contractual possibility, to write data directly into the target system's database using a query language. By making use of the system's documentation and its user community, I have sought to map out the options for transferring all the data required for full functionality. By exploiting the system's interfaces, cost savings and flexibility in scheduling the work can be achieved. For transferring bibliographic data in the Voyager library system, a program run as a batch job on the server can be used; this batch run can transfer both bibliographic records and holdings records. To write item records into the target database, a Visual Studio application that uses the cataloguing interface is employed. Patron data can be transferred with a batch job run on the server, whose input is a fixed-length input file. Loan data tied to patron records can be transferred to the target database using the client program's offline circulation function.
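The patron-data batch load mentioned above reads a fixed-length input file. A sketch of producing one such record follows; the field names, widths and order here are hypothetical placeholders, not Voyager's actual layout, which must be taken from the system's documentation:

```python
def fixed_length_record(values, widths):
    """Pack field values into one fixed-length line: each value is
    left-justified and padded (or truncated) to its assigned width."""
    return "".join(str(v)[:w].ljust(w) for v, w in zip(values, widths))

# Hypothetical layout: barcode (12), last name (20), first name (15)
line = fixed_length_record(["100000123456", "Virtanen", "Maija"], [12, 20, 15])
with open("patrons.dat", "w", encoding="utf-8") as f:
    f.write(line + "\n")
```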
Abstract:
This thesis was part of a lean adaptation project started at the Outotec Lappeenranta factory in early 2013. The purpose of this thesis was to develop and propose lean tools that could be used in daily management, visual management and continuous improvement. This thesis presents an outsider's view and, as such, did not study the current processes deeply. As a result of this thesis, two different Daily Management boards were designed, one for parallel processes and one for sequential processes. In addition, methods for continuous improvement and daily task accountability were framed, and standard work for the leaders was outlined. The tools presented in this thesis are general tools that support work in a lean environment. They are visual and, if used correctly, provide a basis from which continuous improvement can be done. Lean philosophy emphasizes a deep understanding of the current situation, and it would be against lean principles to blindly implement anything developed “on the outside”. The tools presented should be reviewed and modified further by the people working on the factory floor.
Abstract:
A topic that has attracted interest in both industry and research is Customer Relationship Management (CRM), i.e. a customer-oriented business strategy in which companies move from being product-oriented to being more customer-centric. Nowadays customers' behaviour and activities can easily be recorded and stored with the help of Enterprise Resource Planning (ERP) systems and Data Warehousing (DW). Customers with different preferences and buying behaviour create their own “signature”, particularly through the use of loyalty cards, which enables versatile modelling of their buying behaviour. To obtain an overview of customers' buying behaviour and their profitability, customer segmentation is often used as a method for dividing customers into groups based on their similarities. The most commonly used methods for customer segmentation are analytical models constructed for a given time period. These models do not take into account that customer behaviour may change over time. This dissertation creates a holistic overview of customers' characteristics and buying behaviour that, in addition to the conventional segmentation models, also considers the dynamics of buying behaviour. The dynamics of a customer segmentation model comprise changes in the structure and content of the segments, as well as changes in individual customers' membership of a segment (so-called migration analyses). The first dynamic is approached through temporal customer segmentation, which visualises the changes occurring over time in segment structures and profiles; the second through segment migration analysis, in which customers who switch between segments in similar ways are identified by visual means. Each kind of change is modelled, analysed and exemplified with visual data mining techniques, primarily Self-Organizing Maps (SOM) and Self-Organizing Time Maps (SOTM), an extension of the SOM. The visualisation is expected to facilitate the interpretation of the identified patterns and to smooth the process of knowledge transfer between the analyst and the decision-maker, demonstrating the usefulness of these methods both in customer relationship management in general and in customer segmentation in particular.
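As a sketch of the SOM-based segmentation discussed above, using the third-party minisom package — the grid size, the toy features and the training length are illustrative choices, and the temporal SOTM extension is not covered by this package:

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

# Toy customer data: rows = customers, columns = behavioural features
# (e.g. purchase frequency, average basket value, share of discounted items)
customers = np.random.rand(500, 3)

som = MiniSom(6, 6, input_len=3, sigma=1.0, learning_rate=0.5)
som.train_random(customers, num_iteration=5000)

# Each customer is assigned to the grid cell of its best-matching unit;
# cells, or groups of neighbouring cells, act as customer segments.
segments = [som.winner(c) for c in customers]
print(segments[:5])
```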
Abstract:
The goal of this work is to fit Qt into the curriculum. The work includes a short history of Qt and a review of its current state. The review covers three perspectives: how and where Qt can be used, and its uses in industry and in education. As a result of the work, a small program is produced for a lecture demonstration, created with C++ and Qt Designer and using the essential objects of the user interface library. As a second product, the work produces a draft of the programming courses at Lappeenranta University of Technology in which Qt could be used to help students see how a graphical program is created, and to prepare them to understand the benefits brought by frameworks and graphical libraries.
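The lecture demo itself was built with C++ and Qt Designer; purely to illustrate the “essential user interface library objects” idea — and written in Python with PyQt5 to keep the examples in this listing in a single language — a minimal Qt window might look like this:

```python
import sys
from PyQt5.QtWidgets import (QApplication, QWidget, QPushButton,
                             QLabel, QVBoxLayout)

app = QApplication(sys.argv)

window = QWidget()
window.setWindowTitle("Qt demo")
label = QLabel("Hello, Qt!")
button = QPushButton("Close")
button.clicked.connect(window.close)  # signal-slot connection

layout = QVBoxLayout(window)          # lay the widgets out vertically
layout.addWidget(label)
layout.addWidget(button)

window.show()
sys.exit(app.exec_())
```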
Abstract:
Workshop at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014