15 results for 005 Computer programming, programs
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The aim of this work is to examine what possibilities digital storytelling offers in comprehensive schools. The work discusses digital storytelling and how it is used in teaching. The background of the work is the 2016 national core curriculum drawn up by the Finnish National Agency for Education. Programming is a new addition to the curriculum and is discussed in more detail. In the future, technologies such as coding, robotics and augmented reality can support creativity, innovativeness and problem-solving skills. The work is a literature review in which the topic is analysed with the help of source literature. Digital storytelling in the classroom has limitless possibilities. Digital storytelling supports the goals of the new curriculum. With digital storytelling, children can be engaged in the learning process, their own strengths can be brought out, and they get to discover and solve problems themselves. Programming, robotics and augmented reality provide new tools for teaching. Programming is an intellectually motivating way of thinking. The use of technology in teaching increases study motivation and the joy of working together.
Abstract:
The main objective of this master's thesis is to study robot programming using simulation software, and also how to embed the simulation software into the company's own robot controlling software. A further goal is to study a new communication interface to the assembly line's components, more precisely how to connect the robot cell to this new communication system. Conveyor lines in which the conveyors use the new communication standard are already available. The robot cell is not yet capable of communicating with other devices using the new communication protocols. The main problem among robot manufacturers is that they all have their own communication systems and programming languages. There was no common programming language for programming the robots of different manufacturers until the RRS (Realistic Robot Simulation) standards were developed. RRS-II makes it possible to create robot programs in the simulation software and provides a common user interface for different manufacturers' robots. This thesis presents the RRS-II standard and the robot manufacturers' current level of support for it. The thesis shows how the simulation software can be embedded into the company's own robot controlling software and how the robot cell can be connected to the CAMX (Computer Aided Manufacturing using XML) communication system.
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure, and then incrementally extends the program in steps of adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses.
Our hypothesis is that verification could be introduced early in the CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
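The invariant-first workflow described above can be illustrated with a toy sketch: the invariant is written down before the loop body, and re-checked after each incremental addition of code. This uses plain Python assertions, not the diagram and theorem-prover workflow of Socos itself.

```python
def sum_of_first(n: int) -> int:
    """Return 0 + 1 + ... + (n - 1), maintaining an explicit loop invariant."""
    i, total = 0, 0
    # Invariant established before the loop: total == sum(range(i)), 0 <= i <= n
    assert total == sum(range(i)) and 0 <= i <= n
    while i < n:
        total += i
        i += 1
        # The invariant is re-established after each step of added code.
        assert total == sum(range(i)) and 0 <= i <= n
    # Postcondition follows from the invariant and the negated loop guard.
    assert i == n and total == sum(range(n))
    return total

print(sum_of_first(10))  # 45
```

In the actual method these checks are discharged statically as verification conditions by a theorem prover rather than tested at run time.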
Abstract:
Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues: Structured derivations is a logic-based approach to teaching mathematics, where formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind, and has a simple syntax compared to many other popular languages. 
The aim of using it in instruction is to address algorithms and their implementation in a way that allows focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
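The pedagogical point about Python's low syntactic overhead can be seen in a short classic algorithm: the algorithmic idea stays visible without type declarations, braces, or manual memory handling. This is only an illustrative sketch, not an exercise from the courses discussed.

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2      # midpoint of the remaining range
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1            # discard the lower half
        else:
            high = mid - 1           # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```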
Abstract:
Programming skill is something about whose learning and teaching there are many opinions, and there seems to be no single right way to organize programming instruction. It is clear, however, that some methods and tools seem to be better than others. At the end of the 2005-2006 academic year, Lappeenranta University of Technology decided to update its introductory programming instruction and experimented with switching to the Python programming language in its introductory programming courses. Since the actual changes to the course focused on the technical infrastructure, a preliminary literature study first explored different approaches, earlier cases, and the search for suitable tools. This master's thesis examines the tools of programming education and, in particular, the use of the Python programming language in introductory programming instruction. The thesis presents several approaches and focuses on studying Python's suitability for introductory teaching. The thesis also reviews the results of the introductory programming course held in Lappeenranta, and analyses whether a Python-based course was able to meet the requirements the technical university set for it. Finally, the needs for further research are analysed from the material, and an attempt is made to identify the areas that such further research should still develop.
Abstract:
The purpose of this master's thesis is to develop a computer program for calculating the shell-side pressure drop of a shell-and-tube heat exchanger. The program can be used during the sizing phase of a heat exchanger to verify that the shell-side pressure drop does not exceed the allowed limits. The program complements existing sizing programs. This thesis deals only with shell-and-tube heat exchangers used in steam power plant processes. The literature part of the thesis explains the principle of the steam power plant process and the shell-and-tube heat exchangers used in it, and presents the construction, general design, and thermal-hydraulic sizing of shell-and-tube heat exchangers. The equations used in the pressure drop calculation, presented in the chapter on thermal-hydraulic sizing, are based on the Bell-Delaware method. The pressure drop calculation program has been implemented using Microsoft Excel spreadsheets and the Visual Basic programming language. The pressure drop calculation is based on single-phase flow on the shell side of a shell-and-tube heat exchanger equipped with segmental baffles. The pressure drop of the condensing section of the heat exchanger is assumed to be negligible, so the total pressure drop is formed in the steam and condensate coolers. The developed program is designed especially for calculating the pressure drop formed in the condensate cooler. Pressure drop values calculated with the program have been compared with values measured from a real heat exchanger. The calculated values correspond well with the measured ones, and no systematic error appears in the results. The program is ready to be used as a sizing tool for shell-and-tube heat exchangers. Based on this thesis, suggestions for further development of the program have been made.
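The kind of single-phase pressure-drop computation such a sizing tool performs can be sketched in a few lines. The thesis program uses the Bell-Delaware method, whose shell-side correlations are far more involved; the Darcy-Weisbach form below is only a simplified stand-in, and all input values are illustrative.

```python
def pressure_drop_pa(density, velocity, length, diameter, friction_factor=0.02):
    """Darcy-Weisbach pressure drop (Pa) for single-phase flow in a straight duct.

    density          fluid density [kg/m^3]
    velocity         mean flow velocity [m/s]
    length, diameter duct length and hydraulic diameter [m]
    friction_factor  dimensionless Darcy friction factor (assumed constant here)
    """
    return friction_factor * (length / diameter) * density * velocity ** 2 / 2

# Illustrative example: water-like fluid at 2 m/s through a 5 m, 25 mm duct.
dp = pressure_drop_pa(density=1000.0, velocity=2.0, length=5.0, diameter=0.025)
print(f"{dp:.0f} Pa")  # 8000 Pa
```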
Abstract:
This bachelor's thesis studies the introductory teaching of programming, a central topic in basic computer science education, and the problems related to it. The work looks into the basic teaching methods and approaches of programming instruction, as well as solutions that can make the teaching more effective. These solutions include, among others, the choice of programming language, finding a suitable development environment, and searching for teaching aids that support the course. In addition, choosing the activities related to running the course, such as exercises and possible weekly assignments, is part of this work. The work itself approaches the topic by studying Python's suitability for introductory programming teaching, for example by comparing it with other common teaching languages such as C, C++ and Java. It examines the strengths and weaknesses of the language, and studies whether Python can naturally be used as the primary teaching language. In addition, the work looks into everything that should be taught on such a course, how the course would be most effectively carried out, and what kind of technical framework it would be sensible to choose for implementing the course.
Abstract:
This master's thesis examines threaded programming at the upper hierarchy level of parallel programming, focusing in particular on hyper-threading technology. The thesis examines the advantages and disadvantages of hyper-threading and its effects on parallel algorithms. The goal of the work was to understand the implementation of hyper-threading in the Intel Pentium 4 processor and to make it possible to exploit it where it brings a performance advantage. Performance data was collected and analysed by running a large set of benchmarks under different conditions (memory handling, compiler settings, environment variables, ...). Two types of algorithms were examined: matrix operations and sorting. These applications have a regular memory access pattern, which is a double-edged sword. It is an advantage in arithmetic-logical processing, but on the other hand it degrades memory performance. The reason is that modern processors have very good raw performance when processing regular data, but the memory architecture is limited by cache sizes and various buffers. When the problem size exceeds a certain limit, the actual performance can drop to a fraction of the peak performance.
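The memory-access effect described above can be sketched by summing the same matrix in row-major (sequential) and column-major (strided) order. In a low-level language the strided version slows down sharply once the matrix no longer fits in cache; in Python the effect is muted by interpreter overhead, so treat this only as an illustration of the access patterns, not a faithful benchmark.

```python
import time

N = 500
# A dense N x N matrix stored as a list of rows.
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def sum_row_major(m):
    """Traverse rows first: consecutive elements of one row are adjacent."""
    return sum(m[i][j] for i in range(len(m)) for j in range(len(m[0])))

def sum_col_major(m):
    """Traverse columns first: each access jumps to a different row."""
    return sum(m[i][j] for j in range(len(m[0])) for i in range(len(m)))

assert sum_row_major(matrix) == sum_col_major(matrix)
for f in (sum_row_major, sum_col_major):
    start = time.perf_counter()
    f(matrix)
    print(f.__name__, f"{time.perf_counter() - start:.4f} s")
```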
Abstract:
The skill of programming is a key asset for every computer science student. Many studies have shown that it is a hard skill to learn and that the outcomes of programming courses have often been substandard. Thus, a range of methods and tools have been developed to assist students' learning processes. One of the biggest fields in computer science education is the use of visualizations as a learning aid, and many visualization-based tools have been developed to aid the learning process during the last few decades. The studies conducted in this thesis focus on two different visualization-based tools, TRAKLA2 and ViLLE. This thesis includes results from multiple empirical studies about what kind of effects the introduction and usage of these tools have on students' opinions and performance, and what kind of implications there are from a teacher's point of view. The results from the studies in this thesis show that students preferred to do web-based exercises and felt that those exercises contributed to their learning. The usage of the tool motivated students to work harder during their course, which was shown in overall course performance and drop-out statistics. We have also shown that visualization-based tools can be used to enhance the learning process, and that one of the key factors is a higher and more active level of engagement (see the Engagement Taxonomy by Naps et al., 2002). The automatic grading, accompanied by immediate feedback, helps students to overcome obstacles during the learning process and to grasp the key elements of the learning task. These kinds of tools can help us to cope with the fact that many programming courses are overcrowded and have limited teaching resources. These tools allow us to tackle this problem by utilizing automatic assessment in exercises that are most suitable to be done on the web (like tracing and simulation), since this supports students' independent learning regardless of time and place.
In summary, we can use our courses' resources more efficiently to increase the quality of the learning experience of the students and the teaching experience of the teacher, and even increase the performance of the students. There are also methodological results from this thesis which contribute to developing insight into the conduct of empirical evaluations of new tools or techniques. When we evaluate a new tool, especially one accompanied by visualization, we need to give a proper introduction to it and to the graphical notation used by the tool. The standard procedure should also include capturing the screen with audio to confirm that the participants of the experiment are doing what they are supposed to do. By taking such measures in studies of the learning impact of visualization support, we can avoid drawing false conclusions from our experiments. As computer science educators, we face two important challenges. Firstly, we need to start to deliver the message, in our own institutions and all over the world, about new scientifically proven innovations in teaching like TRAKLA2 and ViLLE. Secondly, we have relevant experience of conducting teaching-related experiments, and thus we can support our colleagues in learning the essential know-how of research-based improvement of their teaching. This approach can turn academic teaching into publications, and by utilizing it we can significantly increase the adoption of new tools and techniques and the overall knowledge of best practices. In the future, we need to combine our forces and tackle these universal and common problems together by creating multi-national and multi-institutional research projects. We need to create a community and a platform in which we can share these best practices and at the same time conduct multi-national research projects easily.
Abstract:
In this thesis, computer software for defining the geometry of a centrifugal compressor impeller is designed and implemented. The project was done under the supervision of the Laboratory of Fluid Dynamics at Lappeenranta University of Technology. This thesis is similar to the thesis written by Tomi Putus (2009), in which a centrifugal compressor impeller flow channel is researched and commonly used design practices are reviewed. Putus wrote computer software which can be used to define the impeller's three-dimensional geometry based on the basic geometrical dimensions given by a preliminary design. The software designed in this thesis is broadly similar, but it uses a different programming language (C++) and a different way to define the shape of the impeller's meridional projection.
Abstract:
In this thesis, simple methods have been sought to lower the teacher's threshold to start applying constructive alignment in instruction. From the phases of the instructional process, aspects that can be improved with little effort by the teacher have been identified. Teachers have been interviewed in order to find out what students actually learn in computer science courses. A quantitative analysis of the structured interviews showed that in addition to subject-specific skills and knowledge, students learn many other skills that should be mentioned in the learning outcomes of the course. The students' background, such as their prior knowledge, learning style and culture, affects how they learn in a course. A survey was conducted to map the learning styles of computer science students and to see if their cultural background affected their learning style. A statistical analysis of the data indicated that computer science students are different learners than engineering students in general and that there is a connection between the students' culture and learning style. In this thesis, a simple self-assessment scale based on Bloom's revised taxonomy has been developed. A statistical analysis of the test results indicates that in general the scale is quite reliable, but single students still slightly overestimate or underestimate their knowledge levels. For students, being able to follow their own progress is motivating, and for a teacher, self-assessment results give information about how the class is proceeding and what the level of the students' knowledge is.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which are able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
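The graph-of-nodes-and-queues structure described above can be sketched in a few lines: nodes communicate only through explicit queues, and each node fires once sufficient input tokens are available. This is a toy illustration of the dataflow idea, not RVC-CAL or the compiler infrastructure of the thesis.

```python
from queue import Queue

def source(out: Queue, values):
    """Producer node: emits tokens into its output queue."""
    for v in values:
        out.put(v)

def square(inp: Queue, out: Queue, count: int):
    """Transform node: fires once per available input token."""
    for _ in range(count):
        out.put(inp.get() ** 2)

def sink(inp: Queue, count: int):
    """Consumer node: collects the final tokens."""
    return [inp.get() for _ in range(count)]

# Wire the nodes together: source -> square -> sink.
q1, q2 = Queue(), Queue()
source(q1, [1, 2, 3, 4])
square(q1, q2, 4)
result = sink(q2, 4)
print(result)  # [1, 4, 9, 16]
```

Because the nodes touch only their own queues, each could equally run on its own thread or core; here they are fired sequentially for clarity, which is where the scheduling problem discussed above begins.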
Abstract:
New technologies emerging in the recent decade have brought new options for cross-platform computer graphics development. This master's thesis looked at the possibilities of cross-platform 3D graphics development. All platform-dependent and non-real-time solutions were excluded. WebGL and two different OpenGL-based solutions were assessed via a demo application using the most recent development tools. In the results, the pros and cons of each solution are noted.
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, there are many issues which still revolve around many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45x speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values.
This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, in modern many-core systems the cores support dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate thermal sensors across a range of voltage levels.
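The core idea of dynamic load balancing described above can be sketched with a shared work queue: workers pull tasks as they finish, so faster (or less congested) workers automatically take on more work. The actual Single-Chip Cloud Computer implementation is message-based and far more elaborate; this is only the essential pattern.

```python
import threading
from queue import Queue, Empty

def run_balanced(tasks, num_workers=4):
    """Run callables from a shared queue on num_workers worker threads."""
    queue, results, lock = Queue(), [], threading.Lock()
    for t in tasks:
        queue.put(t)

    def worker():
        while True:
            try:
                task = queue.get_nowait()   # pull the next unit of work
            except Empty:
                return                      # queue drained: worker retires
            value = task()
            with lock:                      # collect results safely
                results.append(value)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

# Eight toy "fault simulation" tasks distributed over four workers.
out = run_balanced([lambda i=i: i * i for i in range(8)])
print(sorted(out))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The result order depends on which worker finishes first, which is exactly the run-time variability that a static task split cannot absorb.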
Abstract:
The memristor is one of the fundamental components of electronics, alongside the resistor, the capacitor, and the inductor. It is a passive component whose theory was developed by Leon Chua in 1971. It took, however, more than thirty years before the theory could be connected to experimental results. In 2008, Hewlett-Packard published an article in which they claimed to have fabricated the first working memristor. The memristor, or memory resistor, is a resistive component whose resistance value can be changed. As its name suggests, the memristor can also retain its resistance value without continuous current or voltage. Typically a memristor has at least two resistance values, either of which can be selected by feeding the component voltage or current. For this reason, memristors are often called resistive switches. Resistive switches are widely studied today, especially because of the memory technology they enable. Memory built from resistive switches is called ReRAM (resistive random access memory). Like Flash memory, ReRAM is a non-volatile memory that can be electrically programmed or erased. Flash memory is currently used, for example, in memory sticks. ReRAM, however, enables faster and lower-current operation compared to Flash, so it is a serious competitor on the market in the future. ReRAM also makes it possible to store more than one bit in a single memory cell instead of binary (0 or 1) operation. Typically a ReRAM memory cell has two limiting resistance values, but it may be possible to program several states between these two. Memory cells can be called analog if the number of states is not limited. With analog memory cells it would be possible to build, for example, neural networks efficiently. Neural networks aim to model the operation of the brain and to perform tasks that are typically difficult for traditional computer programs.
Neural networks are used, for example, in speech recognition and artificial intelligence applications. This master's thesis examines the analog operation of a Ta2O5-based ReRAM memory cell, keeping in mind its suitability for neural networks. The fabrication and measurement results of the ReRAM memory cell are covered. The operation of a memory cell is rarely fully analog, because there is often a limited number of states between the two limiting resistance values. For this reason, the operation is called pseudo-analog. The measurement results show that a single ReRAM memory cell is well capable of binary operation. To some extent a single cell can store several states, but the resistance values vary greatly between consecutive programming cycles, which makes interpretation difficult. The fabricated ReRAM memory cell cannot act as a pseudo-analog memory as such, but requires a current-limiting component alongside it. Developing the fabrication process would also reduce the variance in the operation of a single cell, making its behaviour more like a pseudo-analog memory.