18 results for Lambda iteration
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
This research has focused on the development of a tuned systematic design methodology, which gives the best performance in a computer aided environment and utilises a cross-technological approach, especially tested with and for laser-processed microwave mechanics. A tuned design process scheme is also presented. Because of the currently large production volumes of microwave and radio frequency mechanics, even slight improvements in design methodologies or manufacturing technologies would offer considerable possibilities for cost reduction. The typical number of required iteration cycles could be reduced to one fifth of normal. The research area dealing with the methodologies is divided firstly into function-oriented, performance-oriented and manufacturability-oriented product design. Alternatively, various approaches can be developed for customer-oriented, quality-oriented, cost-oriented or organisation-oriented design. However, the real need for improvements lies between these two extremes. This means that an effective methodology for designers should be neither too limited (like performance-oriented design) nor too general (like organisation-oriented design), but should include the context of the design environment. This is the area on which the current research is focused. To test the developed tuned design methodology for laser processing (TDMLP) and the tuned optimising algorithm for laser processing (TOLP), seven different industrial product applications for microwave mechanics were designed, CAD-modelled and manufactured by laser in small production series. To verify that the performance of these products meets the required level and to ensure the objectiveness of the results, extensive laboratory tests were carried out on all designed prototypes. As an example, a Ku-band horn antenna can be laser processed from steel in 2 minutes while obtaining electrical performance comparable to that of classical aluminium units, and the residual resistance of a laser joint in steel could be limited to 72 milliohms.
Abstract:
In this Master's thesis, a wideband antenna element for a linear antenna array was designed, built, and measured. The element operates in the microwave range, and its bandwidth is about 4.8:1. The element consists of a double-sided exponentially tapered slot antenna, i.e. a Vivaldi antenna, and a wideband transition from stripline to double-sided slotline. The size of the element at its lowest operating frequency is about 0.31 lambda by 0.34 lambda, of which the antenna horn accounts for only about 0.21 lambda by 0.21 lambda. The element was designed with the HFSS simulation software and built from two separate printed circuit boards pressed together with an aluminium frame. Measurements verified the operation of the element and the reliability of the simulations. It was shown that the element can be designed with the simulation software and built in the manner used in this work. It was also shown that the number of dimensioning simulations required can be reduced by combining a separately dimensioned slot antenna and transition. In addition, simulations showed that the element also works in an array and that its performance can be improved by means of the frame.
Survey of process efficiency improvement targets in a pressurized water reactor nuclear power plant
Abstract:
The aim of this work is to survey targets for improving the process efficiency of a pressurized water reactor nuclear power plant. First, means of improving the efficiency of an ideal steam power plant process are sought from the literature. From these, the most suitable are selected for examination in a real power plant: enhancing feedwater preheating by increasing the extraction steam flow and by adding heat transfer surface to the feedwater preheaters. The examination aims to find the best possible efficiency by changing the pipe size of the extraction steam lines and the number of tubes in the preheaters. The iteration step of the discrete optimization is determined from the partial derivatives of the efficiency. The changes are simulated with the APROS simulation software, using the VVER-440 model of the Loviisa power plant. It was found that by increasing only the pipe sizes of the extraction steam lines, and thus the mass flow, the efficiency of the Loviisa power plant can be improved at best from 32.75% to 32.85%. Adding heat transfer surface to the feedwater preheaters yields a larger improvement in efficiency: from 32.75% to 32.99%. In these cases all the extraction steam lines or all the feedwater preheater heat surfaces were modified. The work also examined some smaller modification targets, of which the best efficiency gain was obtained by increasing the heat transfer surface of the high-pressure preheaters and by the combined effect of modifications to the second extraction steam line (RD12) and its corresponding feedwater preheater.
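The discrete optimization step described above, chosen from partial derivatives of the efficiency, can be sketched roughly as follows. This is a minimal illustration, not the thesis' procedure: the quadratic surrogate stands in for an APROS simulation run, and the variable names, units, and step sizes are hypothetical.

```python
# Hedged sketch: choose the next discrete design change from finite-difference
# partial derivatives of plant efficiency. The surrogate below stands in for
# an APROS run; variable names and step sizes are illustrative assumptions.

def efficiency(design):
    """Toy surrogate for a simulation run, returning net efficiency in %."""
    dn = design["extraction_pipe_dn"]   # mm, extraction line diameter (hypothetical)
    tubes = design["preheater_tubes"]   # number of preheater tubes (hypothetical)
    return (32.75 + 0.0008 * (dn - 300) - 1e-6 * (dn - 300) ** 2
                  + 0.0004 * (tubes - 1400) - 4e-7 * (tubes - 1400) ** 2)

steps = {"extraction_pipe_dn": 50, "preheater_tubes": 100}   # smallest increments
design = {"extraction_pipe_dn": 300, "preheater_tubes": 1400}

while True:
    base = efficiency(design)
    # Finite-difference partial derivative of efficiency for each discrete step
    grads = {var: (efficiency(dict(design, **{var: design[var] + step})) - base) / step
             for var, step in steps.items()}
    var = max(grads, key=grads.get)
    if grads[var] <= 0:
        break                       # local optimum: no discrete step improves efficiency
    design[var] += steps[var]       # take the steepest admissible step
print(design, f"{efficiency(design):.2f} %")
```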
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray level images are presented. As a new application for distance transforms, they are applied to gray level image compression. Both new distance transforms are extensions of the well known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real value distance transform (EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3–10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All the other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way. The DTOCS gives a weighted version of the chessboard distance map. The weights are not constant, but gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Commonly distance transforms are used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group of methods compares the DTOCS distance to the binary image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. the compression ratio. Also a new morphological image decompression scheme is presented, the 8 kernels' method. Several decompressed images are presented. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
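The two-pass calculation described above can be sketched roughly as follows, assuming a chessboard kernel with a local distance equal to the gray-value difference plus one; this reading of the abstract, and the function name, are assumptions rather than the thesis' verbatim algorithm.

```python
import numpy as np

# Hedged sketch of a DTOCS-style gray-level distance transform: repeated
# forward/backward raster sweeps with local distance |dG| + 1 per chessboard
# step. Illustrative only; details are assumptions based on the abstract.

def dtocs(gray, region):
    """gray: 2-D integer array of gray levels.
    region: 2-D bool array, True where the distance is calculated;
            False pixels form the zero-distance reference set."""
    g = gray.astype(np.int64)
    big = np.iinfo(np.int64).max // 2
    d = np.where(region, big, 0).astype(np.int64)
    h, w = g.shape
    # Neighbors already visited in a forward raster scan, and their mirror
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]

    def sweep(offsets, ys, xs):
        changed = False
        for y in ys:
            for x in xs:
                if not region[y, x]:
                    continue
                for dy, dx in offsets:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        cand = d[ny, nx] + abs(g[y, x] - g[ny, nx]) + 1
                        if cand < d[y, x]:
                            d[y, x] = cand
                            changed = True
        return changed

    # Repeat the two-pass round until convergence (typically a few rounds)
    while True:
        c1 = sweep(fwd, range(h), range(w))
        c2 = sweep(bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))
        if not (c1 or c2):
            break
    return d
```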
Abstract:
A model to solve heat and mass balances during off-design load calculations was created. These equations are complex and nonlinear. The main new ideas used in the created off-design model of a kraft recovery boiler are: the use of heat flows as torn iteration variables instead of the current practice of using mass flows; vectorizing the equation solving, thus speeding up the process; using non-dimensional variables for solving the multiple heat transfer surface problem; and using a new procedure for calculating pressure losses. Recovery boiler heat and mass balances are reduced to vector form. It is shown that these vectorized equations can be solved virtually without iteration. The iteration speed is enhanced by the use of the derived method of calculating multiple heat transfer surfaces simultaneously. To achieve this quick convergence, the heat flows were used as the torn iteration parameters. A new method to handle pressure loss calculations with linearization was presented; this method enabled less time to be spent on calculating pressure losses. The derived vector representation of the steam generator was used to calculate off-design operation parameters for a 3000 tds/d example recovery boiler. The model was used to study recovery boiler part load operation and the effect of a black liquor dry solids increase on recovery boiler dimensioning. Heat flows to surface elements for part load calculations can be closely approximated with a previously defined exponent function. The exponential method can be used for the prediction of fouling in kraft recovery boilers. For similar furnaces, firing 80 % dry solids liquor produces a lower hearth heat release rate than firing 65 % dry solids liquor at constant steam flow. The furnace outlet temperatures show that a capacity increase through a firing rate increase produces higher loadings than a capacity increase through a dry solids increase. The economizers, boiler banks and furnaces can be dimensioned smaller if the black liquor dry solids content is increased. The main problem with increased black liquor dry solids content is the decrease in the heat available for superheating. Whenever possible, the furnace exit temperature should be increased by decreasing the furnace height. An increase in the furnace exit temperature is usually opposed because of fear of increased corrosion.
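The tearing idea above can be illustrated with a minimal single-surface sketch: with the heat flow Q fixed, the energy balances give the outlet temperatures explicitly, and the surface rating equation then updates Q. This is a generic counterflow heat exchanger example; all parameter values are illustrative assumptions, not the thesis' recovery boiler model.

```python
import math

# Hedged sketch: use the heat flow Q of one heat transfer surface as the torn
# iteration variable. All numbers below are illustrative assumptions.

U, A = 50.0, 200.0            # W/(m2 K), m2 - surface rating parameters
mg_cp = 40e3                  # W/K, flue gas side capacity rate
mw_cp = 60e3                  # W/K, water side capacity rate
tg_in, tw_in = 600.0, 150.0   # deg C, inlet temperatures (counterflow)

def lmtd(dt1, dt2):
    """Log-mean temperature difference of the two terminal differences."""
    return (dt1 - dt2) / math.log(dt1 / dt2) if abs(dt1 - dt2) > 1e-9 else dt1

q = 1e6  # W, initial guess for the torn heat flow
for it in range(100):
    tg_out = tg_in - q / mg_cp         # explicit balance, gas side
    tw_out = tw_in + q / mw_cp         # explicit balance, water side
    q_new = U * A * lmtd(tg_in - tw_out, tg_out - tw_in)  # re-rate the surface
    if abs(q_new - q) < 1.0:
        break
    q += 0.7 * (q_new - q)             # under-relaxed successive substitution
print(f"Q = {q / 1e6:.3f} MW after {it + 1} iterations")
```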
Abstract:
This thesis concentrates on developing a practical local approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson–Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized midpoint algorithms is proposed for the Gurson–Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during the numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true midpoint algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson–Tvergaard model to calculate the current void–matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson–Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model; hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant, and the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of welded T-joints can be well predicted with the present methodology.
This application has shown how the damage parameters of both the base material and the heat affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
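As a minimal illustration of the generalized midpoint family mentioned above (not the thesis' actual Gurson–Tvergaard return mapping), consider a scalar rate equation y' = f(y). The generalized midpoint update is y_{n+1} = y_n + Δt·f((1−α)y_n + α·y_{n+1}), solved here by Newton iteration; α = 0 gives Euler forward, α = 1 Euler backward, and α = 0.5 the true midpoint rule singled out in the abstract.

```python
# Hedged sketch of a generalized midpoint step for a scalar rate equation.
# Illustration of the algorithm family only, not the Gurson-Tvergaard model.

def generalized_midpoint_step(f, dfdy, y_n, dt, alpha=0.5, tol=1e-10):
    y = y_n  # initial Newton guess
    for _ in range(50):
        y_mid = (1.0 - alpha) * y_n + alpha * y
        r = y - y_n - dt * f(y_mid)            # residual of the implicit update
        if abs(r) < tol:
            return y
        # Consistent derivative of the residual (cf. the role of consistent
        # tangent moduli in the Newton iteration)
        drdy = 1.0 - dt * alpha * dfdy(y_mid)
        y -= r / drdy
    raise RuntimeError("Newton iteration did not converge")

# Usage example: integrate y' = -2*y one step of dt = 0.1 from y = 1.0
y1 = generalized_midpoint_step(lambda y: -2.0 * y, lambda y: -2.0, 1.0, 0.1)
print(y1)  # about 0.8182 with the true midpoint rule (alpha = 0.5)
```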
Abstract:
Modern sophisticated telecommunication devices require ever more comprehensive testing to ensure quality. The number of test cases needed to ensure sufficient testing coverage has increased rapidly, and this increased demand can no longer be fulfilled by manual testing alone. New agile development models also require execution of all test cases with every iteration. This has led manufacturers to use test automation more than ever to achieve adequate testing coverage and quality. This thesis is divided into three parts. The first part begins with the evolution of cellular networks and also examines software testing, test automation and the influence of the development model on testing. The second part describes a process which was used to implement a test automation scheme for functional testing of the LTE core network MME element. Agile development models and the Robot Framework test automation tool were used in implementing the test automation scheme. The third part presents two alternative models for integrating this test automation scheme into a continuous integration process. As a result, the test automation scheme for functional testing was implemented. Almost all new functional level test cases can now be automated with this scheme. In addition, two models for integrating this scheme into a wider continuous integration pipeline were introduced. The shift in testing from a traditional waterfall model to a new agile development based model was also stated to be successful.
Abstract:
This work examines the technical productization steps required for CE marking, using as an example the design process of an automated cloth-changing device for a pressure filter. The work investigates the options for classifying pressure filter auxiliary devices so that they can be productized in the European Economic Area (EEA). The cloth-changing device used as an example has been designed using the method of systematic machine design. The risk analysis required for CE marking has been carried out for the device in accordance with the standard SFS-EN ISO 12100:2010. The resulting device prototype largely fulfils the requirements set for it. The cost estimate, however, exceeds the desired cost price because of the relatively high price of the light curtains. According to the cost estimate, the prototype can nevertheless be manufactured inexpensively, since the light curtains are not mandatory for the functional tests of the device. Before productization, however, the possibility of replacing the light curtains with other safety technology must be investigated. After the design phase, the safety of the device can be stated to be at least at an adequate level. The risk analysis is, however, a document that must be kept up to date, and the safety of the device must be verified by testing the prototype. Based on this work, it can be stated that taking the hazards the device may cause into account already at the beginning of product design can speed up the product development process. Identifying hazards at an early stage of design reduces the number of risks and thus the need for risk reduction. This reduces the number of iteration rounds of structural design and risk analysis, which also speeds up the productization process.
Abstract:
This doctoral thesis introduces an improved control principle for active du/dt output filtering in variable-speed AC drives, together with performance comparisons with previous filtering methods. The effects of power semiconductor nonlinearities on the output filtering performance are investigated. The nonlinearities include the timing deviation and the voltage pulse waveform distortion in the variable-speed AC drive output bridge. Active du/dt output filtering (ADUDT) is a method to mitigate motor overvoltages in variable-speed AC drives with long motor cables. It is a quite recent addition to the available du/dt reduction methods. This thesis improves on the existing control method for the filter, and concentrates on the low-voltage (below 1 kV AC) two-level voltage-source inverter implementation of the method. The ADUDT uses narrow voltage pulses, with a duration in the order of a microsecond, from an IGBT (insulated gate bipolar transistor) inverter to control the output voltage of a tuned LC filter circuit. The filter output voltage thus has increased slope transition times at the rising and falling edges, with an opportunity of no overshoot. The effect of the longer slope transition times is a reduction in the du/dt of the voltage fed to the motor cable. Lower du/dt values result in a reduction in the overvoltage effects at the motor terminals. Compared with traditional output filtering methods for this task, active du/dt filtering provides lower inductance values and a smaller physical size of the filter itself; the filter circuit weight can also be reduced. However, the power semiconductor nonlinearities skew the filter control pulse pattern, resulting in control deviation. This deviation introduces unwanted overshoot and resonance in the filter. The control method proposed in this thesis is able to directly compensate for the dead time-induced zero-current clamping (ZCC) effect in the pulse pattern. It gives more flexibility to the pattern structure, which could help in the timing deviation compensation design. Previous studies have shown that when a motor load current flows in the filter circuit and the inverter, the phase leg blanking times distort the voltage pulse sequence fed to the filter input. These blanking times are caused by excessively large dead time values between the IGBT control pulses. Moreover, the various switching timing distortions, present in real-world electronics when operating on a microsecond timescale, bring additional skew to the control. Left uncompensated, this results in distortion of the filter input voltage and a filter self-induced overvoltage in the form of an overshoot. This overshoot adds to the voltage appearing at the motor terminals, thus increasing the transient voltage amplitude at the motor. This doctoral thesis investigates the magnitude of such timing deviation effects. If the motor load current is left uncompensated in the control, the filter output voltage can overshoot up to double the input voltage amplitude. IGBT nonlinearities were observed to cause a smaller overshoot, in the order of 30%. This thesis introduces an improved ADUDT control method that is able to compensate for phase leg blanking times, giving flexibility to the pulse pattern structure and dead times. The control method is still sensitive to timing deviations, and their effect is investigated. A simple approach of using a fixed delay compensation value was tried in the test setup measurements.
The ADUDT method with the new control algorithm was found to work in an actual motor drive application. Judging by the simulation results, with the delay compensation the method should ultimately enable an output voltage performance and a du/dt reduction that are free from residual overshoot effects. The proposed control algorithm is not strictly required for successful ADUDT operation: it is possible to precalculate the pulse patterns by iteration and then, for instance, store them in a look-up table inside the control electronics. Rather, the newly developed control method is a mathematical tool for solving the ADUDT control pulses. It does not contain the timing deviation compensation (from the logic-level command to the phase leg output voltage), and as such is not able to remove the timing deviation effects that cause error and overshoot in the filter. When the timing deviation compensation has to be tuned in within the control pattern, the precalculated iteration method could prove simpler and equally good (or even better) compared with the mathematical solution with a separate timing compensation module. One of the key findings in this thesis is the conclusion that the correctness of the pulse pattern structure, in the sense of ZCC and predicted pulse timings, cannot be separated from the timing deviations. The usefulness of the correctly calculated pattern is reduced by the voltage edge timing errors. The doctoral thesis provides an introductory background chapter on variable-speed AC drives and the problem of motor overvoltages, and takes a look at traditional solutions for overvoltage mitigation. Previous results related to active du/dt filtering are discussed. The basic operation principle and design of the filter have been studied previously. The effect of load current in the filter and the basic idea of compensation have been presented in the past. However, there was no direct way of including the dead time in the control (except for solving the pulse pattern manually by iteration), and the magnitude of the nonlinearity effects had not been investigated. The enhanced control principle with dead time handling capability and a case study of the test setup timing deviations are the main contributions of this doctoral thesis. The simulation and experimental setup results show that the proposed control method can be used in an actual drive. Loss measurements and a comparison of active du/dt output filtering with traditional output filtering methods are also presented in the work. Two different ADUDT filter designs are included, with ferrite core and air core inductors. Other filters included in the tests were a passive du/dt filter and a passive sine filter. The loss measurements incorporated a silicon carbide diode-equipped IGBT module, and the results show lower losses with these new device technologies. The new control principle was measured in a 43 A load current motor drive system and was able to bring the filter output peak voltage from 980 V (with the previous control principle) down to 680 V in a variable-speed drive with a 540 V average DC link voltage. A 200 m motor cable was used, and the filter losses for the active du/dt methods were 111 W–126 W versus 184 W for the passive du/dt filter. In terms of inverter and filter losses, the active du/dt filtering method had a 1.82-fold increase in losses compared with an all-passive traditional du/dt output filter.
The filter mass with the active du/dt method was 2.4 kg (air-core inductors), about 17% of the 14 kg of the passive du/dt method filter. Silicon carbide freewheeling diodes were found to reduce the inverter losses in active du/dt filtering by 18% compared with the same IGBT module with silicon diodes. For a 200 m cable length, the average peak voltage at the motor terminals was 1050 V with no filter, 960 V with the all-passive du/dt filter, and 700 V with active du/dt filtering applying the new control principle.
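The precalculation-by-iteration alternative mentioned above can be sketched as follows, assuming an idealized lossless LC filter driven by a single charging pulse ("on for t1, off for t2, then on"); the component values, the one-pulse pattern, and the search grid are illustrative assumptions, not the thesis' design. For a lossless filter the overshoot-free timings land near one sixth of the resonance period.

```python
import numpy as np

# Hedged sketch: precalculate an active du/dt pulse pattern by iteration.
# Simulate the LC filter response to candidate timings (t1, t2) and grid-search
# for minimum output overshoot above the DC link. Values are assumptions.

def peak_output(t1, t2, L=2e-6, C=1e-6, udc=540.0, dt=5e-9, t_end=30e-6):
    """Semi-implicit Euler simulation of the LC filter; returns peak output voltage."""
    i = u = peak = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        uin = 0.0 if t1 <= t < t1 + t2 else udc   # on, off, then on
        i += dt * (uin - u) / L                   # series inductor current
        u += dt * i / C                           # output capacitor voltage
        peak = max(peak, u)
    return peak

def precalculate(candidates):
    """Grid-search pulse timings minimizing the output peak (overshoot)."""
    return min(((peak_output(t1, t2), t1, t2)
                for t1 in candidates for t2 in candidates),
               key=lambda r: r[0])

# Resonance period of this example filter is 2*pi*sqrt(L*C) ~ 8.9 us,
# so scan timings around T/6 ~ 1.5 us.
grid = np.linspace(1.0e-6, 2.0e-6, 21)
best_peak, best_t1, best_t2 = precalculate(grid)
print(f"t1 = {best_t1 * 1e6:.2f} us, t2 = {best_t2 * 1e6:.2f} us, peak = {best_peak:.0f} V")
```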
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
In this work, a permanent magnet rotor suitable for high-speed use was developed for the stator frame of an existing induction machine. The purpose of the development work was to determine the mechanical limit values of the rotor, such as the maximum peripheral speed. At the same time, a position was taken on the required analysis and dimensioning methods. Determining the maximum peripheral speed, the bearing arrangement and the scalability of the rotor also required a thorough material survey and optimization. For this reason, close cooperation with material suppliers took place during the work. As a result of the work, a new method was created for implementing a permanent magnet rotor producing radial magnetic flux for a peripheral speed of 200 m/s. The designed rotor solution is used as a test rotor for determining the limits of manufacturing, assembly and electric power in practice. The design work required continuous iteration with the electrical design and with the manufacturers of the rotor parts in order to find the best compromise solution for the rotor prototype. As a result, fairly precise design and analysis limit values were created for productized versions of the permanent magnet rotor.
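For orientation, a peripheral speed limit translates into an allowable rotational speed once a rotor diameter is fixed, n = v/(π·d); the 200 m/s limit is from the abstract, while the diameter below is a hypothetical example value.

```python
import math

# Hedged arithmetic sketch: rotational speed implied by a peripheral speed
# limit. The 200 m/s limit is from the abstract; the diameter is assumed.

v_max = 200.0     # m/s, peripheral speed limit from the abstract
d_rotor = 0.10    # m, hypothetical rotor outer diameter

n_rpm = 60.0 * v_max / (math.pi * d_rotor)
print(f"{n_rpm:.0f} rpm")  # about 38 200 rpm for a 100 mm rotor
```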
Abstract:
In this Master's thesis the characteristics of chosen fractal microstrip antennas are investigated. The structure of square Sierpinski fractal curves has been used for the modeling. During the elaboration of this Master's thesis the following steps were undertaken: 1) calculation and simulation of a square microstrip antenna, 2) optimization for obtaining the required characteristics at the frequency 2.5 GHz, 3) simulation and calculation of the second and third iterations of the Sierpinski fractal curves, 4) radiation patterns and intensity distributions of these antennas. In this Master's thesis the search for the optimal position of the port and the fractal elements was conducted. In the future, these structures can be used for the creation of antennas working simultaneously in different frequency ranges.
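The square Sierpinski geometry (the Sierpinski carpet) behind the fractal iterations can be generated as below; this sketch only illustrates the 0/1 mask geometry of the second and third iterations, not the thesis' electromagnetic simulation.

```python
import numpy as np

# Hedged sketch: iterations of a square Sierpinski (carpet) mask. Each
# iteration subdivides every filled square into 3x3 cells and removes the
# centre cell. Geometry illustration only; not the antenna model itself.

def sierpinski_carpet(iterations):
    """Return a (3**n x 3**n) 0/1 mask of the n-th carpet iteration."""
    block = np.array([[1, 1, 1],
                      [1, 0, 1],
                      [1, 1, 1]], dtype=np.uint8)
    mask = np.ones((1, 1), dtype=np.uint8)
    for _ in range(iterations):
        mask = np.kron(mask, block)  # subdivide and drop each centre cell
    return mask

# Usage: the abstract's second and third iterations
second = sierpinski_carpet(2)   # 9 x 9 mask
third = sierpinski_carpet(3)    # 27 x 27 mask
print(second)
```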
Abstract:
The objective of this thesis is to better understand the customer's role in the lean startup methodology. The aim is to find out how customers are involved in the implementation of the lean startup methodology and how they increase the likelihood of new venture survival. This study emphasizes the use of customers in shaping new product development processes within companies, through iteration and constant communication. This communication facilitates the development of features that are requested by the customers and enhances the prospects of the new venture. The empirical part of the study is a single qualitative case study that uses action research to implement the lean startup methodology in a pre-revenue venture and examines its customer involvement processes. The studied case company is Karaoke d.o.o., developing a game called kParty. The study used the theory discussed in the literature review: customer involvement (in the survey and interviews conducted for the lean startup methodology), lean principles (through the implementation of the lean startup methodology) and the lean startup methodology itself, which are the central building blocks of this thesis as a whole. The thesis contributes to the understanding of customer involvement in the lean startup methodology, while giving practical implications of customer orientation and product-market fit.
Abstract:
The goal of this research is to critically analyze current theories and methods of intangible asset evaluation and potentially to develop and test a new methodology based on practical example(s) from the IT industry. With this goal in mind, the main research questions in this paper are: What are the advantages and disadvantages of current practices for measuring intellectual capital or valuing intangible assets? How should intellectual capital be properly measured in IT? The resulting method exhibits a new, unique approach to IC measurement and a potentially even larger field of application. Despite the fact that in this particular research I focused my attention on IT (the software and Internet services cluster, to be exact), the logic behind the method is applicable within any industry, since the method is designed to be fully compliant with measurement theory and can thus be properly scaled for any application. Building a new method is a difficult and iterative process: in the current iteration the method stands as a theoretical concept rather than a business tool; however, even the current concept fully fulfils its purpose as a benchmarking tool for measuring intellectual capital in the IT industry.
Abstract:
The context of this study is corporate e-learning, with an explicit focus on how digital learning design can facilitate self-regulated learning (SRL). The field of e-learning is growing rapidly. An increasing number of corporations use digital technology and e-learning for training their workforce and customers. E-learning may offer economic benefits, as well as opportunities for interaction and communication that traditional teaching cannot provide. However, the evolving variety of digital learning contexts makes new demands on learners, requiring them to develop strategies to adapt and cope with novel learning tools. This study derives from the need to learn more about learning experiences in digital contexts in order to be able to design these properly for learning. The research question targets how the design of an e-learning course influences participants' self-regulated learning actions and intentions. SRL involves learners' ability to exercise agency in their learning. Micro-level SRL processes were targeted by exploring behaviour, cognition, and affect/motivation in relation to the design of the digital context. Two iterations of an e-learning course were tested on two groups of participants (N=17). However, the exploration of SRL extends beyond the educational design research perspective of comparing the effects of the changes to the course designs. The study was conducted in a laboratory with each participant individually. Multiple types of data were collected. However, the results presented in this thesis are based on screen observations (including eye tracking) and video-stimulated recall interviews. These data were integrated in order to achieve a broad perspective on SRL. The most essential change evident in the second course iteration was the addition of feedback during practice and the final test. Without feedback on actions, there was an observable difference between those who were instruction-directed and those who were self-directed in manipulating the context and thus persisted whenever faced with problems. In the second course iteration, including the feedback, this kind of difference was not found. Feedback provided the tipping point for participants to regulate their learning by identifying their knowledge gaps and exploring the learning context in a targeted manner. Furthermore, the course content was consistently seen from a pragmatic perspective, which influenced the participants' choice of actions, showing that real-life relevance is an important need of corporate learners. This also relates to assessment and the consideration of its purpose in relation to participants' work situation. The rigidity of the multiple choice questions, focusing on the memorisation of details, influenced the participants to adopt an approach of surface learning. It also caused frustration in cases where the participants' epistemic beliefs were incompatible with this kind of assessment style. Triggers of positive and negative emotions could be categorized into four levels: personal factors, instructional design of content, interface design of context, and technical solution. In summary, the key design choices for creating a positive learning experience involve feedback, flexibility, functionality, fun, and freedom. The design of the context impacts the regulation of behaviour and cognition, as well as affect and motivation. The learners' awareness of these areas of regulation in relation to learning in a specific context is their ability for design-based epistemic metareflection.
I describe this metareflection as knowing how to manipulate the context behaviourally for maximum learning, being metacognitively aware of one’s learning process, and being aware of how emotions can be regulated to maintain volitional control of the learning situation. Attention needs to be paid to how the design of a digital learning context supports learners’ metareflective development as digital learners. Every digital context has its own affordances and constraints, which influence the possibilities for micro-level SRL processes. Empowering learners in developing their ability for design-based epistemic metareflection is, therefore, essential for building their digital literacy in relation to these affordances and constraints. It was evident that the implementation of e-learning in the workplace is not unproblematic and needs new ways of thinking about learning and how we create learning spaces. Digital contexts bring a new culture of learning that demands attitude change in how we value knowledge, measure it, define who owns it, and who creates it. Based on the results, I argue that digital solutions for corporate learning ought to be built as an integrated system that facilitates socio-cultural connectivism within the corporation. The focus needs to shift from designing static e-learning material to managing networks of social meaning negotiation as part of a holistic corporate learning ecology.