Abstract:
In a short time, the Internet has become the medium most used by tourists to plan, organise and purchase a trip, which is why we propose offering the same facilities at the destination. Dynamic advertising, or "Digital Signage", is a new communication service consisting of a set of technologies and computer applications that make it possible to broadcast multimedia messages and thus communicate in an innovative way with each company's target audience; if an independent, multimedia and interactive system that can be used to provide information and/or carry out transactions is added, the service is exploited to its full potential. We therefore propose creating a Digital Multimedia Network of Interactive Kiosks, each backed by a plasma screen for the Digital Signage technology. The strategically chosen location is one of the points with the greatest flow of tourists, such as hotel entrances. The aim is thus to create closed circuits in the geographical areas where the main tourist centres of Mallorca are located. The possibility of reaching population segments that are highly attractive for the product or service is multiplied, since this is an easy, effective and highly appealing way of promoting whatever is intended. One advantage is the simplicity of the technological infrastructure required: the device on which the messages are displayed is a conventional plasma screen, together with a point-of-sale terminal installed in a busy location. Each module is connected to the Internet over ADSL through a local server. The network connection is essential so that content maintenance and updates can be carried out remotely. The main objective of this work is to study the feasibility of deploying the network by carrying out a market study that analyses the key groups for its implementation: hoteliers, the tourism industry and the Balearic Government. The benefits the new service will bring and the repercussions its installation will have are identified. Among the most notable results of this study are the acceptance the idea has had among the hoteliers interviewed and the positive response of the tourism industry. The following are recognised: an improvement in the image of the sector, its use as a tourism-promotion tool by the Government, and a contribution to economic sustainability, since it increases the competitiveness of the companies and thereby improves the quality of service.
Abstract:
An AM (amplitude modulation) transmitter uses one of the many modulation techniques available today. Signal modulation is highly important, for reasons such as the following: it facilitates propagation of the signal over cable or through the air; it organises the spectrum, assigning channels to the different information streams; it reduces the size of the antennas; it optimises the bandwidth of each channel; it avoids interference between channels; it protects the information from degradation by noise; and it defines the quality of the transmitted information. The main objective of this work is to build an AM transmitter using electronic components available on the market. This is done through several design procedures. A theoretical design procedure is carried out using the datasheets of the different components. A simulation-based design procedure is then carried out, which makes it possible to test the design of the device and to work out parts that cannot be reproduced theoretically. Finally, the device is built in practice. Among the most relevant conclusions of this work, we would highlight the importance of simulation for designing radio-frequency circuits. This work has shown that, thanks to a good simulation, the first prototype of the device worked perfectly. We would also note the importance of a suitable antenna design in order to make the most of the performance of the device. In conclusion, building a transmitter provides a balanced grounding in electronics and telecommunications that is important for the design of communication devices.
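For reference, the textbook form of the signal such a transmitter produces (conventional double-sideband AM with a single-tone message; the abstract gives no equations, so this is only the standard relation, not the thesis design) is:

```latex
% Conventional AM: carrier amplitude A_c, modulation index m (0 < m <= 1),
% message frequency f_m, carrier frequency f_c, and sideband power fraction eta.
\[
  s(t) = A_c \left[ 1 + m \cos(2\pi f_m t) \right] \cos(2\pi f_c t),
  \qquad
  \eta = \frac{m^{2}}{2 + m^{2}}
\]
```

Here η is the fraction of the transmitted power carried by the information-bearing sidebands, at most 1/3 for full modulation (m = 1).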
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults. Therefore, other solutions against them are presented. Spare wires and split transmissions are shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved with the design of fault tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
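As an illustration of the kind of error control coding discussed at the data link layer (the specific codes evaluated in the thesis are not named in this abstract, so this is only a generic single-error-correcting example), a minimal Hamming(7,4) encoder/decoder sketch could look like this:

```python
# Minimal Hamming(7,4) sketch: encode 4 data bits into 7, correct any single-bit error.
# Illustrative software model only; NoC link coders are hardware blocks.

def encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):                      # c = received 7-bit word
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3 # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1        # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]] # recovered data bits

word = encode([1, 0, 1, 1])
word[5] ^= 1                        # inject a single transient bit error
assert decode(word) == [1, 0, 1, 1]
```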
Abstract:
Marine mammals are exposed to persistent organic pollutants (POPs), which may be biotransformed into metabolites, some of which are highly toxic. Both POPs and their metabolites may lead to adverse health effects, which have been studied using various biomarkers. Changes in endocrine homeostasis have been suggested to be sensitive biomarkers for contaminant-related effects. The overall objective of this doctoral thesis was to investigate the biotransformation capacity of POPs and their potential endocrine disruptive effects in two contrasting ringed seal populations, from the low contaminated Svalbard area and from the highly contaminated Baltic Sea. Biotransformation capacity was studied by determining the relationships between congener-specific patterns and concentrations of polychlorinated biphenyls (PCBs), organochlorine pesticides (OCPs), polybrominated diphenyl ethers (PBDEs) and their hydroxyl (OH)- and/or methylsulfonyl (MeSO2)-metabolites, and catalytic activities of hepatic xenobiotic-metabolizing phase I and II enzymes. The results suggest that the biotransformation of PCBs, PBDEs and toxaphenes in ringed seals depends on the congener-specific halogen-substitution pattern. Biotransformation products detected in the seals included OH-PCBs, MeSO2-PCBs and MeSO2-DDE, pentachlorophenol, 4-OH-heptachlorostyrene, and to a minor extent OH-PBDEs. The effects of life history state (moulting and fasting) on contaminant status and potential biomarkers for endocrine disruption, including hormone and vitamin homeostasis, were investigated in the low contaminated ringed seal population from Svalbard. Moulting/fasting status strongly affected thyroid, vitamin A and calcitriol homeostasis, body condition and concentrations of POPs and their OH-metabolites. In contrast, moulting/fasting status was not associated with variations in vitamin E levels. Endocrine disruptive effects on multiple endpoints were investigated in the two contrasting ringed seal populations. The results suggest that thyroid, vitamin A and calcitriol homeostasis may be affected by exposure to contaminants and/or their metabolites in the Baltic ringed seals. Complex and non-linear relationships were observed between the contaminant levels and the endocrine variables. Positive relationships between circulating free and total thyroid hormone concentration ratios and OH-PCBs suggest that OH-PCBs may mediate the disruption of thyroid hormone transport in plasma. Species differences in thyroid and bone-related effects of contaminants were studied in ringed and grey seals from low contaminated reference areas and from the highly contaminated Baltic Sea. The results indicate that these two species, living in the same environment at approximately the same trophic level, respond very differently to contaminant exposure. The results of this thesis suggest that the health status of the Baltic ringed seals has continued to improve during the last decade. PCB and DDE levels have decreased in these seals, and the contaminant-related effects are different today than a decade ago. The health of the Baltic ringed seals is, however, still suggested to be affected by contaminant exposure. At the present level of contaminant exposure, the Baltic ringed seals seem to be in a zone where their bodies are able to compensate for the contaminant-mediated endocrine disruption. Based on the results of this thesis, several recommendations that could be applied to monitoring and assessing the risk of contaminant effects are provided.
Circulating OH-metabolites should be included in monitoring and risk assessment programs due to their high toxic potential. It should be noted that endogenous variables may have complex and highly variable responses to contaminant exposure, including non-linear responses. These relationships may be further confounded by life history status. Therefore, when variables related to endocrine homeostasis are used to investigate, monitor or assess the risk of contaminant effects in seals, the life history status of the animal should be carefully taken into consideration. This applies especially when using thyroid, vitamin A or calcitriol-related parameters during the moulting/fasting period. Extrapolation between species when assessing the risk of contaminant effects in phocid seals should be avoided.
Abstract:
Hydrogen peroxide and chlorine are compared as possible disinfectants for water-cooling circuits. For this purpose, samples taken from the cooling system of a steelmaking plant were treated (at 25 °C and pH values of 5.5 and 8.5) with varying amounts of the two oxidizing agents (0.0 mg/L, 2.0 mg/L and 6.0 mg/L). The results were evaluated through bacterial counts and measurement of corrosion rates on AISI 1020 carbon steel coupons. Bacterial removal and corrosion effects proved to be similar and satisfactory for both reagents.
Abstract:
BACKGROUND: The Cancer Fast-track Programme's aim was to reduce the time that elapsed between well-founded suspicion of breast, colorectal and lung cancer and the start of initial treatment in Catalonia (Spain). We sought to analyse its implementation and overall effectiveness. METHODS: A quantitative analysis of the programme was performed using data generated by the hospitals on the basis of seven fast-track monitoring indicators for the period 2006-2009. In addition, we conducted a qualitative study, based on 83 semi-structured interviews with primary and specialised health professionals and health administrators, to obtain their perception of the programme's implementation. RESULTS: About half of all new patients with breast, lung or colorectal cancer were diagnosed via the fast track, though the cancer detection rate declined across the period. Mean time from detection of suspected cancer in primary care to start of initial treatment was 32 days for breast, 30 for colorectal and 37 for lung cancer (2009). Professionals involved in the implementation of the programme indicated that general practitioners faced with a suspicion of cancer had changed their conduct with the aim of preventing delays. Furthermore, hospitals were found to have pursued three specific implementation strategies (top-down, consensus-based and participatory), which made for the cohesion and sustainability of the circuits. CONCLUSION: The programme has contributed to speeding up the diagnostic assessment and treatment of patients with a suspicion of cancer, and to clarifying the patient pathway between primary and specialised care.
Abstract:
The present work describes the determination of polychlorinated biphenyls in 123 umbilical cord serum samples by a liquid-liquid extraction method with an acid hydrolysis step and analysis by GC-µECD. The analytical method was evaluated with the following figures of merit for all PCBs: linearity (>0.997); precision (<12.55%); recoveries (73-119%); limit of detection (0.1 ng mL⁻¹); limit of quantification (0.25-0.5 ng mL⁻¹). The results showed high contamination in the analyzed samples. The most frequently detected PCB was 138 (66.7%), followed by 180 (55.3%) and 52 (51.3%).
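The abstract does not state how the detection and quantification limits were estimated; a common calibration-based convention (stated here only as an assumption, not taken from the paper) is:

```latex
% Assumed convention for LOD/LOQ from calibration data:
% sigma = standard deviation of the blank (or of the regression residuals),
% S     = slope of the calibration curve.
\[
  \mathrm{LOD} = \frac{3.3\,\sigma}{S},
  \qquad
  \mathrm{LOQ} = \frac{10\,\sigma}{S}
\]
```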
Abstract:
FPGA circuits have become more powerful in recent years, while their price has dropped to a level at which they are an option for an ever wider range of applications. The topic of my bachelor's thesis was to design and possibly implement an embedded device that would count the number of pulses occurring in a signal. It would be used when measuring sparking in the bearings of an electric motor. The sparks are detected outside the motor with a UHF antenna. The pulses extracted from the antenna signal are very fast, so particular speed is also required of the digital logic. For this reason, the device was implemented with an FPGA instead of, for example, a microcontroller. The pulse counter was implemented on the FPGA relatively easily, and its operation was tested in practice under real conditions.
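The thesis targets an FPGA, but purely as an illustration of the counting principle (counting rising-edge threshold crossings), a hypothetical software model over a sampled signal could look like this sketch:

```python
import numpy as np

def count_pulses(samples, threshold):
    """Count rising-edge threshold crossings in a sampled signal.
    Software model only; the thesis realises the counter in FPGA logic."""
    above = samples > threshold               # samples over the threshold
    rising = above[1:] & ~above[:-1]          # upward crossings
    return int(rising.sum())

# Toy example: three narrow "spark" pulses on a quiet baseline.
t = np.linspace(0.0, 1.0, 10_000)
signal = np.zeros_like(t)
for centre in (0.2, 0.5, 0.8):
    signal += np.exp(-((t - centre) / 0.001) ** 2)

print(count_pulses(signal, threshold=0.5))    # prints 3
```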
Abstract:
This paper describes an analytical method for analyzing polychlorinated biphenyls in corn samples using solid phase extraction (SPE) followed by determination by GC-MS. All calibration curves proved linear (r > 0.99). Recoveries ranged between 74.1 and 110.6%, with relative standard deviations lower than 20% for all compounds. The limits of quantitation for the method were between 0.025 and 0.1 ng g⁻¹. Of the 51 samples analyzed, PCB 180 showed the highest detection frequency, being found in more than 39% of samples, followed by PCB 138, detected in more than 33% of samples.
Abstract:
In this work, noise and aromatic hydrocarbon levels at indoor and outdoor karting circuits located in Rio de Janeiro were assessed. The sampling was performed using active charcoal cartridges, followed by solvent desorption and analysis by gas chromatography with mass spectrometric detection. This study demonstrated that the karting circuits, venues for entertainment, were a major source of air pollution, with the detection of considerable amounts of these compounds (2.0 to 19.7 µg m⁻³ of benzene; 4.1 to 41.1 µg m⁻³ of toluene; 2.8 to 36.2 µg m⁻³ of ethylbenzene; 0.7 to 36.2 µg m⁻³ of xylenes) and high noise levels.
Abstract:
Polychlorinated biphenyls (PCBs) were widely used between 1940 and 1970 as insulating fluids for transformers and capacitors. However, they are bioaccumulative and potentially carcinogenic and, according to the 2001 Stockholm Convention, must be eliminated by 2025. In Brazil they have been gradually eliminated, but contaminated equipment remains. The Brazilian official standard for analysing PCB content in oil is ABNT NBR 13882, and there is also the IEC 61619 international standard, both based on GC-ECD quantification. This work identified the inefficiency of these analytical methods and highlights potential failures that generated discrepancies in the quantification of these contaminants. It was observed that IEC 61619 is superior to ABNT NBR 13882 in its analytical criteria, but suffers from the inefficiency of the adsorbent material used in the pretreatment to remove oxidation products from the oil: these adsorbents retain some PCB molecules, causing errors in quantification.
Abstract:
This master's thesis studied the use of chromatography, electrophoresis and spectrometry in environmental water analysis. In the experimental part, inorganic anions (F⁻, Cl⁻, Br⁻, NO3⁻, NO2⁻, SO4²⁻ and PO4²⁻) were analysed by both ion chromatography and capillary electrophoresis in landfill, wastewater, groundwater, surface water, swimming hall, private well, whirlpool and bog water samples collected by Saimaan Vesi- ja Ympäristötutkimus Oy. The samples were collected from municipalities in the Saimaa region. Thiosulfate was additionally analysed by capillary electrophoresis. Cu, Fe, Na and Al were analysed by flame atomic absorption spectrometry. Sodium was found in every water sample. No iron or aluminium was found in the groundwater samples, and their copper concentrations were below the limit of quantification. Among the surface water samples, two contained iron below the limit of quantification; the other samples contained no iron. The bog water samples contained very small amounts of copper, and one sample contained aluminium below the limit of quantification. In the landfill water samples, the copper concentrations, and in three samples the aluminium concentrations, were below the limit of quantification. The wastewater samples were expected to contain large amounts of nitrogen species and phosphorus; however, these occurred at high concentrations only in the bog water samples. The wastewater samples contained bromide, nitrate and fluoride at concentrations of up to more than 140 mg/L. The anion concentrations measured by capillary electrophoresis and ion chromatography correlated well with each other. Contaminated waters were found among the groundwater, landfill, wastewater and surface water samples, as well as in the water of a swimming hall therapy pool.
Abstract:
The computer is a useful tool in the teaching of upper secondary school physics, and should not have a subordinate role in students' learning process. However, computers and computer-based tools are often not available when they could serve their purpose best in the ongoing teaching. Another problem is the fact that commercially available tools are not usable in the way the teacher wants. The aim of this thesis was to try out a novel teaching scenario in a complicated subject in physics, electrodynamics. The didactic engineering of the thesis consisted of developing a computer-based simulation and training material, implementing the tool in physics teaching and investigating its effectiveness in the learning process. The design-based research method, didactic engineering (Artigue, 1994), which is based on the theory of didactical situations (Brousseau, 1997), was used as a frame of reference for the design of this type of teaching product. In designing the simulation tool a general spreadsheet program was used. The design was based on parallel, dynamic representations of the physics behind the function of an AC series circuit in both graphical and numerical form. The tool, which was furnished with possibilities to control the representations in an interactive way, was hypothesized to activate the students and promote the effectiveness of their learning. An effect variable was constructed in order to measure the students' and teachers' conceptions of learning effectiveness. The empirical study was twofold. Twelve physics students, who attended a course in electrodynamics in an upper secondary school, participated in a class experiment with the computer-based tool implemented in three modes of didactical situations: practice, concept introduction and assessment. The main goal of the didactical situations was to have students solve problems and study the function of AC series circuits, taking responsibility for their own learning process. In the teacher study, eighteen Swedish-speaking physics teachers evaluated the didactic potential of the computer-based tool and the accompanying paper-based material without using them in their physics teaching. Quantitative and qualitative data were collected using questionnaires, observations and interviews. The results of the studies showed that both the group of students and the teachers had generally positive conceptions of learning effectiveness. The students' conceptions were more positive in the practice situation than in the concept introduction situation, a setting that was more explorative. However, it turned out that the students' conceptions were also positive in the more complex assessment situation. This had not been hypothesized. A deeper analysis of data from observations and interviews showed that one of the students in each pair was more active than the other, taking more initiative and more responsibility for the student-student and student-computer interaction. These active students had strong, positive conceptions of learning effectiveness in each of the three didactical situations. The group of less active students had a weak but positive conception in the first two situations, but a negative conception in the assessment situation, thus corroborating the hypothesis ad hoc. The teacher study revealed that computers were seldom used in physics teaching and that computer programs were in short supply. The use of a computer was considered time-consuming.
As long as physics teaching with computer-based tools has to take place in special computer rooms, the use of such tools will remain limited. The affordance is enhanced when the physical dimensions as well as the performance of the computer are optimised. As a consequence, the computer then becomes a real learning tool for each pair of students, smoothly integrated into the ongoing teaching in the same space where teaching normally takes place. With more interactive support from the teacher, the computer-based parallel, dynamic representations will be efficient in promoting the students' learning process, with a focus on qualitative reasoning, an often neglected part of learning in upper secondary school physics.
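For context, the relations such a tool typically represents for a driven series RLC circuit are given below (standard textbook forms; the abstract does not reproduce the formulas, so the exact circuit elements used in the tool are an assumption):

```latex
% Series RLC circuit driven at angular frequency omega:
% impedance magnitude, phase between voltage and current, and current amplitude.
\[
  Z = \sqrt{R^{2} + \left(\omega L - \frac{1}{\omega C}\right)^{2}},
  \qquad
  \tan\varphi = \frac{\omega L - \frac{1}{\omega C}}{R},
  \qquad
  I_{0} = \frac{U_{0}}{Z}
\]
```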
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues such as performance limitations of long interconnects and integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide the synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first presented routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs via efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and bring compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented to use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve the memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate to achieve better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve the performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are also introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
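As a rough sketch of the idea behind congestion-aware adaptive routing in a 2D mesh (illustrative only, not the specific algorithms proposed in the thesis; the function and its inputs are hypothetical), a router could choose, among the output ports that move a packet closer to its destination, the one whose neighbouring router reports the lowest congestion:

```python
def route(current, dest, congestion):
    """Pick an output direction in a 2D mesh.

    current, dest : (x, y) router coordinates
    congestion    : dict mapping 'N'/'S'/'E'/'W' to the occupancy reported
                    by the neighbouring router in that direction.
    Illustrative sketch only; deadlock avoidance (e.g. turn restrictions
    or virtual channels) is deliberately left out.
    """
    x, y = current
    dx, dy = dest[0] - x, dest[1] - y
    productive = []                              # directions that reduce the distance
    if dx > 0: productive.append('E')
    if dx < 0: productive.append('W')
    if dy > 0: productive.append('N')
    if dy < 0: productive.append('S')
    if not productive:
        return 'LOCAL'                           # packet has arrived
    # Adaptivity: among productive directions, prefer the least congested link.
    return min(productive, key=lambda d: congestion[d])

# Example: packet at (1, 1) heading to (3, 2); the east link is busy.
print(route((1, 1), (3, 2), {'N': 2, 'S': 0, 'E': 7, 'W': 1}))   # -> 'N'
```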
Abstract:
The focus of the present work was on 10- to 12-year-old elementary school students' conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine whether it would be more beneficial to combine than to contrast simulation and laboratory activities in science teaching. It was argued that the status quo, in which laboratories and simulations are seen as alternative or competing methods in science teaching, is hardly an optimal solution to promote students' learning and understanding in various science domains. It was hypothesized that it would make more sense and be more productive to combine laboratories and simulations. Several explanations and examples were provided to back up the hypothesis. In order to test whether learning with the combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and simulation-laboratory combination (real and virtual circuits used simultaneously). In order to measure and compare how these environments affected students' conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments were presented in four empirical studies. Three of the studies focused on learning outcomes between the conditions and one on learning processes. Study I analyzed learning outcomes from experiment I. The aim of the study was to investigate whether it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students' conceptual understanding more than studying circuits in the simulation or laboratory environment alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students' conceptual understanding remained at an elementary level after the intervention. Study II analyzed learning outcomes from experiment II. The aim of the study was to investigate whether and how learning outcomes in simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions.
The results showed that when the students were working with the simulation alone, they were able to gain a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE). A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment. In the combination environment, explicit instruction (CE) did not seem to elicit much additional gain for students' understanding of electric circuits compared to implicit instruction (CI). Instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III analyzed, from video data, the learning processes of the 50 students who participated in experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find possible explanations for the success of the combination condition in Experiments I and II. The video data provided clear evidence of the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other, that is, one representation helped students to interpret and understand the outcomes they received from the other representation. However, there were also instances in which analogical encoding took place, that is, situations in which the slightly discrepant results between the representations 'forced' students to focus on those features that could be generalised across the two representations. No statistical differences were found in the amount of experienced cognitive conflicts and self-explanations between the simulation and combination conditions, though for self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the amount of self-explanations. Overall, the amount of cognitive conflicts and self-explanations was very low. The aim of Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of experiments I and II; the secondary aim was to explore the relationship between the learning environments and students' prior domain knowledge (low and high) in the experiments. Aggregated results of experiments I and II showed that, on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them scored also above the average of the simulation environment. Seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that overall students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge, that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge.
However, more detailed inspection of the results showed that there were considerable differences between the experiments regarding how students with low and high prior knowledge benefited from the combination: in Experiment I, especially students with low prior knowledge benefited from the combination compared to those students who used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between the simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulation along with laboratory activities to promote students' conceptual understanding of electricity. It can be concluded that, when teaching students about electricity, the students can gain better understanding when they have an opportunity to use the simulation and the real circuits in parallel than if they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with explicit instruction. The outcomes of the empirical studies can be considered the first unambiguous evidence of the (additional) benefits of combining laboratory and simulation activities in science education, as compared to learning with laboratories and simulations alone.