989 results for Limbic Circuits
Abstract:
An AM (amplitude modulation) transmitter uses one of the many modulation techniques in existence today. Signal modulation is of great importance; some examples of what it provides are:
- It eases propagation of the signal over cable or through the air.
- It organises the spectrum, allocating channels to the different information streams.
- It reduces the size of the antennas.
- It optimises the bandwidth of each channel.
- It avoids interference between channels.
- It protects the information from degradation by noise.
- It defines the quality of the transmitted information.
The main objective of this work is to build an AM transmitter using electronic components available on the market. This is done through several design procedures. A theoretical design procedure is carried out using the datasheets of the different components. A simulation-based design procedure follows, which makes it possible to test the design of the device and to work out those parts that cannot be reproduced theoretically. Finally, the device is built in practice. Among the most relevant conclusions of this work, we would highlight the importance of simulation in designing radio-frequency circuits: it has been shown that, thanks to a good simulation, the first prototype of the device worked perfectly. We would also note the importance of a proper antenna design in order to get the best performance out of the device. To conclude, building a transmitter provides a balanced grounding in electronics and telecommunications that is important for the design of communication devices.
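As an illustration of the principle behind such a device (not the thesis's actual circuit), the standard AM waveform s(t) = Ac(1 + m·x(t))cos(2π·fc·t) can be generated numerically; the Python sketch below uses arbitrary example parameters.

    # Minimal AM modulation sketch: s(t) = Ac * (1 + m*x(t)) * cos(2*pi*fc*t)
    import numpy as np

    fs = 1_000_000          # sample rate [Hz] (arbitrary for the sketch)
    fc = 100_000            # carrier frequency [Hz]
    fm = 1_000              # message tone [Hz]
    m = 0.5                 # modulation index (<= 1 to avoid overmodulation)
    Ac = 1.0                # carrier amplitude

    t = np.arange(0, 0.01, 1 / fs)
    message = np.cos(2 * np.pi * fm * t)            # x(t), normalised to [-1, 1]
    am = Ac * (1 + m * message) * np.cos(2 * np.pi * fc * t)
    # 'am' now holds the transmitted waveform; its spectrum has the carrier
    # at fc plus two sidebands at fc - fm and fc + fm.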
Abstract:
Being able to measure and record different kinds of quantities, such as pressure, force and temperature, has become a necessity for many current applications. These quantities can have very diverse origins, such as the environment, or they can be generated by mechanical or electrical systems. To acquire them, data acquisition systems are used. These systems take analogue samples from the real world and transform them into digital data that can be manipulated by an electronic system. Practically any quantity can be measured with the appropriate sensor. One quantity very often handled by data acquisition systems is temperature. Temperature acquisition systems are widespread: we find them as stand-alone systems whose purpose is to display the acquired data, or as part of control systems, providing inputs needed for correct operation and to guarantee stability, safety, and so on. This project, promoted by the company Elausa, acquires the input signal of two thermocouples. These measure the temperatures of electronic circuits placed inside Elausa's climatic chamber and subjected to different temperature conditions in order to obtain type approval for the circuit. The system must display the acquired data in real time and store them on a PC located in an office about 30 m from the room where the test is run. The system consists of an electronic circuit that acquires and conditions the output signal of the thermocouples, adapting it to the input voltage of the analogue-to-digital converter of the microcontroller integrated on the board. This information is then sent through a radio-frequency transmitter module to the PC, where the acquired data are displayed. The objectives are the following:
- Design the electronic circuit for signal acquisition and conditioning.
- Design, manufacture and assemble the printed circuit board of the acquisition board.
- Write the control program for the microcontroller.
- Write the program to display and store the data on a PC.
- The system must acquire two temperatures through thermocouples with an input range of -40 °C to +240 °C.
- The data must be transmitted via RF.
The results of the project were satisfactory and the stated objectives were met.
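The abstract does not give the ADC resolution or the conditioning gain, so the following Python sketch assumes a 10-bit converter and a linear mapping of the stated -40 °C to +240 °C range onto the ADC input, purely to illustrate the acquisition arithmetic; counts_to_temperature is a hypothetical helper, not code from the project.

    # Hypothetical scaling from a 10-bit ADC reading back to temperature,
    # assuming the conditioning stage maps -40..+240 degC linearly onto 0..Vref.
    ADC_BITS = 10
    VREF = 5.0                     # ADC reference voltage [V] (assumed)
    T_MIN, T_MAX = -40.0, 240.0    # thermocouple range from the abstract [degC]

    def counts_to_temperature(counts: int) -> float:
        """Convert a raw ADC count to degrees Celsius."""
        v = counts * VREF / (2 ** ADC_BITS - 1)       # ADC count -> voltage
        return T_MIN + (v / VREF) * (T_MAX - T_MIN)   # voltage -> temperature

    # Example: a mid-scale reading lands near the middle of the range.
    print(counts_to_temperature(511))   # ~ 99.9 degC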
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains the classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method, especially against transient faults, at this abstraction level. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
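As a concrete, minimal instance of error control coding of the kind discussed (not the specific codes of the thesis), a Hamming(7,4) code corrects any single transient bit flip on a link; the Python sketch below shows encoding, fault injection and correction.

    # Hamming(7,4): corrects any single bit flip, a stand-in for the kind of
    # error control coding used against transient faults on a link.
    def hamming74_encode(d):           # d: list of 4 data bits
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

    def hamming74_decode(c):           # c: list of 7 received bits
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3       # 0 = no error, else bit position
        if syndrome:
            c = c[:]
            c[syndrome - 1] ^= 1              # correct the flipped bit
        return [c[2], c[4], c[5], c[6]]       # recover the data bits

    word = hamming74_encode([1, 0, 1, 1])
    word[4] ^= 1                               # inject a single transient fault
    assert hamming74_decode(word) == [1, 0, 1, 1]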
Abstract:
Hydrogen peroxide and chlorine are compared as possible disinfectants for water-cooling circuits. To this purpose, samples taken from the cooling system of a steelmaking plant were treated (at 25 °C and pH values of 5.5 and 8.5) with varying amounts of the two oxidizing agents (0.0 mg/L, 2.0 mg/L and 6.0 mg/L). The results were evaluated through bacterial counting and measurement of corrosion rates on AISI 1020 carbon steel coupons. Bacterial removal and corrosion effects proved to be similar and satisfactory for both reagents.
Abstract:
BACKGROUND: The Cancer Fast-track Programme's aim was to reduce the time that elapsed between well-founded suspicion of breast, colorectal and lung cancer and the start of initial treatment in Catalonia (Spain). We sought to analyse its implementation and overall effectiveness. METHODS: A quantitative analysis of the programme was performed using data generated by the hospitals on the basis of seven fast-track monitoring indicators for the period 2006-2009. In addition, we conducted a qualitative study, based on 83 semistructured interviews with primary and specialised health professionals and health administrators, to obtain their perception of the programme's implementation. RESULTS: About half of all new patients with breast, lung or colorectal cancer were diagnosed via the fast track, though the cancer detection rate declined across the period. Mean time from detection of suspected cancer in primary care to start of initial treatment was 32 days for breast, 30 for colorectal and 37 for lung cancer (2009). Professionals involved in the implementation of the programme reported that general practitioners faced with a suspicion of cancer had changed their conduct with the aim of preventing delays. Furthermore, hospitals were found to have pursued three specific implementation strategies (top-down, consensus-based and participatory), which made for the cohesion and sustainability of the circuits. CONCLUSION: The programme has contributed to speeding up the diagnostic assessment and treatment of patients with a suspicion of cancer, and to clarifying the patient pathway between primary and specialised care.
Abstract:
FPGAs have become more powerful in recent years, while at the same time their price has dropped to a level that makes them an option for an ever wider range of applications. The topic of this bachelor's thesis was to design, and possibly implement, an embedded device that counts the pulses occurring in a signal. It would be used when measuring sparking in the bearings of an electric motor. The sparks are detected from outside the motor with a UHF antenna. The pulses picked out of the antenna signal are very fast, so exceptional speed is also required of the digital logic. For this reason the device was implemented with an FPGA rather than, for example, a microcontroller. The pulse counter was implemented on the FPGA relatively easily, and its operation was tested in practice under real conditions.
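The actual design is FPGA logic, but the counting principle can be sketched behaviourally; the Python below detects rising edges above a threshold, with both the threshold and the sample data invented for illustration.

    # Behavioural sketch of the pulse-counting logic (the real design is FPGA
    # logic; threshold and sample data here are made up for illustration).
    def count_pulses(samples, threshold=0.5):
        """Count rising edges: transitions from below to at/above threshold."""
        count = 0
        prev_high = False
        for s in samples:
            high = s >= threshold
            if high and not prev_high:    # rising edge = one pulse
                count += 1
            prev_high = high
        return count

    print(count_pulses([0.0, 0.9, 0.8, 0.1, 0.0, 0.7, 0.2]))   # -> 2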
Abstract:
In this work, noise and aromatic hydrocarbon levels of indoor and outdoor karting circuits located in Rio de Janeiro were assessed. The sampling was performed using active charcoal cartridges, followed by solvent desorption and analysis by gas chromatography with mass spectrometric detection. This study demonstrated that the karting circuits, venues for entertainment, were a major source of air pollution, with the detection of considerable amounts of these compounds (2.0 to 19.7 µg m-3 of benzene; 4.1 to 41.1 µg m-3 of toluene; 2.8 to 36.2 µg m-3 of ethylbenzene; 0.7 to 36.2 µg m-3 of xylenes) and high noise levels.
Abstract:
The computer is a useful tool in the teaching of upper secondary school physics, and should not have a subordinate role in students' learning process. However, computers and computer-based tools are often not available when they could serve their purpose best in the ongoing teaching. Another problem is the fact that commercially available tools are not usable in the way the teacher wants. The aim of this thesis was to try out a novel teaching scenario in a complicated subject in physics, electrodynamics. The didactic engineering of the thesis consisted of developing a computer-based simulation and training material, implementing the tool in physics teaching and investigating its effectiveness in the learning process. The design-based research method, didactic engineering (Artigue, 1994), which is based on the theory of didactical situations (Brousseau, 1997), was used as a frame of reference for the design of this type of teaching product. In designing the simulation tool a general spreadsheet program was used. The design was based on parallel, dynamic representations of the physics behind the function of an AC series circuit in both graphical and numerical form. The tool, which was furnished with possibilities to control the representations in an interactive way, was hypothesized to activate the students and promote the effectiveness of their learning. An effect variable was constructed in order to measure the students' and teachers' conceptions of learning effectiveness. The empirical study was twofold. Twelve physics students, who attended a course in electrodynamics in an upper secondary school, participated in a class experiment with the computer-based tool implemented in three modes of didactical situations: practice, concept introduction and assessment. The main goal of the didactical situations was to have students solve problems and study the function of AC series circuits, taking responsibility for their own learning process. In the teacher study eighteen Swedish-speaking physics teachers evaluated the didactic potential of the computer-based tool and the accompanying paper-based material without using them in their physics teaching. Quantitative and qualitative data were collected using questionnaires, observations and interviews. The result of the studies showed that both the group of students and the teachers had generally positive conceptions of learning effectiveness. The students' conceptions were more positive in the practice situation than in the concept introduction situation, a setting that was more explorative. However, it turned out that the students' conceptions were also positive in the more complex assessment situation. This had not been hypothesized. A deeper analysis of data from observations and interviews showed that one of the students in each pair was more active than the other, taking more initiative and more responsibility for the student-student and student-computer interaction. These active students had strong, positive conceptions of learning effectiveness in each of the three didactical situations. The group of less active students had a weak but positive conception in the first two situations, but a negative conception in the assessment situation, thus corroborating the hypothesis ad hoc. The teacher study revealed that computers were seldom used in physics teaching and that computer programs were in short supply. The use of a computer was considered time-consuming.
As long as physics teaching with computer-based tools has to take place in special computer rooms, the use of such tools will remain limited. The affordance is enhanced when the physical dimensions as well as the performance of the computer are optimised. As a consequence, the computer then becomes a real learning tool for each pair of students, smoothly integrated into the ongoing teaching in the same space where teaching normally takes place. With more interactive support from the teacher, the computer-based parallel, dynamic representations will be efficient in promoting the learning process of the students, with a focus on qualitative reasoning, an often neglected part of learning in upper secondary school physics.
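The physics content the tool represents can be condensed into a few lines; assuming example component values, the following Python computes the impedance, phase and current of an RLC series circuit, the quantities the simulation displays in graphical and numerical form.

    # Numeric core of an AC (RLC) series circuit, the kind of relationship the
    # spreadsheet simulation displays; component values are arbitrary examples.
    import math

    R, L, C = 100.0, 0.1, 1e-6     # ohms, henries, farads (assumed values)
    f = 500.0                      # frequency [Hz]
    U = 10.0                       # RMS source voltage [V]

    XL = 2 * math.pi * f * L                    # inductive reactance
    XC = 1 / (2 * math.pi * f * C)              # capacitive reactance
    Z = math.hypot(R, XL - XC)                  # impedance magnitude
    phi = math.degrees(math.atan2(XL - XC, R))  # phase of voltage vs. current
    I = U / Z                                   # RMS current

    print(f"Z = {Z:.1f} ohm, phase = {phi:.1f} deg, I = {I*1000:.2f} mA")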
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high performance, area and energy efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs via efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and bring compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated into the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate to achieve better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of 3D ICs and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
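A minimal sketch of congestion-aware output selection in a 2D mesh, mirroring the idea rather than the thesis's exact algorithms: among the distance-reducing directions, the packet is forwarded towards the least congested neighbour (Python, with invented congestion values).

    # Sketch of congestion-aware output selection in a 2D mesh: among the
    # minimal (distance-reducing) directions, pick the least congested
    # neighbour. Illustrative only, not the thesis's exact algorithm.
    def select_output(cur, dst, congestion):
        """cur, dst: (x, y) router coordinates.
        congestion: dict mapping direction -> occupancy of that neighbour."""
        x, y = cur
        dx, dy = dst[0] - x, dst[1] - y
        candidates = []
        if dx > 0: candidates.append("east")
        if dx < 0: candidates.append("west")
        if dy > 0: candidates.append("north")
        if dy < 0: candidates.append("south")
        if not candidates:
            return "local"                      # packet has arrived
        # Adaptivity: route along the minimal direction with least congestion.
        return min(candidates, key=lambda d: congestion[d])

    print(select_output((1, 1), (3, 2), {"east": 4, "north": 1}))  # -> north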
Abstract:
The focus of the present work was on 10- to 12-year-old elementary school students' conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine whether it would be more beneficial to combine than to contrast simulation and laboratory activities in science teaching. It was argued that the status quo, where laboratories and simulations are seen as alternative or competing methods in science teaching, is hardly an optimal solution to promote students' learning and understanding in various science domains. It was hypothesized that it would make more sense and be more productive to combine laboratories and simulations. Several explanations and examples were provided to back up the hypothesis. In order to test whether learning with the combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and simulation-laboratory combination (real and virtual circuits used simultaneously). In order to measure and compare how these environments affected students' conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments were presented in four empirical studies. Three of the studies focused on learning outcomes between the conditions and one on learning processes. Study I analyzed learning outcomes from experiment I. The aim of the study was to investigate whether it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students' conceptual understanding more than studying circuits in the simulation or laboratory environment alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students' conceptual understanding remained at an elementary level after the intervention. Study II analyzed learning outcomes from experiment II. The aim of the study was to investigate if and how learning outcomes in simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions.
The results showed that when the students were working with the simulation alone, they were able to gain a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE). A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment. In the combination environment explicit instruction (CE) did not seem to elicit much additional gain in students' understanding of electric circuits compared to implicit instruction (CI). Instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III analyzed, on the basis of video data, the learning processes of the 50 students that participated in experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find possible explanations for the success of the combination condition in experiments I and II. The video data provided clear evidence of the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other; that is, one representation helped students to interpret and understand the outcomes they received from the other representation. However, there were also instances in which analogical encoding took place, that is, situations in which slightly discrepant results between the representations 'forced' students to focus on those features that could be generalised across the two representations. No statistical differences were found in the amount of experienced cognitive conflicts and self-explanations between the simulation and combination conditions, though in self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the amount of self-explanations. Overall, the amount of cognitive conflicts and self-explanations was very low. The aim of Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of experiments I and II; the secondary aim was to explore the relationship between the learning environments and students' prior domain knowledge (low and high) in the experiments. Aggregated results of experiments I and II showed that on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them scored also above the average of the simulation environment. Seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that overall students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge; that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge.
However, more detailed inspection of the results showed that there were considerable differences between the experiments regarding how students with low and high prior knowledge benefitted from the combination: in Experiment I, especially students with low prior knowledge benefitted from the combination as compared to those students that used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulation along with laboratory activities to promote students’ conceptual understanding of electricity. It can be concluded that when teaching students about electricity, the students can gain better understanding when they have an opportunity to use the simulation and the real circuits in parallel than if they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with the explicit instruction. The outcomes of the empirical studies can be considered as the first unambiguous evidence on the (additional) benefits of combining laboratory and simulation activities in science education as compared to learning with laboratories and simulations alone.
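For concreteness, what a 'virtual circuit' computes for the simple DC circuits used in the experiments reduces to Ohm's law; the Python sketch below, with assumed component values, returns the same quantities students measure on the real circuits.

    # What a 'virtual circuit' computes for a simple DC series circuit:
    # the same quantities students measure on the real one (values assumed).
    def series_circuit(voltage, resistances):
        """Return current and the voltage drop over each resistor (Ohm's law)."""
        total_r = sum(resistances)
        current = voltage / total_r
        return current, [current * r for r in resistances]

    i, drops = series_circuit(4.5, [10.0, 20.0])   # 4.5 V battery, two resistors
    print(i, drops)                                # 0.15 A, drops of 1.5 V and 3.0 V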
Abstract:
This master's thesis specifies an online production optimisation method for a power plant firing biofuel. The specification work is part of the further development project of MW Power's MultiPower CHP power plant concept. From among the various existing optimisation approaches, a suitable method is selected, based on a plant model and a cost function, whose results are passed to the automation system in the form of setpoints for the PID controllers. The energy and mass balances of the plant are calculated from the process measurements, and the results are used as the input data for the next optimisation step. The objective function of the optimisation is a cost function whose terms are the revenues and costs arising from operating the power plant. The process is optimised within the limits given to the controllers so that the total margin is maximised. As the plant accumulates operating hours and historical data, the optimisation can be sped up by statistically searching the historical data for a moment whose conditions match the current situation. The margin of that historical moment is compared with the margin obtained from optimising the cost function, and the setpoints computed by whichever method gives the better margin are taken into use for controlling the process. If neither the cost-function calculation nor the history-based search yields an improving margin, their setpoints are not applied. Instead, the optimum is sought with a deterministic optimisation algorithm that searches the neighbourhood of the current operating point for controller setpoints giving a better margin. The control system can also be implemented in a predictive form. In the practical part of the work, the power plant model is created with two different modelling programs, one describing the boiler and the other the power plant process. The process values obtained from the modelling are used as input data in calculating the operating margin, which is computed from the cost function. The largest revenues are related to the sale of electricity and heat and to the production subsidy, and the largest costs are related to repayment of the investment and the purchase of fuel. A sensitivity analysis is performed on the cost function, following the change in margin as the technical values of the process are varied. The results are compared with the results of verification measurements carried out at a reference power plant, and it is observed that the results are not fully consistent. The differences are due both to shortcomings of the modelling and to the rather short observation periods of the measurements. The practical implementation of the automated optimisation system is initiated by specifying the optimisation method to be adopted, the related control loops and the required input data. The project will continue with the programming, testing and tuning of the system in a real power plant environment, and later with the implementation of predictive control.
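A minimal Python sketch of the two central ingredients, as an illustration only: a margin (objective) function of the form revenues minus costs, and a deterministic neighbourhood search over controller setpoints. The plant response and prices below are invented placeholders, not the thesis's model.

    # Margin objective and deterministic neighbourhood search (illustrative).
    def margin(setpoints, plant_model, prices):
        """Total margin = revenues (electricity, heat) - costs (fuel etc.)."""
        power, heat, fuel = plant_model(setpoints)
        revenue = power * prices["electricity"] + heat * prices["heat"]
        cost = fuel * prices["fuel"]
        return revenue - cost

    def local_search(setpoints, bounds, plant_model, prices, step=0.5):
        """Hill-climb: try +/- step on each setpoint, keep improving moves."""
        best = list(setpoints)
        improved = True
        while improved:
            improved = False
            for i in range(len(best)):
                for delta in (step, -step):
                    cand = list(best)
                    cand[i] = min(max(cand[i] + delta, bounds[i][0]), bounds[i][1])
                    if margin(cand, plant_model, prices) > margin(best, plant_model, prices):
                        best, improved = cand, True
        return best

    # Toy plant response for demonstration only: (power, heat, fuel) per setpoints.
    toy = lambda sp: (sp[0] * 2.0, sp[0] * 3.0, sp[0] * 1.2 + sp[1])
    prices = {"electricity": 50.0, "heat": 30.0, "fuel": 20.0}
    print(local_search([5.0, 1.0], [(0, 10), (0, 5)], toy, prices))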
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (indeed, a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions still remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way, instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
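The current-voltage behaviour analysed in the first part can be illustrated with the common HP-style linear dopant drift model; the Python sketch below uses textbook parameter values, not data fitted to any real device.

    # HP-style linear dopant drift model of a memristor (textbook parameters):
    # memristance M = Ron*w + Roff*(1 - w), with w the normalised state.
    import math

    Ron, Roff = 100.0, 16000.0     # on/off resistance [ohm]
    D = 10e-9                      # device thickness [m]
    mu = 1e-14                     # dopant mobility [m^2/(V*s)]
    w = 0.5                        # normalised state variable, 0..1
    dt, f, V0 = 1e-5, 1.0, 1.0     # time step [s], drive frequency [Hz], amplitude [V]

    for k in range(100_000):       # one second of a 1 Hz sinusoidal drive
        v = V0 * math.sin(2 * math.pi * f * k * dt)
        M = Ron * w + Roff * (1 - w)       # state-dependent resistance
        i = v / M                          # Ohm's law at this instant
        w += mu * Ron / D**2 * i * dt      # linear drift of the doped region
        w = min(max(w, 0.0), 1.0)          # state stays within the device

    # Plotting i against v over the cycle would show the characteristic
    # pinched hysteresis loop of a memristor.
    print(f"final memristance: {Ron * w + Roff * (1 - w):.0f} ohm")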
Abstract:
This work investigates how long a motor cable can be used with a voltage-source (DC-link) frequency converter while the specified boundary conditions are still met. The goal is to provide sales, marketing, product management and product maintenance with information on how the frequency converter operates with motor cables longer than the manufacturer recommends. The motor cable lengths studied were 175 to 1025 metres, and the devices studied had nominal output currents of 2.4 to 25 A. The subject is treated from the viewpoint of voltage reflections. In addition, the effect of motor cable length on the heating of the various frequency-converter components is studied. The functionality of the frequency converter is assessed by means of motor direction reversals, and its safe operation by short-circuit tests. For the frequency converters studied, it is possible to use motor cables longer than the manufacturer's recommendation. The voltage reflections causing overvoltages at the motor terminals did not produce peak voltages exceeding the limit values with the equipment configurations studied. The temperature rise of the components measured in the frequency converter was also moderate or even small. The motor control remained functional with the longer motor cables, although the motor torque weakened as the motor cable length was increased. The current measurement also worked well with long cables, producing a fault trip in every short-circuit test performed. The noise levels of the motor and the frequency converter rose as the motor cable length was increased, although the motor ran smoothly and without interruptions. The motor cable length can be increased to 325 metres in all the devices studied without any of the studied characteristics deteriorating substantially. Cables of 525 metres can still be used, but torque production is then already weaker.
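The mechanism behind the terminal overvoltages can be summarised with two textbook formulas, the reflection coefficient Γ = (ZL − Z0)/(ZL + Z0) and the critical cable length lc = t_rise · v / 2; the Python sketch below uses assumed impedance and rise-time values, not the thesis's measurements.

    # Rule-of-thumb numbers behind motor-terminal overvoltage (values assumed):
    # a voltage edge reflects at the motor when the cable is longer than the
    # critical length l_c = t_rise * v / 2.
    def reflection_coefficient(z_motor, z_cable):
        """Gamma = (ZL - Z0) / (ZL + Z0); near +1 for a typical motor."""
        return (z_motor - z_cable) / (z_motor + z_cable)

    def critical_length(t_rise, v_prop=150e6):
        """Cable length [m] above which the full reflection develops;
        v_prop is an assumed propagation speed of ~0.5c in the cable."""
        return t_rise * v_prop / 2

    gamma = reflection_coefficient(z_motor=2000.0, z_cable=80.0)
    print(f"reflection coefficient: {gamma:.2f}")                    # ~0.92
    print(f"critical length for 1 us rise time: {critical_length(1e-6):.0f} m")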
Abstract:
A battery-driven autonomous mobile robot with two traction wheels and a steering wheel is being developed. The robot's central control is handled by an IPC, which controls every function of security, steering, positioning, localization and driving. Each traction wheel is driven by a DC motor with an independent control system made up of a chopper, an encoder and a microcomputer. The IPC transmits the velocity values and acceleration ramp references to the PIC microcontrollers. As each traction wheel is controlled independently, different speed values can be obtained for each wheel, which facilitates direction and drive changes. Two different strategies for speed control were implemented: one works with PID, and the other with fuzzy logic. There were no changes in circuits and feedback control, except for the PIC microcontroller software. Comparing the two speed control strategies, the results were equivalent; however, the PID control was more difficult to develop and implement.
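One of the two implemented strategies can be sketched compactly; the following discrete-time PID, with invented gains and timing, shows the kind of wheel-speed control law that on the real robot runs on the PIC and drives the chopper.

    # Minimal discrete-time PID for a wheel speed loop (gains and timing are
    # invented for the sketch; the thesis ran this on a PIC, not in Python).
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measured):
            error = setpoint - measured
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            # The output would set the chopper's duty cycle on the real robot.
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.01)
    duty = pid.update(setpoint=1.0, measured=0.6)   # wheel speeds in m/s
    print(duty)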
Abstract:
Purpose: The aim of this thesis is to analyse theoretically how the institutionalisation of competitive tendering, governance and budgetary policies cannot be taken for granted to lead to accountability among institutional actors. The nature of an institutionalised management accounting policy, and its relevance as a source of power in organisational decision making and in negotiating inter-organisational relationships, are also analysed.
Practical motivation: The practical motivation of the thesis is to show how practitioners and policy makers can institutionalise changes which improve the power of management accounting and control systems as a mechanism of accountability among institutional actors and in negotiating relationships with other organisations.
Theoretical motivation and conceptual approach: The theoretical motivation of the thesis is to extend the institutional framework of management accounting change proposed by Burns and Scapens (2000) by using the theories of critical realism, communicative action and negotiated order, and the framework of circuits of power. The Burns and Scapens framework needs further theorisation to analyse the relationship between the institutionalisation of management accounting and accountability, and the relevance of management accounting information in negotiating inter-organisational relationships.
Methodology and field studies: Field research took place in public and not-for-profit health care organisations and a municipality in Finland from 2008 to 2013. Data were gathered by document analysis, interviews, participation in meetings and observations.
Findings: The findings are presented in four essays, which show that the institutionalisation of competitive tendering, governance and budgetary policies cannot be taken for granted to lead to accountability among institutional actors. The ways in which institutional actors think and act can be influenced by other institutional mechanisms, such as inter-organisational circuits of power and intra-organisational governance policies, independent of the institutional change process. The relevance of institutionalised management accounting policies in negotiating relationships between two or more organisations depends on the processes and contexts through which institutional actors use management accounting information as a tool of communication, mutual understanding and power.
Research limitations/implications: The theoretical framework used can be applied validly in other studies. The empirical findings cannot be generalised directly to organisations other than those analysed.
Practical implications: Competitive tendering and budgetary policies can be institutionalised to shape the actions of institutional actors within an organisation. To lead to accountability, practitioners and policy makers should implement governance policies that increase the use of management accounting information in institutional actors' thinking, actions and responsibility for their actions. To reach a negotiated order between organisations, institutionalised management accounting policies should be used as one of the tools of communication aimed at reaching mutual agreement among institutional actors.