926 results for controlled active front end rectifier


Relevance:

100.00%

Publisher:

Abstract:

ABSTRACT This dissertation focuses on new technology commercialization, innovation and new business development. Industry-based novel technology may achieve commercialization through its transfer to a large research laboratory acting as a lead user and technical partner, providing the new technology with complementary assets and meaningful initial use in social practice. The research lab benefits from the new technology and innovation through major performance improvements and cost savings. Such mutually beneficial collaboration between the lab and the firm does not require any additional administrative effort or funds from the lab, yet it requires openness to technologies and partner companies that may not previously be known to the lab. Labs achieve the benefits by applying a proactive procurement model that promotes active pre-tender search for new technologies and pre-tender testing and piloting of these technological options. The collaboration works best when based on the development needs of both parties: first, the lab has significant engineering activity with well-defined technological needs, and second, the firm has advanced prototype technology that still needs further testing, piloting, and an initial market and references to achieve a market breakthrough. The empirical evidence of the dissertation is based on a longitudinal multiple-case study with the European Laboratory for Particle Physics. The key theoretical contribution of this study is that large research labs, including basic research labs, play an important role in product and business development toward the end, rather than the front-end, of the innovation process. This also implies that product-orientation and business-orientation can contribute to basic research. The study provides practical managerial and policy guidelines on how to initiate and manage mutually beneficial lab-industry collaboration and proactive procurement.

Relevance:

100.00%

Publisher:

Abstract:

As the standards limiting grid harmonic currents tighten, high-power power-electronic equipment in particular must move to active, transistor-controlled rectifiers, which correct the power factor and thereby reduce the interference currents coupled into the grid. This Master's thesis presents the differences between the most common three-phase rectifier topologies and compares the performance of a half-controlled three-switch topology and a six-switch topology in terms of power factor and harmonic distortion, in a frequency converter of the 16-kilowatt power class. A simulation model based on scalar control was built for the rectifiers. As the outcome of the work, simulation results for the harmonic levels and the power factor are presented. The work is part of a joint project between the Applied Electronics Laboratory of Lappeenranta University of Technology and Vacon Oyj.
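The comparison above rests on two figures of merit, total harmonic distortion (THD) and power factor. As a rough illustrative sketch (not the thesis's scalar-control simulation model), both can be estimated from a sampled line-current waveform via an FFT:

```python
import numpy as np

def thd_and_power_factor(current, fs, f0=50.0):
    """Estimate total harmonic distortion (THD) and the distortion power
    factor of a sampled line-current waveform.

    current : 1-D array of current samples
    fs      : sampling rate in Hz
    f0      : fundamental frequency in Hz

    Assumes the record spans an integer number of fundamental periods,
    so the fundamental and its harmonics fall exactly on FFT bins.
    """
    n = len(current)
    spectrum = np.abs(np.fft.rfft(current)) / n
    k0 = int(round(f0 * n / fs))            # bin index of the fundamental
    fundamental = spectrum[k0]
    # Quadrature sum of all higher harmonics (bins at multiples of k0)
    harmonics = np.sqrt(np.sum(spectrum[2 * k0::k0] ** 2))
    thd = harmonics / fundamental
    # Distortion power factor, assuming an undistorted supply voltage
    # and no displacement between voltage and current fundamentals
    pf = 1.0 / np.sqrt(1.0 + thd ** 2)
    return thd, pf
```

For example, a 50 Hz current with a 20% fifth harmonic gives a THD of 0.2 and a distortion power factor of about 0.98.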

Relevance:

100.00%

Publisher:

Abstract:

The aim of this work was to study innovation and organizational innovation capability, the background factors of innovation capability, and the management of the front end of the innovation process (Fuzzy Front End, FFE) and of the decision-making that takes place there. A further aim was to design an operating model for the front end of the innovation process to clarify front-end activity, and to give recommendations and proposals for action. The theoretical part of the study was carried out as a literature review. The empirical part was conducted as a case analysis, in the form of personnel interviews and action research within a company. Operating models have been identified for the front end of the innovation process that clarify and streamline its phases: opportunity identification, opportunity analysis, idea generation, idea selection, and concept and technology development. Alongside the innovation process runs a decision-making process, for which clear decision points and criteria for proceeding are identified. Both internal stakeholders, such as personnel, and external ones, such as customers, suppliers and network partners, take part in the innovation and decision-making processes at different stages. In addition, the operation of the innovation process is affected by management support and commitment, the participants' capacity for creativity, and other background factors of innovation capability. All these factors must be taken into account when designing a front-end model for the innovation process. The study was carried out for the needs of a telecommunications company. The company has a suggestion scheme in place, but it is not felt to provide enough ideas for the company's product development needs. The innovation potential of the company's personnel is high, and the company wants to exploit it better by designing a standardized operating model, suited to the company's use, that guides front-end activity and involves the personnel and other partners, such as customers.
As proposals for action and recommendations, an operating model for managing the front end of the innovation process is presented. The proposed model defines the phases, methods, decision-making and responsibilities. The model is designed so that it can be combined with the company's existing model for the back end of the innovation process, i.e. the execution of product development projects.

Relevance:

100.00%

Publisher:

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts.
G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al., 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine. This is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run.
(5) The scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid service.
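The abstract does not give G-Rex's actual resource URLs or response format, so the following Python sketch is purely hypothetical; it only illustrates the kind of minimal, lightweight client that a REST design like G-Rex's makes possible, where a job's status is just another HTTP resource that can be polled:

```python
import json
from urllib.parse import urljoin

# Hypothetical resource layout and JSON status document: these names are
# illustrative only, not G-Rex's real API.

def status_url(service_base, instance_id):
    """URL of the status resource for one running service instance."""
    return urljoin(service_base, f"instances/{instance_id}/status")

def parse_status(body):
    """Extract the run state and exit code from a JSON status document."""
    doc = json.loads(body)
    return doc["state"], doc.get("exitCode")

def wait_for_completion(fetch, service_base, instance_id):
    """Poll the status resource until the remote run finishes.

    `fetch` is any callable mapping a URL to a response body, so the
    sketch can be exercised without a live server, or backed by
    urllib.request.urlopen against a real one.
    """
    while True:
        state, exit_code = parse_status(
            fetch(status_url(service_base, instance_id)))
        if state == "FINISHED":
            return exit_code
```

Because the interface is plain HTTP plus a small structured document, an equivalent client could be written in a few lines of any language, which is exactly the property the abstract attributes to the REST style.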

Relevance:

100.00%

Publisher:

Abstract:

The current paper presents a study conducted at At-Bristol Science Centre, UK. It is a front-end evaluation for the "Live Science Zone" at At-Bristol, which will be built during the autumn of 2004. The zone will provide a facility for programmed events and shows, non-programmed investigative activities, and the choice of passive or active exploration of current scientific topics. The main aim of the study is to determine what kinds of techniques should be used in the Live Science Zone. The objectives are to explore what has already been done at At-Bristol and at other science centres, and to identify successful devices. The secondary aim is to map what sorts of topics visitors are actually interested in debating. The methods used in the study are in-depth qualitative interviews with professionals working in the field of science communication in Europe and North America, and questionnaires answered by visitors to At-Bristol. The results show that there are some gaps between the intentions of the professionals and the opinions of the visitors, in terms of the opportunities and willingness for dialogue in science centre activities. The most popular topic was the Future, and the most popular device was Film.

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

The ongoing innovation in the microwave transistor technologies used to implement microwave circuits must be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the technology's potential. Once the technology for a particular application has been chosen, the circuit designer has few degrees of freedom in carrying out the design; in most cases, owing to technological constraints, foundries develop and provide customized processes optimized for a specific performance target such as power, low noise, linearity or bandwidth. For these reasons circuit design is always a compromise, a search for the best trade-off between the desired performances. This approach becomes crucial in the design of microwave systems for satellite applications: the tight space constraints demand the best performance under electrical and thermal conditions de-rated with respect to the maximum ratings of the technology used, in order to ensure adequate reliability. In particular, this work concerns one of the most critical components in the front-end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and therefore the element that weighs most heavily on the space, weight and cost of the telecommunication apparatus; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many papers and publications present different methods for the design of power amplifiers, showing that very good levels of output power, efficiency and gain can be obtained.
Starting from existing knowledge, the goal of the research summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account on the same footing as power and efficiency. After a review of existing power amplifier design theory, the first section of this work describes the effectiveness of a methodology based on accurate control of the dynamic load line and its shaping, explaining all the steps in the design of two different kinds of high power amplifiers. Taking the trade-off between the main performances and reliability as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the load line at the intrinsic terminals of the selected active device. The methodology proposed in this first part assumes that the designer has an accurate electrical model of the device available; the variety of publications on this subject shows how difficult it is to build a CAD model able to account for all the non-ideal phenomena that occur when the amplifier operates at such high frequencies and power levels. For this reason, and especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design based on the experimental characterization of the intrinsic load line by means of a low-frequency, high-power measurement bench. Thanks to the possibility of carrying out my Ph.D.
in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programmes commissioned by space agencies, with the aim of supporting technology transfer from universities to industry and of promoting science-based entrepreneurship. For these reasons the proposed design methodology is explained on the basis of many experimental results.
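As context for the load-line discussion above, the classic textbook class-A estimate relates the optimum load resistance and the maximum linear output power to the device's supply voltage, knee voltage and maximum current. This is a first-cut sketch of those standard relations, not the dissertation's intrinsic load-line methodology:

```python
def class_a_load_line(v_supply, v_knee, i_max):
    """Textbook class-A load-line estimate at the intrinsic terminals.

    v_supply : DC supply (drain bias) voltage in volts
    v_knee   : knee voltage of the device in volts
    i_max    : maximum drain current in amperes

    Returns (r_opt, p_max): the optimum load resistance in ohms and the
    maximum linear output power in watts.
    """
    v_swing = v_supply - v_knee      # usable peak voltage swing
    r_opt = 2.0 * v_swing / i_max    # slope of the optimum load line
    # Peak voltage v_swing and peak current i_max/2 give an average
    # sinusoidal power of v_swing * (i_max/2) / 2
    p_max = v_swing * i_max / 4.0
    return r_opt, p_max
```

For a hypothetical device with a 28 V supply, 3 V knee and 2 A maximum current, this gives a 25-ohm optimum load and 12.5 W of maximum linear output power; the dissertation's point is precisely that such idealized values must then be traded off against thermal and reliability de-rating.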

Relevance:

100.00%

Publisher:

Abstract:

Several activities were carried out during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic outputs of a PMT (Photo-Multiplier Tube). The low-power analog acquisition makes it possible to sample multiple channels of the PMT simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
The PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and then digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to equipment lent by the Roma University group and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which were able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. The chip is a matrix of 4096 active pixel sensors with deep N-well implantations, meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN in Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. I then worked on the DAQ software to implement a proper Slow Control interface for the APSEL-4D. Several APSEL-4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 um showed an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, with good results consistent with the pitch/sqrt(12) formula. The intrinsic MAPS resolution was extracted from the width of the residual plot, taking the multiple-scattering effect into account.
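The pitch/sqrt(12) figure quoted above is the standard expected resolution of a position measurement digitized in uniform bins of one pixel pitch (the RMS of a uniform distribution over one bin). A quick numeric check (the 50 um pitch below is illustrative, not the APSEL-4D pitch):

```python
import math

def binary_pixel_resolution(pitch):
    """Expected single-hit position resolution of a binary-readout pixel
    detector: the RMS of a uniform distribution over one pitch,
    i.e. pitch / sqrt(12). Same units as `pitch`."""
    return pitch / math.sqrt(12.0)
```

So a hypothetical 50 um pitch would give about 14.4 um of intrinsic resolution, before multiple-scattering and telescope effects are unfolded as described above.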

Relevance:

100.00%

Publisher:

Abstract:

Within this thesis a new double-laser-pulse pumping scheme for plasma-based, transient collisionally excited soft x-ray lasers (SXRL) was developed, characterized and utilized for applications. SXRL operation from ~50 up to ~200 electron volts was demonstrated using this concept. As a central technical tool, a special Mach-Zehnder interferometer in the chirped pulse amplification (CPA) laser front-end was developed for the generation of fully controllable double pulses to optimally pump SXRLs. This Mach-Zehnder device is fully controllable and enables the creation of two CPA pulses of different durations and variable energy balance with an adjustable time delay. Besides SXRL pumping, the double-pulse configuration was applied to determine the B-integral of the CPA laser system by amplifying short-pulse replicas in the system, followed by an analysis in the time domain. The measurement of B-integral values in the 0.1 to 1.5 radian range, limited only by the attainable laser parameters, proved to be a promising tool for characterizing nonlinear effects in CPA laser systems. Contributing to the issue of SXRL pumping, the double pulse was configured to optimally produce the gain medium for SXRL amplification. The focusing geometry of the two collinear pulses, incident on the target at the same grazing angle, significantly improved the generation of the active plasma medium: on the one hand because the exact overlap of the two pulses on the target is intrinsically guaranteed, and on the other hand because grazing-incidence pre-pulse plasma generation allows SXRL operation at higher electron densities, enabling higher gain in longer-wavelength SXRLs and higher efficiency in shorter-wavelength SXRLs.
The observed gain enhancement was confirmed by plasma hydrodynamic simulations. The first use of double short-pulse single-beam grazing-incidence pumping for SXRLs below 20 nanometers, at the PHELIX laser facility in Darmstadt (Germany), resulted in the reliable operation of a nickel-like palladium SXRL at 14.7 nanometers with a pump energy threshold strongly reduced to less than 500 millijoules. With the adaptation of the concept, named double-pulse single-beam grazing-incidence pumping (DGRIP), and the transfer of this technology to the LASERIX laser facility in Palaiseau (France), improved efficiency and stability of table-top high-repetition-rate soft x-ray lasers at wavelengths below 20 nanometers were demonstrated. With a total pump laser energy below 1 joule on the target, 2 microjoules of nickel-like molybdenum soft x-ray laser emission at 18.9 nanometers were obtained at a 10 hertz repetition rate, proving the attractiveness of the scheme for high-average-power operation. An easy and rapid alignment procedure met the requirements of a sophisticated installation, and the highly stable output satisfied the need for a reliable, strong SXRL source. The qualities of the DGRIP scheme were confirmed in irradiation runs on user samples with over 50,000 shots, corresponding to a deposited energy of ~50 millijoules. The generation of double pulses with energies up to ~120 joules enabled the transfer to shorter-wavelength SXRL operation at the PHELIX laser facility. DGRIP proved to be a simple and efficient method for the generation of soft x-ray lasers below 10 nanometers. Nickel-like samarium soft x-ray lasing at 7.3 nanometers was achieved at a low total pump energy threshold of 36 joules, confirming the suitability of the pumping scheme. Reliable and stable SXRL operation was demonstrated thanks to the single-beam pumping geometry, despite the large optical apertures.
The soft x-ray lasing of nickel-like samarium was an important milestone for the feasibility of applying the pumping scheme at the higher pump pulse energies necessary to reach soft x-ray laser wavelengths in the water window. The reduction of the total pump energy below 40 joules for short-wavelength lasing at 7.3 nanometers now fulfils the requirements for installation at the high-repetition-rate laser facility LASERIX.

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes the development of a project carried out during an internship at SMS.it, a company based in Bologna specializing in the telephony sector. The company commissioned me and my colleague Daniele Sciuto to implement a cross-platform smartphone application and its server. The company provided us with the project specifications and followed us through all phases of the development. The application is designed to offer users access to discounted call rates, whose advantages are most noticeable on international calls. These rates are made possible by agreements between the company and various telephony operators. The first chapter of this work analyses what we were asked to build, the project specifications given to us by the company, and the constraints we had to respect. The second chapter describes in detail the design of the application's individual features and the relationships between the front-end and the back-end. The technologies needed for the implementation, and how they are used in the application, are then analysed. As requested by the company, some implementation details have been omitted to protect trade secrets; nevertheless, a complete overview of what was built is provided. Finally, the resulting application is described qualitatively, together with how it meets the required specifications.

Relevance:

100.00%

Publisher:

Abstract:

Cytoplasmic dynein in filamentous fungi accumulates at microtubule plus-ends near the hyphal tip, which is important for minus-end-directed transport of early endosomes. It was hypothesized that dynein is switched on at the plus-end by cargo association. Here, we show in Aspergillus nidulans that kinesin-1-dependent plus-end localization is not a prerequisite for dynein ATPase activation. First, the Walker A and Walker B mutations in the dynein heavy chain AAA1 domain, implicated in blocking different steps of the ATPase cycle, cause different effects on dynein localization to microtubules, arguing against the suggestion that the ATPase is inactive before arriving at the plus-end. Second, dynein from kinA (kinesin-1) mutant cells has normal ATPase activity despite the absence of dynein plus-end accumulation. In kinA hyphae, dynein localizes along microtubules and does not colocalize with abnormally accumulated early endosomes at the hyphal tip. This is in contrast to the colocalization of dynein and early endosomes in the absence of NUDF/LIS1. However, the Walker B mutation allows dynein to colocalize with the hyphal-tip-accumulated early endosomes in the kinA background. We suggest that the normal ability of dynein to interact with microtubules as an active minus-end-directed motor demands kinesin-1-mediated plus-end accumulation for effective interactions with early endosomes.

Relevance:

100.00%

Publisher:

Abstract:

Generation of coherent short-wavelength radiation across a plasma column is dramatically improved under traveling-wave excitation (TWE). TWE is optimized when its propagation speed is close to the speed of light, which implies small-angle target irradiation. Yet short-wavelength lasing needs large irradiation angles in order to increase the optical penetration of the pump into the plasma core. Pulse-front back-tilt is considered as a way to overcome this trade-off. In fact, the TWE speed depends on the pulse-front slope (the envelope of the amplitude), whereas the optical penetration depth depends on the wave-front slope (the envelope of the phase). Pulse-front tilt obtained by means of compressor misalignment was found to be effective only if coupled with a high-magnification front-end imaging/focusing component. It is concluded that speed matching should be accomplished with minimal compressor misalignment and maximal imaging magnification.

Relevance:

100.00%

Publisher:

Abstract:

The Future Internet is expected to be composed of a mesh of interoperable Web services accessed from all over the Web. This approach has not yet caught on since global user-service interaction is still an open issue. Successful composite applications rely on heavyweight service orchestration technologies that raise the bar far above end-user skills. The weakness lies in the abstraction of the underlying service front-end architecture rather than the infrastructure technologies themselves. In our opinion, the best approach is to offer end-to-end composition from user interface to service invocation, as well as an understandable abstraction of both building blocks and a visual composition technique. In this paper we formalize our vision with regard to the next-generation front-end Web technology that will enable integrated access to services, contents and things in the Future Internet. We present a novel reference architecture designed to empower non-technical end users to create and share their own self-service composite applications. A tool implementing this architecture has been developed as part of the European FP7 FAST Project and EzWeb Project, allowing us to validate the rationale behind our approach.

Relevance:

100.00%

Publisher:

Abstract:

The cationic polymerisation of various monomers, including cyclic ethers bearing energetic nitrate ester (-ONO2) groups, substituted styrenes and isobutylene, has been investigated. The main reaction studied has been the ring-opening polymerisation of 3-(nitratomethyl)-3-methyloxetane (NIMMO) using the alcohol/BF3·OEt2 binary initiator system. A series of di-, tri- and tetrafunctional telechelic polymers has been synthesised. In order to optimise the system, achieve controlled molecular weight polymers and understand the mechanism of polymerisation, the effects of certain parameters on the molecular weight distribution, as determined by Size Exclusion Chromatography, have been examined. This shows that the molecular weight achieved depends on a combination of factors including -OH concentration, the rate of monomer addition and, most importantly, temperature. A lower temperature and -OH concentration tend to produce higher molecular weights, whereas slower monomer addition rates either have no significant effect or produce a lower molecular weight polymer. These factors were used to increase the formation of a cyclic oligomer by a side reaction, and suggest that the polymerisation of NIMMO is complicated by end-biting and back-biting reactions, along with other transfer/termination processes. These observations appear to fit the model of an active chain-end mechanism. Another cyclic monomer, glycidyl nitrate (GLYN), has been polymerised by the activated monomer mechanism. Various other monomers have been used to end-cap the polymer chains to produce hydroxy ends, which are expected to form more stable urethane links than the glycidyl nitrate ends when cured with isocyanates. A novel monomer, butadiene oxide dinitrate (BODN), has been prepared and its homopolymerisation and copolymerisation with GLYN studied. In concurrent work the carbocationic polymerisations of isobutylene and substituted styrenes have been studied.
Materials with narrow molecular weight distributions have been prepared using the diphenyl phosphate/BCl3 initiator. These systems and monomers are expected to be used in the synthesis of thermoplastic elastomers.