37 results for Modula-2 (Computer program language)

in the Aston University Research Archive


Relevance: 100.00%

Abstract:

As systems for computer-aided design and production of mechanical parts have developed, there has arisen a need for techniques for the comprehensive description of the desired part, including its 3-D shape. The creation and manipulation of shapes is generally known as geometric modelling. It is desirable that links be established between geometric modellers and machining programs. Currently, unbounded APT and some bounded geometry systems are widely used in manufacturing industry for machining operations such as milling, drilling, boring and turning, applied mainly to engineering parts. APT systems, however, are presently only linked to wire-frame drafting systems. The combination of a geometric modeller and APT will provide a powerful manufacturing system for industry, from the initial design right through to part manufacture using NC machines. This thesis describes a recently developed interface (ROMAPT) between a bounded geometry modeller (ROMULUS) and an unbounded NC processor (APT). A new set of theoretical functions and practical algorithms for the computer-aided manufacturing of 3D solid geometric models has been investigated. This work has led to the development of a sophisticated computer program, ROMAPT, which provides a new link between CAD (in the form of the geometric modeller ROMULUS) and CAM (in the form of the APT NC system). ROMAPT has been used to machine some engineering prototypes successfully, both in soft foam material and in aluminium. It has been demonstrated that the theory and algorithms developed by the author for the computer-aided manufacturing of 3D solid models are both valid and applicable. ROMAPT allows the full potential of a solid geometric modeller (ROMULUS) to be further exploited for NC applications without requiring major investment in a new NC processor. ROMAPT supports output in the APT-AC, APT4 and CAM-I SSRI NC languages.

Relevance: 100.00%

Abstract:

It is known that distillation tray efficiency depends on the liquid flow pattern, particularly for large diameter trays. Scale-up failures due to liquid channelling have occurred, and it is known that fitting flow control devices to trays sometimes improves tray efficiency. Several theoretical models which explain these observations have been published. Further progress in understanding is at present blocked by a lack of experimental measurements of the pattern of liquid concentration over the tray. Flow pattern effects are expected to be significant only on commercial size trays of large diameter, and the lack of data is a result of the costs, risks and difficulty of making these measurements on full scale production columns. This work presents a new experiment which simulates distillation by water cooling and provides a means of testing commercial size trays in the laboratory. Hot water is fed on to the tray and cooled by air forced through the perforations. The analogy between heat and mass transfer shows that the water temperature at any point is analogous to liquid concentration and the enthalpy of the air is analogous to vapour concentration. The effect of the liquid flow pattern on mass transfer is revealed by the temperature field on the tray. The experiment was implemented and evaluated in a column of 1.2 m diameter. The water temperatures were measured by thermocouples interfaced to an electronic computerised data logging system. The "best surface" through the experimental temperature measurements was obtained by the mathematical technique of B-splines, and presented in terms of lines of constant temperature. The results revealed that in general liquid channelling is more important in the bubbly "mixed" regime than in the spray regime. However, it was observed that severe channelling also occurred for intense spray at incipient flood conditions. This is an unexpected result.
A computer program was written to calculate point efficiency as well as tray efficiency, and the results were compared with distillation efficiencies for similar loadings. The theoretical model of Porter and Lockett for predicting distillation was modified to predict water cooling, and the theoretical predictions were shown to be similar to the experimental temperature profiles. A comparison of the repeatability of the experiments with an error analysis revealed that accurate tray efficiency measurements require temperature measurements to better than ±0.1 °C, which is achievable with conventional techniques. This was not achieved in this work, which resulted in considerable scatter in the efficiency results. Nevertheless it is concluded that the new experiment is a valuable tool for investigating the effect of the liquid flow pattern on tray mass transfer.
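The "best surface" above was obtained with B-splines. As a rough illustration of the underlying machinery (the thesis does not give its fitting code, and the knot vector and coefficients below are hypothetical), a one-dimensional B-spline can be evaluated with de Boor's algorithm:

```python
def de_boor(k, x, t, c, p):
    """Evaluate a B-spline at x using de Boor's algorithm.
    k: knot interval index with t[k] <= x < t[k+1]; t: knot vector;
    c: control coefficients; p: spline degree."""
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Hypothetical clamped cubic knot vector. Coefficients placed at the
# Greville abscissae reproduce a straight line exactly, a standard
# sanity check on any B-spline evaluator.
t = [0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4]
p = 3
c = [(t[i + 1] + t[i + 2] + t[i + 3]) / 3 for i in range(len(t) - p - 1)]
x = 1.5
k = max(i for i in range(len(t) - 1) if t[i] <= x)  # knot interval of x
print(de_boor(k, x, t, c, p))  # 1.5: the spline reproduces y = x
```

With all coefficients set to 1 the same routine returns 1 everywhere in the valid domain (partition of unity), which is a quick way to validate an implementation before trusting a fitted surface.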

Relevance: 100.00%

Abstract:

In this thesis various mathematical methods of studying the transient and dynamic stability of practical power systems are presented. Certain long established methods are reviewed and refinements of some are proposed. New methods are presented which remove some of the difficulties encountered in applying the powerful stability theories based on the concepts of Liapunov. Chapter 1 is concerned with numerical solution of the transient stability problem. Following a review and comparison of synchronous machine models, the superiority of a particular model from the point of view of combined computing time and accuracy is demonstrated. A digital computer program incorporating all the synchronous machine models discussed, and an induction machine model, is described, and results of a practical multi-machine transient stability study are presented. Chapter 2 reviews certain concepts and theorems due to Liapunov. In Chapter 3 transient stability regions of single, two and multi-machine systems are investigated through the use of energy-type Liapunov functions. The treatment removes several mathematical difficulties encountered in earlier applications of the method. In Chapter 4 a simple criterion for the steady state stability of a multi-machine system is developed and compared with established criteria and a state space approach. In Chapters 5, 6 and 7 dynamic stability and small signal dynamic response are studied through a state space representation of the system. In Chapter 5 the state space equations are derived for single machine systems. An example is provided in which the dynamic stability limit curves are plotted for various synchronous machine representations. In Chapter 6 the state space approach is extended to multi-machine systems. To draw conclusions concerning dynamic stability or dynamic response the system eigenvalues must be properly interpreted, and a discussion concerning correct interpretation is included.
Chapter 7 presents a discussion of the optimisation of power system small signal performance through the use of Liapunov functions.
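For a single machine swinging against an infinite bus (classical model), the energy-type Liapunov function referred to above conventionally takes the textbook form below; this is the standard expression, not necessarily the exact function used in the thesis:

```latex
V(\delta,\omega) \;=\; \tfrac{1}{2}\,M\,\omega^{2}
  \;-\; P_m\,(\delta - \delta_s)
  \;-\; P_{\max}\,(\cos\delta - \cos\delta_s)
```

where \(\delta_s\) is the post-fault stable equilibrium angle, \(M\) the inertia constant, \(P_m\) the mechanical input power and \(P_{\max}\) the peak electrical power transfer. \(V\) is positive near \(\delta_s\) and non-increasing along system trajectories, so a sublevel set of \(V\) provides an estimate of the transient stability region.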

Relevance: 100.00%

Abstract:

A systematic survey of the possible methods of chemical extraction of iron by chloride formation has been presented and supported by a comparable study of feedstocks, products and markets. The generation and evaluation of alternative processes was carried out by the technique of morphological analysis, which was exploited by way of a computer program. The final choice was related to technical feasibility and economic viability, particularly capital cost requirements, and developments were made in an estimating procedure for hydrometallurgical processes which has general applications. The systematic exploration included the compilation of relevant data, and this indicated a need to investigate precipitative hydrolysis of aqueous ferric chloride. Arising from this study, two novel hydrometallurgical processes for manufacturing iron powder are proposed, and experimental work was undertaken in the following areas to demonstrate feasibility and obtain basic data for design purposes: (1) precipitative hydrolysis of aqueous ferric chloride; (2) gaseous chloridation of metallic iron, and oxidation of the resultant ferrous chloride; (3) reduction of gaseous ferric chloride with hydrogen; (4) aqueous acid leaching of low grade iron ore; (5) aqueous acid leaching of metallic iron. The experimentation was supported by theoretical analyses dealing with: (1) thermodynamics of hydrolysis; (2) kinetics of ore leaching; (3) kinetics of metallic iron leaching; (4) crystallisation of ferrous chloride; (5) oxidation of anhydrous ferrous chloride; (6) reduction of ferric chloride. Conceptual designs are suggested for both the processes mentioned. These draw attention to areas where further work is necessary, which are listed. Economic analyses have been performed which isolate significant cost areas, and indicate total production costs. Comparisons are made with previous and analogous proposals for the production of iron powder.
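Morphological analysis generates candidate processes by enumerating every combination of options along independent design axes, which is why a computer program is the natural tool. A minimal sketch, with entirely hypothetical option lists standing in for the thesis's actual morphological box:

```python
from itertools import product

# Hypothetical morphological "box": each axis lists alternative ways
# of carrying out one stage of a chloride-route iron flowsheet.
axes = {
    "chlorination": ["gaseous HCl", "aqueous HCl", "chlorine gas"],
    "intermediate": ["FeCl2", "FeCl3"],
    "reduction":    ["hydrogen", "hydrolysis then carbon reduction"],
}

# Every combination of one option per axis is a candidate process.
routes = [dict(zip(axes, combo)) for combo in product(*axes.values())]
print(len(routes))  # 3 * 2 * 2 = 12 candidate flowsheets to screen
```

Each generated route would then be screened against technical feasibility and capital cost criteria, as the survey describes.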

Relevance: 100.00%

Abstract:

A study of vapour-liquid equilibria is presented together with current developments. The theory of vapour-liquid equilibria is discussed. Both experimental and prediction methods for obtaining vapour-liquid equilibrium data are critically reviewed. The development of a new family of equilibrium stills to measure experimental VLE data from sub-atmospheric pressure to 35 bar is described. Existing experimental techniques are reviewed to highlight the need for these new stills and their major attributes. Details are provided of how the apparatus may be further improved and how computer control may be implemented. To provide a rigorous test of the apparatus the stills have been commissioned using an acetic acid-water mixture at one atmosphere pressure. A Barker-type consistency test computer program, which allows for association in both phases, has been applied to the data generated and clearly shows that the stills produce data of a very high quality. Two high quality data sets, for the mixture acetone-chloroform, have been generated at one atmosphere and 64.3 °C. These data are used to investigate the ability of a novel technique, based on molecular parameters, to predict VLE data for highly polar mixtures. Eight vapour-liquid equilibrium data sets have been produced for the cyclohexane-ethanol mixture at one atmosphere, 2, 4, 6, 8 and 11 bar, 90.9 °C and 132.8 °C. These data sets have been tested for thermodynamic consistency using a Barker-type fitting package and shown to be of high quality. The data have been used to investigate the dependence of UNIQUAC parameters on temperature. The data have in addition been used to compare directly the performance of the predictive methods: original UNIFAC, a modified version of UNIFAC, and the novel technique based on molecular parameters developed from generalised London's potential (GLP) theory.
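Barker's method fits total-pressure data to an excess Gibbs energy model rather than checking point-to-point consistency; the classical relation on which any such consistency test ultimately rests is the Gibbs-Duhem area (Redlich-Kister) test for isothermal data, quoted here as the standard textbook form rather than the thesis's own formulation:

```latex
\int_{0}^{1} \ln\!\frac{\gamma_1}{\gamma_2}\,\mathrm{d}x_1 \;=\; 0
```

where \(\gamma_1, \gamma_2\) are the liquid-phase activity coefficients and \(x_1\) the liquid mole fraction; the excess-volume term is neglected at low pressure, and for associating systems such as acetic acid-water the activity coefficients must first be corrected for vapour-phase association, as the program described above does.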

Relevance: 100.00%

Abstract:

The thesis is divided into four chapters: introduction, experimental, results and discussion of the free ligands, and results and discussion of the complexes. The First Chapter, the introductory chapter, is a general introduction to the study of solid state reactions. The Second Chapter is devoted to the materials and experimental methods that have been used for carrying out the experiments. The Third Chapter is concerned with the characterisation of the free ligands (picolinic acid, nicotinic acid, and isonicotinic acid) by elemental analysis, IR spectra, X-ray diffraction, and mass spectra. Additionally, the thermal behaviour of the free ligands in air has been studied by means of thermogravimetry (TG), derivative thermogravimetry (DTG), and differential scanning calorimetry (DSC) measurements. The thermal decomposition behaviour of the three free ligands was not identical. Finally, a computer program has been used for kinetic evaluation of non-isothermal differential scanning calorimetry data according to composite and single heating rate methods, in comparison with the methods of Ozawa and Kissinger. The most probable reaction mechanism for the free ligands was the Avrami-Erofeev equation (A), which describes a solid-state nucleation-growth mechanism. The activation parameters of the decomposition reaction for the free ligands were calculated, and the results of the different methods of data analysis were compared and discussed. The Fourth Chapter, the final chapter, deals with the preparation of complexes of cobalt, nickel, and copper with mono-pyridine carboxylic acids in aqueous solution. The prepared complexes have been characterised by elemental analysis, IR spectra, X-ray diffraction, magnetic moments, and electronic spectra. The stoichiometry of these compounds was ML2·x(H2O) (where M = metal ion, L = organic ligand and x = number of water molecules).
The environments of cobalt, nickel, and copper nicotinates and the environments of cobalt and nickel picolinates were octahedral, whereas the environment of copper picolinate [Cu(PA)2] was tetragonal. However, the environments of cobalt, nickel, and copper isonicotinates were polymeric octahedral structures. The morphological changes that occurred throughout the decomposition were followed by SEM observation. The thermal behaviour of the prepared complexes in air was studied by TG, DTG, and DSC measurements. During the degradation processes of the hydrated complexes, the crystallisation water molecules were lost in one or two steps. This was followed by loss of the organic ligands, and the metal oxides remained. Comparison between the DTG temperatures of the first and second steps of the dehydration suggested that the water of crystallisation was more strongly bonded with the anion in the Ni(II) complexes than in the complexes of Co(II) and Cu(II). The intermediate products of decomposition were not identified. The most probable reaction mechanism for the prepared complexes was also the Avrami-Erofeev equation (A), characteristic of a solid-state nucleation-growth mechanism. The temperature dependence of conductivity using direct current was determined for cobalt, nickel, and copper isonicotinates, and the activation energy (ΔΕ) was calculated. The temperature and frequency dependence of conductivity, the frequency dependence of the dielectric constant, and the dielectric loss for nickel isonicotinate were determined using alternating current. The value of the s parameter and the density of states [N(Ef)] were calculated. Keywords: thermal decomposition, kinetics, electrical conduction, pyridine mono-carboxylic acid, complex, transition metal complex.
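The Kissinger method mentioned above extracts an activation energy from the shift of the DSC peak temperature with heating rate, via ln(β/Tp²) = ln(AR/E) − E/(R·Tp). A minimal sketch with synthetic data (the rate constants below are hypothetical, not taken from the thesis):

```python
import math

R = 8.314          # gas constant, J mol^-1 K^-1
E_true = 120e3     # hypothetical activation energy, J/mol
A = 1e12           # hypothetical pre-exponential factor, s^-1

# Kissinger relation: ln(beta / Tp^2) = ln(A R / E) - E / (R Tp).
# Generate peak temperatures/heating rates consistent with it, then refit.
Tp = [480.0, 490.0, 500.0, 510.0]  # peak temperatures, K (synthetic)
beta = [T * T * (A * R / E_true) * math.exp(-E_true / (R * T)) for T in Tp]

# Least-squares slope of ln(beta/Tp^2) against 1/Tp gives -E/R.
x = [1.0 / T for T in Tp]
y = [math.log(b / (T * T)) for b, T in zip(beta, Tp)]
xm, ym = sum(x) / len(x), sum(y) / len(y)
slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
         / sum((xi - xm) ** 2 for xi in x))
E_fit = -slope * R
print(round(E_fit))  # recovers the input: 120000 J/mol
```

With real DSC data the points scatter about the line, and the quality of the linear fit is itself a check on whether a single activation energy describes the decomposition.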

Relevance: 100.00%

Abstract:

N-vinylcarbazole was polymerised using the free radical catalyst (azo-bisisobutyronitrile) and cationic catalysts (boron-trifluoride etherate and aluminium chloride). The polymers produced were characterised by molecular weight measurements and powder X-ray diffraction. The tacticity of the polymer samples was determined using proton and carbon-13 nuclear magnetic resonance spectroscopy. Measurements of their static dielectric permittivity and electro-optical birefringence (Kerr effect) in solution in 1,4-dioxane were carried out over a range of temperatures. The magnitudes of the dipole moments and Kerr constants were found to vary with changes in the tacticity of poly(N-vinylcarbazole). The results of these measurements support the view that the stereostructure of poly(N-vinylcarbazole) is sensitive to the mechanism of polymerisation. These results, together with proton and carbon-13 N.M.R. data, are discussed in terms of the possible conformations of the polymer chains and the relative orientation of the bulky carbazole side groups. The dielectric and molecular Kerr effect studies have also been carried out on complexes formed between 2,4,7-trinitro-9-fluorenone (TNF) and different stereoregular forms of poly(N-vinylcarbazole) in solution in 1,4-dioxane. The differences in the molar Kerr constants between pure (uncomplexed) and complexed poly(N-vinylcarbazole) samples were attributed to changes in optical anisotropy and dipole moments. A molecular modelling computer program, Desktop Molecular Modeller, was used to examine the 3/1 helical isotactic and 2/1 helical syndiotactic forms of poly(N-vinylcarbazole). These models were used to calculate the pitch distances of the helices and the results were interpreted in terms of the van der Waals radii of TNF.
This study indicated that the pitch distance in 3/1 isotactic helices was large enough to accommodate the bulky TNF molecules to form sandwich type charge transfer complexes whereas the pitch distance in syndiotactic poly(N-vinylcarbazole) was smaller and would not allow a similar type of complex formation.

Relevance: 100.00%

Abstract:

Changes in modern structural design have created a demand for products which are light but possess high strength. The objective is a reduction in fuel consumption and weight of materials to satisfy both economic and environmental criteria. Cold roll forming has the potential to fulfil this requirement. The bending process is controlled by the shape of the profile machined on the periphery of the rolls. A CNC lathe can machine complicated profiles to a high standard of precision, but the expertise of a numerical control programmer is required. A computer program was developed during this project, using the expert system concept, to calculate tool paths and consequently to expedite the procurement of the machine control tapes whilst removing the need for a skilled programmer. Codifying the expertise of a human and the encapsulation of knowledge within a computer memory, destroys the dependency on highly trained people whose services can be costly, inconsistent and unreliable. A successful cold roll forming operation, where the product is geometrically correct and free from visual defects, is not easy to attain. The geometry of the sheet after travelling through the rolling mill depends on the residual strains generated by the elastic-plastic deformation. Accurate evaluation of the residual strains can provide the basis for predicting the geometry of the section. A study of geometric and material non-linearity, yield criteria, material hardening and stress-strain relationships was undertaken in this research project. The finite element method was chosen to provide a mathematical model of the bending process and, to ensure an efficient manipulation of the large stiffness matrices, the frontal solution was applied. A series of experimental investigations provided data to compare with corresponding values obtained from the theoretical modelling. 
A computer simulation, capable of predicting that a design will be satisfactory prior to the manufacture of the rolls, would allow effort to be concentrated on devising an optimum design in which costs are minimised.

Relevance: 100.00%

Abstract:

Reversed-phase high-performance liquid chromatographic (HPLC) methods were developed for the assay of indomethacin, its decomposition products, ibuprofen and its (tetrahydro-2-furanyl)methyl-, (tetrahydro-2-(2H)pyranyl)methyl- and cyclohexylmethyl esters. The development and application of these HPLC systems were studied. A number of physico-chemical parameters that affect percutaneous absorption were investigated. The pKa values of indomethacin and ibuprofen were determined using the solubility method. Potentiometric titration and the Taft equation were also used for ibuprofen. The incorporation of ethanol or propylene glycol in the solvent resulted in an improvement in the aqueous solubility of these compounds. The partition coefficients were evaluated in order to establish the affinity of these drugs towards the stratum corneum. The stability of indomethacin and of the ibuprofen esters was investigated, and the effects of temperature and pH on the decomposition rates were studied. The effect of cetyltrimethylammonium bromide on the alkaline degradation of indomethacin was also followed. In the presence of alcohol, indomethacin alcoholysis was observed; the kinetics of decomposition were subjected to non-linear regression analysis and the rate constants for the various pathways were quantified. The non-isothermal, surfactant non-isoconcentration and non-isopH degradation of indomethacin were investigated. The analysis of the data was undertaken using NONISO, a BASIC computer program. The degradation profiles obtained from both non-iso and iso-kinetic studies show that there is close concordance in the results. The metabolic biotransformation of ibuprofen esters was followed using esterases from hog liver and rat skin homogenates. The results showed that the esters were very labile under these conditions. The presence of propylene glycol affected the rates of enzymic hydrolysis of the ester.
The hydrolysis is modelled using an equation involving the dielectric constant of the medium. The percutaneous absorption of indomethacin and of ibuprofen and its esters was followed from solutions using an in vitro excised human skin model. The absorption profiles followed first order kinetics. The diffusion process was related to their solubility and to the human skin/solvent partition coefficient. The percutaneous absorption of two ibuprofen esters from suspensions in 20% propylene glycol-water were also followed through rat skin with only ibuprofen being detected in the receiver phase. The sensitivity of ibuprofen esters to enzymic hydrolysis compared to the chemical hydrolysis may prove valuable in the formulation of topical delivery systems.
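The first-order absorption kinetics reported above imply a cumulative-permeation profile of the form Q(t) = Q∞(1 − e^(−kt)). A small sketch with hypothetical rate constant and total absorbable amount (the thesis's fitted values are not given in the abstract):

```python
import math

def absorbed(t, q_inf, k):
    """Cumulative amount permeated under first-order kinetics:
    Q(t) = Q_inf * (1 - exp(-k t))."""
    return q_inf * (1.0 - math.exp(-k * t))

k = 0.12        # h^-1, hypothetical first-order rate constant
q_inf = 100.0   # ug/cm^2, hypothetical total absorbable amount

# At the half-life t = ln(2)/k exactly half the dose has permeated,
# a quick internal check on any first-order fit.
t_half = math.log(2) / k
print(round(absorbed(t_half, q_inf, k), 3))  # 50.0
```

Fitting k and Q∞ to measured receiver-phase concentrations, and relating k to solubility and the skin/solvent partition coefficient, is the kind of analysis the abstract describes.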

Relevance: 100.00%

Abstract:

An investigation is carried out into the design of a small local computer network for eventual implementation on the University of Aston campus. Microprocessors are investigated as a possible choice for use as a node controller, for reasons of cost and reliability. Since the network will be local, high speed lines of megabit order are proposed. After an introduction to several well known networks, various aspects of networks are discussed, including packet switching, the functions of a node, and host-node protocol. Chapter three develops the network philosophy with an introduction to microprocessors. Various organisations of microprocessors into multicomputer and multiprocessor systems are discussed, together with methods of achieving reliable computing. Chapter four presents the simulation model and its implementation as a computer program. The major modelling effort is to study the behaviour of messages queueing for access to the network and the message delay experienced on the network. Use is made of spectral analysis to determine the sampling frequency, while Exponentially Weighted Moving Averages are used for data smoothing.
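An Exponentially Weighted Moving Average smooths a series by blending each new sample with the running estimate. A minimal sketch of the recurrence, applied to hypothetical message-delay samples (the smoothing constant and data are illustrative, not from the thesis):

```python
def ewma(samples, alpha):
    """Exponentially Weighted Moving Average:
    s_t = alpha * x_t + (1 - alpha) * s_{t-1}, seeded with x_0."""
    smoothed = [samples[0]]
    for x in samples[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Hypothetical message-delay samples (ms) with a step change in load:
delays = [2.0, 2.0, 10.0, 10.0, 10.0]
print(ewma(delays, 0.5))  # [2.0, 2.0, 6.0, 8.0, 9.0]
```

The choice of alpha trades responsiveness against noise rejection; the spectral analysis mentioned above is one way to pick a sampling rate that the smoother can track.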

Relevance: 100.00%

Abstract:

Myopia is a refractive condition which develops because either the optical power of the eye is abnormally great or the eye is abnormally long, the optical consequence being that the focal length of the eye is too short for the physical length of the eye. The increase in axial length has been shown to match closely the dioptric error of the eye, in that a 1 mm increase in axial length usually generates 2 to 3 D of myopia. The most common form of myopia is early-onset myopia (EOM), which occurs between 6 and 14 years of age. The second most common form is late-onset myopia (LOM), which emerges in the late teens or early twenties, at a time when the eye should have ceased growing. The prevalence of LOM is increasing and research has indicated a link with excessive and sustained nearwork. The aim of this thesis was to examine the ocular biometric correlates associated with LOM and EOM development and progression. Biometric data were recorded on 50 subjects, aged 16 to 26 years. The group was divided into 26 emmetropic subjects and 24 myopic subjects. Keratometry, corneal topography, ultrasonography, lens shape, central and peripheral refractive error, ocular blood flow and assessment of accommodation were measured on three occasions during an 18-month to 2-year longitudinal study. Retinal contours were derived using a specially developed computer program. The thesis shows that myopia progression is related to an increase in vitreous chamber depth, a finding which supports previous work. The myopes exhibited hyperopic relative peripheral refractive error (PRE) and the emmetropes exhibited myopic relative PRE. Myopes demonstrated a prolate retinal shape and the retina became more prolate with myopia progression. The results show that a longitudinal, rather than equatorial, increase in the posterior segment is the principal structural correlate of myopia.
Retinal shape, relative PRE and the ratio of axial length to corneal curvature have been indicated, in this thesis, as predictive factors for myopia onset and development. Data from this thesis demonstrates that myopia progression in the LOM group is the result of an increase in anterior segment power, owing to an increase in lens thickness, in conjunction with posterior segment elongation. Myopia progression in the EOM group is the product of a long posterior segment, which over-compensates for a weak anterior segment power. The weak anterior segment power in the EOM group is related to a combination of crystalline lens thinning and surface flattening. The results presented in this thesis confirm that posterior segment elongation is the main structural correlate in both EOM and LOM progression. The techniques and computer programs employed in the thesis are reproducible and robust providing a valuable framework for further myopia research and assessment of predictive factors.

Relevance: 100.00%

Abstract:

A critical review of previous research revealed that visual attention tests, such as the Useful Field of View (UFOV) test, provided the best means of detecting age-related changes to the visual system that could potentially increase crash risk. However, the question was raised as to whether the UFOV, which was regarded as a static visual attention test, could be improved by inclusion of kinetic targets that more closely represent the driving task. A computer program was written to provide more information about the derivation of UFOV test scores. Although this investigation succeeded in providing new information, some of the commercially protected UFOV test procedures still remain unknown. Two kinetic visual attention tests (DRTS1 and 2), developed at Aston University to investigate inclusion of kinetic targets in visual attention tests, were introduced. The UFOV was found to be more repeatable than either of the kinetic visual attention tests and learning effects or age did not influence these findings. Determinants of static and kinetic visual attention were explored. Increasing target eccentricity led to reduced performance on the UFOV and DRTS1 tests. The DRTS2 was not affected by eccentricity but this may have been due to the style of presentation of its targets. This might also have explained why only the DRTS2 showed laterality effects (i.e. better performance to targets presented on the left hand side of the road). Radial location, explored using the UFOV test, showed that subjects responded best to targets positioned to the horizontal meridian. Distraction had opposite effects on static and kinetic visual attention. While UFOV test performance declined with distraction, DRTS1 performance increased. Previous research had shown that this striking difference was to be expected. 
Whereas the detection of static targets is attenuated in the presence of distracting stimuli, distracting stimuli that move in a structured flow field enhances the detection of moving targets. Subjects reacted more slowly to kinetic compared to static targets, longitudinal motion compared to angular motion and to increased self-motion. However, the effects of longitudinal motion, angular motion, self-motion and even target eccentricity were caused by target edge speed variations arising because of optic flow field effects. The UFOV test was more able to detect age-related changes to the visual system than were either of the kinetic visual attention tests. The driving samples investigated were too limited to draw firm conclusions. Nevertheless, the results presented showed that neither the DRTS2 nor the UFOV tests were powerful tools for the identification of drivers prone to crashes or poor driving performance.

Relevance: 100.00%

Abstract:

Aim: To use previously validated image analysis techniques to determine the incremental nature of printed subjective anterior eye grading scales. Methods: A purpose designed computer program was written to detect edges using a 3 × 3 kernel and to extract colour planes in the selected area of an image. Annunziato and Efron pictorial, and CCLRU and Vistakon-Synoptik photographic grades of bulbar hyperaemia, palpebral hyperaemia roughness, and corneal staining were analysed. Results: The increments of the grading scales were best described by a quadratic rather than a linear function. Edge detection and colour extraction image analysis for bulbar hyperaemia (r2 = 0.35-0.99), palpebral hyperaemia (r2 = 0.71-0.99), palpebral roughness (r2 = 0.30-0.94), and corneal staining (r2 = 0.57-0.99) correlated well with scale grades, although the increments varied in magnitude and direction between different scales. Repeated image analysis measures had a 95% confidence interval of between 0.02 (colour extraction) and 0.10 (edge detection) scale units (on a 0-4 scale). Conclusion: The printed grading scales were more sensitive for grading features of low severity, but grades were not comparable between grading scales. Palpebral hyperaemia and staining grading is complicated by the variable presentations possible. Image analysis techniques are 6-35 times more repeatable than subjective grading, with a sensitivity of 1.2-2.8% of the scale.

Relevance: 100.00%

Abstract:

Aim: To examine the use of image analysis to quantify changes in ocular physiology. Method: A purpose designed computer program was written to objectively quantify bulbar hyperaemia, tarsal redness, corneal staining and tarsal staining. Thresholding, colour extraction and edge detection paradigms were investigated. The repeatability (stability) of each technique to changes in image luminance was assessed. A clinical pictorial grading scale was analysed to examine the repeatability and validity of the chosen image analysis technique. Results: Edge detection using a 3 × 3 kernel was found to be the most stable to changes in image luminance (2.6% over a +60 to -90% luminance range) and correlated well with the CCLRU scale images of bulbar hyperaemia (r = 0.96), corneal staining (r = 0.85) and the staining of palpebral roughness (r = 0.96). Extraction of the red colour plane demonstrated the best correlation-sensitivity combination for palpebral hyperaemia (r = 0.96). Repeatability variability was <0.5%. Conclusions: Digital imaging, in conjunction with computerised image analysis, allows objective, clinically valid and repeatable quantification of ocular features. It offers the possibility of improved diagnosis and monitoring of changes in ocular physiology in clinical practice. © 2003 British Contact Lens Association. Published by Elsevier Science Ltd. All rights reserved.
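Edge detection with a 3 × 3 kernel, as used above, amounts to convolving each pixel neighbourhood with a small weight matrix. The papers do not state which kernel they used, so the Laplacian-style kernel below is illustrative only:

```python
# Laplacian-style 3x3 kernel (hypothetical; the papers' exact kernel
# is not given in the abstracts). Responds to local intensity change.
KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def edge_response(img):
    """Convolve the interior of a 2-D grey-level image with KERNEL."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(KERNEL[a][b] * img[i - 1 + a][j - 1 + b]
                            for a in range(3) for b in range(3))
    return out

flat = [[5] * 4 for _ in range(4)]       # uniform patch: no edges
step = [[0, 0, 9, 9] for _ in range(4)]  # vertical step edge
print(edge_response(flat)[1][1])  # 0: zero response in a uniform region
print(edge_response(step)[1][1])  # nonzero response at the boundary
```

Summing the magnitude of such responses over a region of interest gives a single hyperaemia or staining score, and its stability under luminance changes is what the study above quantified.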

Relevance: 100.00%

Abstract:

Measurements of neutron and gamma dose rates in mixed radiation fields, and gamma dose rates from calibrated gamma sources, were performed using a liquid scintillation counter NE213 with a pulse shape discrimination technique based on the charge comparison method. A computer program was used to analyse the experimental data. The radiation field was obtained from a 241Am-9Be source. There was general agreement between measured and calculated neutron and gamma dose rates in the mixed radiation field, but some disagreement in the measurements of gamma dose rates for gamma sources, due to the dark current of the photomultiplier and the effect of the perturbation of the radiation field by the detector. An optical fibre bundle was used to couple an NE213 scintillator to a photomultiplier, in an attempt to minimise these effects. This produced an improvement in the results for gamma sources. However, the optically coupled detector system could not be used for neutron and gamma dose rate measurements in mixed radiation fields. The pulse shape discrimination system became ineffective as a consequence of the slower time response of the detector system.
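The charge comparison method above discriminates particle types by comparing the charge in the slow (tail) part of the scintillation pulse with the total charge: proton recoils from neutrons produce proportionally more slow light in NE213 than electrons from gammas. A sketch with synthetic two-exponential pulses (all decay constants, gate times and fast-light fractions below are hypothetical):

```python
import math

def pulse(t, fast_frac, tau_fast=5.0, tau_slow=50.0):
    """Idealised two-component scintillation pulse (arbitrary units, ns)."""
    return (fast_frac * math.exp(-t / tau_fast)
            + (1 - fast_frac) * math.exp(-t / tau_slow))

def tail_to_total(fast_frac, t_gate=20.0, t_end=400.0, dt=0.5):
    """Charge comparison: integral after the gate / total integral."""
    ts = [i * dt for i in range(int(t_end / dt))]
    total = sum(pulse(t, fast_frac) for t in ts)
    tail = sum(pulse(t, fast_frac) for t in ts if t >= t_gate)
    return tail / total

# Hypothetical fast-light fractions for the two event classes:
r_gamma = tail_to_total(fast_frac=0.95)    # gamma: mostly fast light
r_neutron = tail_to_total(fast_frac=0.75)  # neutron: more slow light
print(r_neutron > r_gamma)  # True: tail ratio separates the classes
```

A threshold on this ratio sorts events into neutron and gamma channels; the slower time response of the fibre-coupled detector described above degrades exactly this separation.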