904 results for "Optimisation of methods"
Abstract:
This thesis presents a theoretical investigation of applications of the Raman effect in optical fibre communication, as well as the design and optimisation of various Raman-based devices and transmission schemes. The techniques used are mainly based on numerical modelling. The results presented in this thesis are divided into three main parts. First, novel designs of Raman fibre lasers (RFLs) based on phosphosilicate-core fibre are analysed and optimised for efficiency using a discrete power balance model. The designs include a two-stage RFL based on phosphosilicate-core fibre for telecommunication applications, a composite RFL for the 1.6 μm spectral window, and a multiple-output-wavelength RFL intended as a compact pump source for flat-gain Raman amplifiers. The use of phosphosilicate-core fibre is shown to reduce design complexity effectively, leading to better efficiency and stability and a potentially lower cost. Second, a generalised Raman amplified gain model approach based on power balance analysis and direct numerical simulation is developed. The approach can be used to simulate optical transmission systems with distributed Raman amplification efficiently. Last, the potential employment of a hybrid amplification scheme, a combination of a distributed Raman amplifier and an Erbium-doped amplifier, is investigated using the generalised Raman amplified gain model. The analysis focuses on the use of the scheme to upgrade a standard-fibre network to a 40 Gb/s system.
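As an illustration of the kind of power-balance equations such models rest on (a minimal sketch with assumed parameter values, not the thesis's discrete model), a co-propagating pump and signal in a Raman-amplified span can be integrated numerically:

```python
# Minimal sketch (not the thesis model): co-propagating pump and signal
# power-balance ODEs for a distributed Raman amplifier, Euler-integrated.
# All parameter values below are illustrative assumptions.
GR = 0.4        # Raman gain efficiency, 1/(W*km)
ALPHA_P = 0.06  # pump loss near 1455 nm, 1/km
ALPHA_S = 0.046 # signal loss near 1550 nm, 1/km
NU_RATIO = 1.07 # pump/signal frequency ratio (photon-energy bookkeeping)

def propagate(pp0, ps0, length_km, dz=0.01):
    """Euler-integrate pump (pp) and signal (ps) powers along the fibre."""
    pp, ps = pp0, ps0
    for _ in range(int(length_km / dz)):
        dpp = (-ALPHA_P * pp - NU_RATIO * GR * pp * ps) * dz
        dps = (-ALPHA_S * ps + GR * pp * ps) * dz
        pp, ps = pp + dpp, ps + dps
    return pp, ps

pp_out, ps_out = propagate(0.5, 0.001, 40.0)  # 500 mW pump, 1 mW signal, 40 km
print(f"pump out: {pp_out*1e3:.1f} mW, signal out: {ps_out*1e3:.2f} mW")
```

With these assumed numbers the signal emerges with net gain despite 40 km of fibre loss, which is the effect a distributed Raman design exploits.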
Abstract:
The integration of a microprocessor and a medium-power stepper motor in one control system brings together two quite different disciplines. Various methods of interfacing are examined and the problems involved in both hardware and software manipulation are investigated. Microprocessor open-loop control of the stepper motor is considered. The possible advantages of microprocessor closed-loop control are examined and the development of a system is detailed. The system uses position feedback to initiate each motor step. Results of the dynamic response of the system are presented and its performance discussed. Applications of the static torque characteristic of the stepper motor are considered, followed by a review of methods of predicting the characteristic. This shows that accurate results are possible only when the effects of magnetic saturation are avoided or when the machine is available for magnetic circuit tests to be carried out. A new method of predicting the static torque characteristic is explained in detail. The method described uses the machine geometry and the magnetic characteristics of the iron types used in the machine. From this information the permeance of each iron component of the machine is calculated and, by using the equivalent magnetic circuit of the machine, the total torque produced is predicted. It is shown how this new method is implemented on a digital computer and how the model may be used to investigate further aspects of the stepper motor in addition to the static torque.
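The permeance-to-torque step can be sketched in its textbook form (all numbers and the cosine permeance profile are illustrative assumptions, not the thesis model): for a singly excited machine the torque follows from the angular derivative of the air-gap permeance, T = ½F²·dP/dθ.

```python
import math

# Illustrative equivalent-magnetic-circuit sketch: torque from the angular
# derivative of air-gap permeance, T = 0.5 * F^2 * dP/dtheta.
# P0, P1, NR and F are assumed values, not measured machine data.
P0, P1 = 2e-6, 0.5e-6   # mean and varying permeance components, H
NR = 50                 # number of rotor teeth

def permeance(theta):
    return P0 + P1 * math.cos(NR * theta)

def torque(theta, mmf, h=1e-6):
    """Numerical derivative of the co-energy W = 0.5 * F^2 * P(theta)."""
    dp = (permeance(theta + h) - permeance(theta - h)) / (2 * h)
    return 0.5 * mmf**2 * dp

F = 1000.0  # magnetomotive force, ampere-turns
print(torque(0.0, F))                 # aligned position: zero torque
print(torque(math.pi / (2 * NR), F))  # quarter tooth pitch: peak torque
```

Summing such contributions over every iron component of the magnetic circuit is what yields the predicted static torque characteristic.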
Abstract:
This thesis examines the mechanisms of wear occurring to the video head and their effect on signal reproduction. In particular, it examines the wear occurring to manganese-zinc ferrite heads in sliding contact with iron oxide media. A literature survey is presented, which covers magnetic recording technologies, focusing on video recording. Existing work on the wear of magnetic heads is also examined, and gaps in the theoretical account of wear mechanisms presented in the literature are identified. Pilot research was carried out on the signal degradation and wear associated with a number of commercial video tapes containing a range of head cleaning agents (HCAs). From this research, the main body of the research was identified. A number of methods of wear measurement were examined for use in this project. Knoop diamond indentation was chosen because experimentation showed it to be capable of measuring wear occurring in situ. This technique was then used to examine the wear associated with different levels of Al2O3 and Cr2O3 head cleaning agents. The results of the research indicated that, whilst wear of the video head increases linearly with increasing HCA content, signal degradation does not vary significantly. The most significant differences in wear and signal reproduction were observed between the two HCAs. The signal degradation of heads worn with tape samples containing Al2O3 HCA was found to be lower than that of heads worn with tapes containing Cr2O3 HCA. The results also indicate that the wear to the head is an abrasive process characterised by ploughing of the ferrite surface and chipping of the edges of the head gap. Both phenomena appear to be caused by poor iron oxide and head cleaning particles, which create isolated asperities on the tape surface.
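The indentation technique lends itself to a simple calculation (a hedged sketch of the general principle, using the nominal Knoop indenter geometry rather than the thesis's calibration): the depth of a Knoop indent is fixed by its long diagonal, so remeasuring the diagonal as the surface wears gives the depth of material removed in situ.

```python
# Sketch of Knoop-indentation wear measurement. The diagonal-to-depth
# ratio of ~30.5 is the nominal indenter geometry; the diagonal values
# below are invented for illustration.
KNOOP_RATIO = 30.5  # long diagonal / indentation depth for a Knoop indenter

def depth_um(long_diagonal_um):
    return long_diagonal_um / KNOOP_RATIO

def wear_um(diag_before_um, diag_after_um):
    """Material removed = reduction in indent depth as the indent shrinks."""
    return depth_um(diag_before_um) - depth_um(diag_after_um)

print(f"{wear_um(120.0, 90.0):.3f} um of head material removed")
```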
Abstract:
In this paper we consider the optimisation of Shannon mutual information (MI) in the context of two model neural systems. The first is a stochastic pooling network (population) of McCulloch-Pitts (MP) type neurons (logical threshold units) subject to stochastic forcing; the second is (in a rate coding paradigm) a population of neurons that each displays Poisson statistics (the so-called 'Poisson neuron'). The mutual information is optimised as a function of a parameter that characterises the 'noise level': in the MP array this parameter is the standard deviation of the noise; in the population of Poisson neurons it is the window length used to determine the spike count. In both systems we find that the emergent neural architecture, and hence code, that maximises the MI is strongly influenced by the noise level. Low noise levels lead to a heterogeneous distribution of neural parameters (diversity), whereas medium to high noise levels result in the clustering of neural parameters into distinct groups that can be interpreted as subpopulations. In both cases the number of subpopulations increases with a decrease in noise level. Our results suggest that subpopulations are a generic feature of an information-optimal neural population.
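The MI computation for the MP array can be sketched for the special case of identical units (an illustrative simplification, not the paper's optimisation over heterogeneous thresholds): each unit fires iff x + noise > θ, so the spike count Y given X = x is Binomial(N, p(x)), and I(X; Y) = H(Y) − H(Y|X) over a discretised Gaussian input.

```python
import math

# Sketch: mutual information between Gaussian input X and the pooled
# output Y of N identical McCulloch-Pitts units with Gaussian noise.
# Grid size and parameter values are illustrative assumptions.
def phi(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mutual_information(n_units, theta, noise_sd, n_grid=201):
    xs = [6.0 * (i / (n_grid - 1) - 0.5) for i in range(n_grid)]  # x in [-3, 3]
    w = [math.exp(-x * x / 2.0) for x in xs]
    z = sum(w)
    px = [v / z for v in w]                      # discretised N(0,1) prior
    p_y = [0.0] * (n_units + 1)
    h_y_given_x = 0.0
    for x, pxi in zip(xs, px):
        p = phi((x - theta) / noise_sd)          # per-unit firing probability
        for y in range(n_units + 1):
            pyx = math.comb(n_units, y) * p**y * (1 - p)**(n_units - y)
            p_y[y] += pxi * pyx
            if pyx > 0:
                h_y_given_x -= pxi * pyx * math.log2(pyx)
    h_y = -sum(q * math.log2(q) for q in p_y if q > 0)
    return h_y - h_y_given_x

print(mutual_information(n_units=15, theta=0.0, noise_sd=0.5))
```

Sweeping `noise_sd` (or, with per-unit thresholds, optimising the θ values at each noise level) traces the MI-versus-noise behaviour over which the subpopulation structure emerges.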
Abstract:
A variety of methods have been reviewed for obtaining parallel or perpendicular alignment in liquid-crystal cells. Some of these methods were selected and developed for use in polarised spectroscopy, dielectric and electro-optic studies. Novel dielectric and electro-optic cells were also constructed for use over a range of temperatures. The dielectric response of thin layers of E7 and E8 (eutectic liquid-crystal mixtures) has been measured in the frequency range 12 Hz-100 kHz and over the temperature range 183-337 K. Dielectric spectra were also obtained for supercooled E7 and E8 in the Hz and kHz ranges. When the measuring electric field was parallel to the nematic director, one loss peak (a low-frequency relaxation process) was observed for E7 and for E8, which exhibits Debye-type behaviour in the supercooled systems. When the measuring electric field was perpendicular to the nematic director, two resolved dielectric processes were observed. The phase transitions, effective molecular polarisabilities, anisotropies of polarisability and order parameters of three liquid-crystal homologues (5CB, 6CB, and 7CB), of 6OCB and of three eutectic nematic mixtures (E7, E8, and E607) were calculated using optical and density data measured at several temperatures. The order parameters calculated using the different methods of Vuks, Neugebauer, Maier-Saupe, and Palffy-Muhoray are nearly the same for the liquid crystals considered in the present study. The interrelationship between density, refractive index and the molecular structure of these liquid crystals was also established. Accurate dielectric and dipole results for a range of liquid-crystal-forming molecules at several temperatures have been reported. The roles of the cyano end group, biphenyl core, and flexible tail in molecular association were investigated using the dielectric method for some molecules which have a structural relationship to the nematogens.
Analysis of the dielectric data for solutions of the liquid crystals indicated a high degree of molecular association, comparable to that observed in the nematic or isotropic phases. The electro-optic Kerr effect was investigated for some alkyl cyanobiphenyls, their nematic mixtures and the eutectic liquid-crystal mixtures E7 and E8 in the isotropic phase and in solution. The Kerr constants of these liquid crystals were found to be very high at the nematic-isotropic transition temperatures, as the molecules are expected to be highly ordered close to the phase transition temperatures. Dynamic Kerr effect behaviour and transient molecular reorientation were also observed in thin layers of some alkyl cyanobiphenyls. The dichroic ratio R and order parameters of solutions containing some azo and anthraquinone dyes in the nematic solvents E7 and E8 were investigated by measuring the intensity of the absorption bands in the visible region for parallel-aligned samples. The factors affecting the dichroic ratio of the dyes dissolved in the nematic solvents were determined and discussed.
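The step from measured dichroic ratio to order parameter can be written out in its standard form (a sketch assuming the ideal case of a dye transition moment along the molecular long axis; the absorbance values are invented for illustration):

```python
# Standard guest-host relation: dichroic ratio R = A_parallel / A_perp,
# order parameter S = (R - 1) / (R + 2), valid for a transition moment
# along the long molecular axis. Input absorbances are illustrative.
def order_parameter(a_parallel, a_perpendicular):
    r = a_parallel / a_perpendicular
    return (r - 1.0) / (r + 2.0)

print(order_parameter(1.9, 0.25))  # R = 7.6 gives S = 0.6875
```

An isotropic sample (R = 1) correctly gives S = 0, while R → ∞ gives the perfectly ordered limit S → 1.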
Abstract:
The primary objective of this research was to examine the chemical modification of polymer blends by reactive processing, using interlinking agents (multi-functional activated vinyl compounds; trimethylolpropane triacrylate (TRIS) and divinylbenzene (DVB)) to target in-situ interpolymer formation between the immiscible polymers in PS/EPDM blends via peroxide-initiated free radical reactions during melt mixing. From a comprehensive survey of previous studies of compatibility enhancement in polystyrene blends, it was recognised that reactive processing offers opportunities for technological success that have not yet been fully realised; learning from this study is expected to assist in the development and application of this potential. In an experimental-scale operation for the simultaneous melt blending and reactive processing of both polymers, involving manual injection of precise reactive agent/free radical initiator mixtures directly into the molten polymer within an internal mixer, torque changes were distinct, quantifiable and rationalised in terms of ongoing physical and chemical effects. The EPDM content of PS/EPDM blends was the prime determinant of torque increases on addition of TRIS, itself liable to self-polymerisation at high additions, with little indication of PS reaction in the initial reactively processed blends with TRIS, though blend compatibility, judged by visual assessment of morphology by SEM, was nevertheless improved. Suitable operating windows were defined for the optimisation of reactive blending, for use once routes to encourage PS reaction could be identified. The effectiveness of PS modification by reactive processing with interlinking agents was increased by selecting process conditions to target specific reaction routes, assessed by spectroscopy (FT-IR and NMR) and thermal analysis (DSC) coupled with dichloromethane extraction and fractionation of PS.
Initiator concentration was crucial in balancing the desired PS modification against interlinking agent self-polymerisation, most particularly with TRIS. Pre-addition of initiator to PS was beneficial in enhancing TRIS binding to PS and minimising modifier polymerisation, believed to arise from the direct formation of polystyryl radicals for addition to the active unsaturation in TRIS. DVB was found to be a "compatible" modifier for PS, but its efficacy was not quantified. Application of routes for PS reaction in PS/EPDM blends was successful for the in-situ formation of interpolymer (shown by sequential solvent extraction combined with FT-IR and DSC analysis), the predominant outcome depending on the degree of reaction of each component, with optimum "between-phase" interpolymer formed under conditions selected to equalise the differing component reactivities and to avoid competitive processes. This was achieved by the combined addition of TRIS+DVB at optimum initiator concentrations with initiator pre-addition to PS. Improvements in blend compatibility (by tensile testing, SEM and thermal analysis) were shown in all cases with significant interpolymer formation, though physical benefits were not; morphology and other reactive effects were also important factors. Interpolymer from specific "between-phase" reaction of the blend components and interlinking agent was vital for the realisation of positive performance on compatibilisation by the chemical modification of polymer blends by reactive processing.
Abstract:
Samples of various industrial and pilot-plant spray-dried materials were obtained from manufacturers, together with details of drying conditions and feed concentrations. The samples were subjected to qualitative and semi-quantitative examination to identify structural and morphological features. The results were related to measured bulk physical properties and to drying conditions. Single particles were produced in a convective drying process analogous to spray drying, in which different solids or mixtures of solids were dried from solutions, slurries or pastes as single suspended droplets. The localised chemical and physical structures were analysed and in some cases the retention of volatiles monitored. The results were related to the experimental conditions, viz. air temperature, initial solids concentration and the degree of feed aeration. Three distinct categories of particle morphology were identified: crystalline, skin-forming and agglomerate. Each category is evidence of a characteristic drying behaviour which is dependent on initial solids concentration, the degree of feed aeration, and drying temperature. Powder flowability, particle and bulk density, particle size, particle friability, and the retention of volatiles bear a direct relationship to morphological structure. The morphologies of multicomponent mixtures were complex, but the respective migration rates of the solutes were dependent on drying temperature. Gas-film heat and mass transfer coefficients of single pure liquid droplets were also measured over a temperature range of 50°C to 200°C under forced convection. Enhanced transfer rates were obtained, attributed to droplet instability or oscillation within the airflow, as demonstrated in associated work with single free-flight droplets. The results are of relevance to drier optimisation and to the optimisation of product characteristics, e.g. particle strength and essential volatiles retention, in convective drying.
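Gas-film coefficients for single droplets under forced convection are commonly benchmarked against the Ranz-Marshall correlation; the sketch below (illustrative values, not the thesis measurements) shows how a heat-transfer coefficient follows from the Nusselt number:

```python
# Ranz-Marshall correlation for a single droplet in forced convection:
#   Nu = 2 + 0.6 * Re**0.5 * Pr**(1/3),   h = Nu * k_air / d
# Droplet size, Re, Pr and k_air values are illustrative assumptions.
def heat_transfer_coeff(d_m, re, pr, k_air=0.030):
    nu = 2.0 + 0.6 * re**0.5 * pr**(1.0 / 3.0)
    return nu * k_air / d_m  # W/(m^2 K)

# 1.5 mm droplet, Re = 100, Pr = 0.7 (typical drying-air conditions)
print(f"h = {heat_transfer_coeff(1.5e-3, 100.0, 0.7):.0f} W/m2K")
```

Measured coefficients above this correlation's prediction are what the abstract attributes to droplet instability and oscillation within the airflow.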
Abstract:
A large number of compounds containing quinonoid or hindered phenol functions were examined for their roles as antifatigue agents. Among the evaluated quinones and phenols expected to have macroalkyl radical scavenging ability, BQ, αTOC, γTOC and GM showed relatively good performance for fatigue resistance (although their performance was slightly less effective than that of the commercial aromatic amine antioxidants IPPD and 6PPD). The compounds shown to have higher reactivity with alkyl radicals (via calculated reactivity indices) showed better fatigue resistance. This supports the suggestion that strong alkyl radical scavengers should also be effective antifatigue agents. Evidence based on calculation of reactivity indices suggests that the quinones examined react with alkyl radicals at the meta position of the quinone rings, producing phenoxyl radicals. The phenoxyl radicals are expected either to disproportionate, to recombine with a further alkyl radical, or to abstract a hydrogen from another alkyl radical, producing an olefine. The regeneration of quinones and formation of the corresponding phenols is expected to occur during the antifatigue activity. The phenol antioxidant HBA is expected to produce a quinonoid compound, and this is also expected to function in a similar way to the other quinones. Another phenol, GM, which is also known to scavenge alkyl radicals, showed good antifatigue performance. Tocopherols had effective antifatigue activity and are expected to have antifatigue mechanisms different from those of the other quinones; hence αTOC was examined for its mechanisms during rubber fatigue using HPLC analysis. Trimers of αTOC produced during vulcanisation are suggested to contribute to the antifatigue activity observed. The evidence suggests that the trimers regenerate αTOC, and a mechanism was proposed.
Although the antifatigue agents evaluated showed antifatigue activity, most of them had poor thermoxidative resistance; hence it was necessary to compensate for this by using antioxidants in combination with the antifatigue agents. Reactive antioxidants, which have the potential to graft onto polymer chains during reactive processing, were used for this purpose. APMA was the most effective of the reactive antioxidants evaluated. Although a high grafting ratio was achieved after optimisation of the grafting conditions, it is suggested that this was achieved by long branches of APMA arising from a large extent of polymerisation. This is expected to cause maldistribution of APMA, reducing the effect of CB-D activity (while CB-A activity showed clear advantages for grafting). Further optimisation of the grafting conditions is required in order to use APMA more effectively. Moreover, although synergistic effects between APMA and the antifatigue agents were expected, none of the evaluated antifatigue agents (BQ, αTOC, γTOC and TMQ) showed significant synergism in either fatigue or thermoxidative resistance; they performed merely as additives.
The effective use of implicit parallelism through the use of an object-oriented programming language
Abstract:
This thesis explores the translation of well-written sequential programs, in a subset of the Eiffel programming language and without syntactic or semantic extensions, into parallelised programs for execution on a distributed architecture. The main focus is on constructing two object-oriented models: a theoretical, self-contained model of concurrency, which enables a simplified second model for implementing the compilation process. There is a further presentation of principles that, if followed, maximise the potential levels of parallelism. Model of Concurrency. The concurrency model is designed to be a straightforward target onto which sequential programs are mapped, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour which enables easy incorporation of message interchange, locking, and synchronisation of objects. Further, the model is sufficient that a compiler can be, and has been, practically built. Model of Compilation. The compilation model's structure is based upon an object-oriented view of grammar descriptions and capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute-grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase. Programming Principles. The set of principles presented is based upon information hiding, sharing and containment of objects, and the division of methods on the basis of a command/query distinction. When followed, the level of potential parallelism within the presented concurrency model is maximised. Further, these principles arise naturally from good programming practice. Summary. In summary, this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: i.e.
no parallel primitives are added, and the parallel program is modelled to execute with semantics equivalent to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelism within the concurrency model.
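The command/query division the principles rely on can be sketched as follows (in Python rather than Eiffel, with an illustrative class of my own invention): queries return a value and leave state untouched, so calls to them may safely overlap; commands mutate state and return nothing, so they must be serialised per object.

```python
# Illustrative command/query separation (not Eiffel, and not code from the
# thesis): queries are side-effect free, commands mutate and return nothing.
class Account:
    def __init__(self, balance=0):
        self._balance = balance

    # Query: no side effects, so safe to evaluate concurrently.
    def balance(self):
        return self._balance

    # Command: mutates state and returns nothing, so it must be
    # synchronised on the target object in a parallel execution.
    def deposit(self, amount):
        self._balance += amount

acct = Account()
acct.deposit(100)
print(acct.balance())  # -> 100
```

A compiler that can rely on this division knows statically which calls need locking and which can proceed in parallel, which is why the principles raise the achievable level of parallelism.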
Abstract:
The main aim of this thesis is to investigate the application of methods of differential geometry to the constraint analysis of relativistic high spin field theories. As a starting point the coordinate dependent descriptions of the Lagrangian and Dirac-Bergmann constraint algorithms are reviewed for general second order systems. These two algorithms are then respectively employed to analyse the constraint structure of the massive spin-1 Proca field from the Lagrangian and Hamiltonian viewpoints. As an example of a coupled field theoretic system the constraint analysis of the massive Rarita-Schwinger spin-3/2 field coupled to an external electromagnetic field is then reviewed in terms of the coordinate dependent Dirac-Bergmann algorithm for first order systems. The standard Velo-Zwanziger and Johnson-Sudarshan inconsistencies that this coupled system seemingly suffers from are then discussed in light of this full constraint analysis and it is found that both these pathologies degenerate to a field-induced loss of degrees of freedom. A description of the geometrical version of the Dirac-Bergmann algorithm developed by Gotay, Nester and Hinds begins the geometrical examination of high spin field theories. This geometric constraint algorithm is then applied to the free Proca field and to two Proca field couplings; the first of which is the minimal coupling to an external electromagnetic field whilst the second is the coupling to an external symmetric tensor field. The onset of acausality in this latter coupled case is then considered in relation to the geometric constraint algorithm.
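For orientation, the Proca constraint chain produced by the Dirac-Bergmann algorithm can be sketched in its standard coordinate form (a textbook summary assuming metric signature (+,−,−,−), not the thesis's geometric treatment):

```latex
% Proca Lagrangian density and canonical momenta
\mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu}
              + \tfrac{1}{2}m^{2}A_{\mu}A^{\mu},
\qquad
\pi^{\mu} = \frac{\partial\mathcal{L}}{\partial\dot{A}_{\mu}} = F^{\mu 0}.
% Antisymmetry of F gives the primary constraint; demanding its
% preservation in time generates the secondary constraint:
\phi_{1} = \pi^{0} \approx 0,
\qquad
\phi_{2} = \partial_{i}\pi^{i} + m^{2}A^{0} \approx 0.
% The pair is second class,
\{\phi_{1}(\mathbf{x}),\,\phi_{2}(\mathbf{y})\}
    = -m^{2}\,\delta^{3}(\mathbf{x}-\mathbf{y}) \neq 0,
% so the algorithm terminates, leaving (8 - 2)/2 = 3 field degrees of
% freedom, as expected for a massive spin-1 field.
```

In the coupled Rarita-Schwinger case discussed above, it is precisely this counting that becomes field-dependent, which is how the Velo-Zwanziger and Johnson-Sudarshan pathologies appear as a field-induced loss of degrees of freedom.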
Abstract:
This thesis looks at two issues. Firstly, statistical work was undertaken examining profit margins, labour productivity and total factor productivity in telecommunications in ten member states of the EU over a 21-year period (not all member states of the EU could be included owing to data inadequacy). Three non-members, namely Switzerland, Japan and the US, were also included for comparison. This research was to provide an understanding of how telecoms in the European Union (EU) have developed. There are two propositions in this part of the thesis: (i) privatisation and market liberalisation improve performance; and (ii) countries that liberalised their telecoms sectors first show better productivity growth than countries that liberalised later. In sum, a mixed picture is revealed. Some countries performed better than others over time, but there is no apparent relationship between productivity performance and the two propositions. Some of the results from this part of the thesis were published in Dabler et al. (2002). Secondly, the remainder of the thesis tests the proposition that the telecoms directives of the European Commission created harmonised regulatory systems in the member states of the EU. By undertaking explanatory research, this thesis not only seeks to establish whether harmonisation has been achieved, but also tries to find an explanation as to why this is so. To accomplish this, as a first stage, a questionnaire survey was administered to the fifteen telecoms regulators in the EU. The purpose of the survey was to provide knowledge of the methods, rationales and approaches adopted by the regulatory offices across the EU. This allowed a decision as to whether harmonisation in telecoms regulation has been achieved. Stemming from the results of the questionnaire analysis, follow-up case studies with four telecoms regulators were undertaken in a second stage of this research.
The objective of these case studies was to take into account the country-specific circumstances of telecoms regulation in the EU. To undertake the case studies, several sources of evidence were combined. More specifically, the annual Implementation Reports of the European Commission were reviewed alongside the findings from the questionnaire. Then, interviews with senior members of staff in the four regulatory authorities were conducted. Finally, the evidence from the questionnaire survey and from the case studies was corroborated to provide an explanation as to why telecoms regulation in the EU has, or has not, reached a state of harmonisation. In addition to testing whether harmonisation has been achieved and why, this research has found evidence of different approaches to control over telecoms regulators and to market intervention administered by telecoms regulators within the EU. Regarding regulatory control, it was found that some member states have adopted a mainly proceduralist model, some have implemented more of a substantive model, and others have adopted a mix of the two. Some findings from the second stage of the research were published in Dabler and Parker (2004). Similarly, regarding market intervention by regulatory authorities, different member states treat market intervention differently, namely according to market-driven or non-market-driven models, or a mix of the two approaches.
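The two productivity measures used in the first part of the study can be sketched in their standard growth-accounting form (all input data below are invented for illustration; the thesis's actual index construction may differ):

```python
import math

# Illustrative sketch: labour productivity growth and a Tornqvist-style
# total factor productivity growth index between two periods. Output Q,
# labour L, capital K and the labour cost shares are made-up numbers.
def labour_productivity_growth(q0, l0, q1, l1):
    return math.log(q1 / l1) - math.log(q0 / l0)

def tfp_growth(q0, q1, l0, l1, k0, k1, s_l0, s_l1):
    s_l = 0.5 * (s_l0 + s_l1)   # average labour cost share
    s_k = 1.0 - s_l             # capital share, assuming constant returns
    return (math.log(q1 / q0)
            - s_l * math.log(l1 / l0)
            - s_k * math.log(k1 / k0))

print(labour_productivity_growth(100, 50, 130, 48))
print(tfp_growth(100, 130, 50, 48, 200, 230, 0.6, 0.55))
```

TFP growth nets out the growth of labour and capital inputs, which is why it can tell a different story from labour productivity alone when comparing liberalised and non-liberalised telecoms sectors.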
Abstract:
The concept of a task is fundamental to the discipline of ergonomics. Approaches to the analysis of tasks began in the early 1900s. These approaches have evolved and developed to the present day, when there is a vast array of methods available. Some of these methods are specific to particular contexts or applications; others are more general. However, whilst many of these analyses allow tasks to be examined in detail, they do not act as tools to aid the design process or the designer. The present thesis examines the use of task analysis in a process control context, and in particular its use to specify operator information and display requirements in such systems. The first part of the thesis examines the theoretical aspects of task analysis and presents a review of the methods, issues and concepts relating to it. A review of over 80 methods of task analysis was carried out to form a basis for the development of a task analysis method to specify operator information requirements in industrial process control contexts. Of the methods reviewed, Hierarchical Task Analysis was selected to provide such a basis and was developed to meet the criteria outlined for such a method of task analysis. The second section outlines the practical application and evolution of the developed task analysis method. Four case studies were used to examine the method in an empirical context. The case studies represent a range of plant contexts and types: complex and simple, batch and continuous, and high-risk and low-risk processes. The theoretical and empirical issues are drawn together, and a method is developed to provide a task analysis technique that specifies operator information requirements and provides the first stages of a tool to aid the design of VDU displays for process control.
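The core structure of a Hierarchical Task Analysis can be sketched as a simple goal tree (the task names and plan wording below are invented for illustration, not drawn from the thesis case studies): each goal decomposes into subtasks under a plan, and the bottom-level operations are where operator information requirements attach.

```python
# Minimal HTA representation: a tree of goals with plans, whose leaves are
# the operations a display must support. Task names are illustrative.
class Task:
    def __init__(self, name, plan=None, subtasks=None):
        self.name = name
        self.plan = plan            # e.g. "do 1 then 2; 3 on deviation"
        self.subtasks = subtasks or []

    def leaves(self):
        """Bottom-level operations, where information needs are specified."""
        if not self.subtasks:
            return [self.name]
        return [leaf for t in self.subtasks for leaf in t.leaves()]

hta = Task("control reactor temperature", plan="1 then 2; 3 on deviation",
           subtasks=[Task("monitor temperature trend"),
                     Task("adjust coolant flow"),
                     Task("diagnose deviation",
                          subtasks=[Task("check sensor readings"),
                                    Task("check valve positions")])])
print(hta.leaves())
```

Walking the leaves of such a tree and asking "what must the operator see to do this?" is, in essence, how a task analysis feeds a display specification.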
Abstract:
African Caribbean Owned Businesses (ACOBs) have been postulated as having performance-related problems, especially when compared with other ethnic minority groups in Britain. This research investigates whether ACOBs may be performing less well than similar firms in the population and why this may be so. The aspiration behind this study is therefore to ratify the existence of performance differentials between ACOBs and White and Asian Owned Businesses (WAOBs), using a triangulation of methods and matched-pair analysis. Every ACOB was matched along the firm-specific characteristics of age, size, legal form and industry (sector) with a similar WAOB. Findings show support for the hypothesis that ACOBs are more likely to perform less well than WAOBs; WAOBs out-performed ACOBs in both the objective and subjective assessments. We did, however, find some differentials between the two groups in the entrepreneurs' characteristics and in various emphases in strategic orientation within overall business strategy. The most likely drivers of performance differentials were found in firm activities and operations. ACOBs tended to have brands that were not as popular in the mainstream, with most of their manufactured goods being seen as 'exotic' while those of WAOBs were perceived as 'traditional'. Moreover, ACOBs had a higher proportion of clients consisting of individuals rather than business organisations, while WAOBs had a higher proportion consisting of business organisations.
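The matched-pair comparison at the heart of the design can be sketched as follows (the sales figures are invented for illustration; the thesis's actual measures and tests may differ): each ACOB is paired with its matched WAOB, and the paired differences in a performance measure are tested against zero.

```python
import math

# Illustrative matched-pair test: paired t statistic on performance
# differences between matched firms. All numbers are made up.
def paired_t(group_a, group_b):
    diffs = [a - b for a, b in zip(group_a, group_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)   # t with n - 1 degrees of freedom

acob_sales = [110, 95, 120, 80, 105, 90]   # matched pairs, same order
waob_sales = [130, 100, 140, 95, 125, 96]
print(f"t = {paired_t(acob_sales, waob_sales):.2f}")
```

The pairing on age, size, legal form and sector is what lets a significant negative t be read as a group difference rather than an artefact of firm composition.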
Abstract:
This work describes the fabrication of nanospheres from a range of novel polyhydroxyalkanoates, supplied by Monsanto (St Louis, Missouri, USA), for the delivery of selected actives of both pharmaceutical and agricultural interest. Initial evaluation of established microsphere and nanosphere fabrication techniques resulted in the adoption and optimisation of a double-sonication solvent evaporation method involving the Synperonic surfactant F68. Nanospheres could be consistently generated with this method. Studies on the incorporation and release of the surrogate protein bovine serum albumin (BSA, Fraction V) demonstrated that BSA could be loaded at between 10 and 40% w/w without nanosphere destabilisation. BSA release from nanospheres into Hanks Balanced Salts Solution, pH 7.4, could be monitored for up to 28 days at 37°C. The incorporation and release of the Monsanto actives, the insecticide Admire® ({1-[(6-chloro-3-pyridinyl)methyl]-N-nitro-2-imidazolidinimine}) and the potassium salt of the plant growth hormone gibberellic acid (GA3K), from physico-chemically characterised polymer nanospheres was monitored for up to 37 days and 28 days respectively, at both 4°C and 23°C. The release data were subsequently fitted to established kinetic models to elaborate the possible mechanisms of release of the actives from the nanospheres. The exposure of unloaded nanospheres to a range of physiological media and rural rainwater was used to investigate the role that polymer biodegradation by enzymatic and chemical means might play in in vivo release of actives and in agricultural applications. The potential environmental biodegradation of the Monsanto polymers was investigated using a composting study (International Standard ISO/FDIS 14855), in which the ultimate aerobic biodegradation of the polymers was monitored by analysis of the evolved carbon dioxide. These studies demonstrated the potential of the polymers for use in the environment, for example as a pesticide delivery system.
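The kinetic-model fitting step can be sketched for the most common candidate, the Higuchi model for diffusion-controlled release (the data points below are invented, not the thesis measurements): cumulative release Q(t) is regressed on Q = k·√t by a through-origin least-squares fit.

```python
import math

# Illustrative Higuchi fit: cumulative release Q = k * sqrt(t), fitted
# through the origin, with R^2 as goodness of fit. Data are made up.
def fit_higuchi(times_h, release_pct):
    roots = [math.sqrt(t) for t in times_h]
    k = (sum(r * q for r, q in zip(roots, release_pct))
         / sum(r * r for r in roots))
    mean_q = sum(release_pct) / len(release_pct)
    ss_res = sum((q - k * r) ** 2 for r, q in zip(roots, release_pct))
    ss_tot = sum((q - mean_q) ** 2 for q in release_pct)
    return k, 1.0 - ss_res / ss_tot

times = [1, 4, 9, 16, 25]                 # hours
release = [10.2, 19.8, 30.5, 40.1, 49.6]  # cumulative % released
k, r2 = fit_higuchi(times, release)
print(f"k = {k:.2f} %/sqrt(h), R^2 = {r2:.3f}")
```

Comparing R² across candidate models (zero-order, first-order, Higuchi, and so on) is the usual basis for inferring the dominant release mechanism.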
Abstract:
The development of accurate and sensitive analytical methods to measure the level of biomarkers, such as 8-oxo-guanine or its corresponding nucleoside, 8-oxo-2'-deoxyguanosine, has become imperative in the study of DNA oxidative damage in vivo. Of the most promising techniques, HPLC-MS/MS has many attractive advantages. Like any method that employs MS, its accuracy depends on the use of multiply isotopically-labelled internal standards. This project is aimed at making such internal standards available. The first task was to synthesise the multiply isotopically-labelled bases (M+4) guanine and (M+4) 8-oxo-guanine. Synthetic routes for both (M+4) guanine and (M+4) 8-oxo-guanine were designed and validated using the unlabelled compounds. The reaction conditions were also optimised during these "dry runs". The amination of 4-hydroxy-2,6-dichloropyrimidine appeared to be very sensitive to the purity of the commercial [15N]benzylamine reagent. Having failed, after several attempts, to obtain the pure reagent from commercial suppliers, we successfully synthesised [15N]benzylamine in our laboratory and used it in the first synthesis of (M+4) guanine. Although (M+4) bases can be, and indeed have been, used as internal standards in the quantitative analysis of oxidative damage, they cannot account for the errors that may occur during the early sample preparation stages. Therefore, internal standards in the form of nucleosides and DNA oligomers are more desirable. After evaluating a number of methods, an enzymatic transglycosylation technique was adopted for the transfer of the labelled bases to give their corresponding nucleosides. Both (M+4) 2'-deoxyguanosine and (M+4) 8-oxo-2'-deoxyguanosine can be purified on the micro scale by HPLC. The challenge came from the purification of larger-scale (>50 mg) syntheses of the nucleosides.
A gel filtration method was successfully developed, which resulted in excellent separation of (M+4) 2'-deoxyguanosine from the incubation mixture. The (M+4) 2'-deoxyguanosine was then fully protected in three steps and successfully incorporated, by solid-supported synthesis, into a DNA oligomer containing 18 residues. Thus, the synthesis of 8-oxo-2'-deoxyguanosine on a larger scale, for future incorporation into DNA oligomers, is now a possibility resulting from this work. We believe that these internal standards can be used to develop procedures that make the measurement of oxidative DNA damage more accurate and sensitive.
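The way such a labelled standard corrects for sample-preparation losses can be sketched in one line of isotope-dilution arithmetic (the peak areas and spike amount below are invented for illustration): because the (M+4) standard co-elutes with the analyte and is lost at the same rate, the analyte amount follows directly from the ratio of the two MS peak areas.

```python
# Illustrative isotope-dilution quantitation: the labelled internal
# standard is spiked at a known amount before work-up, and the analyte
# amount follows from the analyte/standard peak-area ratio.
# All numeric values are made up.
def quantify(area_analyte, area_std, amount_std_pmol, response_ratio=1.0):
    """response_ratio corrects for any analyte/standard response difference."""
    return (area_analyte / area_std) * amount_std_pmol / response_ratio

# 8-oxo-dG peak vs its (M+4) internal standard spiked at 50 pmol
print(quantify(area_analyte=12400, area_std=31000, amount_std_pmol=50.0))
```

Spiking the standard as a nucleoside, or within an oligomer, simply moves this correction earlier in the workflow, which is exactly the motivation the abstract gives for preferring those forms over free bases.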