849 results for Integration of methods


Relevance:

90.00%

Publisher:

Abstract:

The integration of a microprocessor and a medium-power stepper motor in one control system brings together two quite different disciplines. Various methods of interfacing are examined and the problems involved in both hardware and software manipulation are investigated. Microprocessor open-loop control of the stepper motor is considered. The possible advantages of microprocessor closed-loop control are examined and the development of a system is detailed. The system uses position feedback to initiate each motor step. Results of the dynamic response of the system are presented and its performance discussed. Applications of the static torque characteristic of the stepper motor are considered, followed by a review of methods of predicting the characteristic. This shows that accurate results are possible only when the effects of magnetic saturation are avoided or when the machine is available for magnetic circuit tests to be carried out. A new method of predicting the static torque characteristic is explained in detail. The method described uses the machine geometry and the magnetic characteristics of the types of iron used in the machine. From this information the permeance of each iron component of the machine is calculated and, by using the equivalent magnetic circuit of the machine, the total torque produced is predicted. It is shown how this new method is implemented on a digital computer and how the model may be used to investigate further aspects of the stepper motor in addition to the static torque.
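As an illustration of the equivalent-magnetic-circuit approach described above, the sketch below computes static torque as the angular derivative of magnetic co-energy for a linear (unsaturated) permeance network; the permeance values, MMF and angular variation are hypothetical, and the thesis's actual model additionally uses the machine geometry and iron characteristics, including saturation.

```python
import numpy as np

# Hypothetical illustration: torque from an equivalent magnetic circuit.
# p_iron and gap_permeance(theta) are permeances (Wb/A); mmf is the coil MMF (A-turns).

def series_permeance(permeances):
    """Combine series permeances: reluctances (1/P) add."""
    return 1.0 / sum(1.0 / p for p in permeances)

def coenergy(theta, mmf, p_iron, gap_permeance):
    """Magnetic co-energy W' = 0.5 * P_total(theta) * F^2 for a linear circuit."""
    p_total = series_permeance([p_iron, gap_permeance(theta)])
    return 0.5 * p_total * mmf**2

def static_torque(theta, mmf, p_iron, gap_permeance, dtheta=1e-5):
    """Static torque T = dW'/dtheta at constant MMF (numerical derivative)."""
    w_lo = coenergy(theta - dtheta, mmf, p_iron, gap_permeance)
    w_hi = coenergy(theta + dtheta, mmf, p_iron, gap_permeance)
    return (w_hi - w_lo) / (2.0 * dtheta)

# Example: air-gap permeance varying sinusoidally over one tooth pitch (illustrative values).
gap = lambda th: 2e-6 * (1.0 + 0.5 * np.cos(4 * th))
print(static_torque(theta=0.2, mmf=500.0, p_iron=5e-6, gap_permeance=gap))
```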

Relevance:

90.00%

Publisher:

Abstract:

This thesis examines the mechanisms of wear occurring to the video head and their effect on signal reproduction. In particular, it examines the wear occurring to manganese-zinc ferrite heads in sliding contact with iron oxide media. A literature survey is presented, which covers magnetic recording technologies, focusing on video recording. Existing work on wear of magnetic heads is also examined, and gaps in the theoretical account of wear mechanisms presented in the literature are identified. Pilot research was carried out on the signal degradation and wear associated with a number of commercial video tapes containing a range of head cleaning agents (HCAs). From this pilot research, the main body of the research was identified. A number of methods of wear measurement were examined for use in this project. Knoop diamond indentation was chosen because experimentation showed it to be capable of measuring wear occurring in situ. This technique was then used to examine the wear associated with different levels of Al2O3 and Cr2O3 head cleaning agents. The results of the research indicated that, whilst wear of the video head increases linearly with increasing HCA content, signal degradation does not vary significantly. The most significant differences in wear and signal reproduction were observed between the two HCAs: the signal degradation of heads worn with tape samples containing Al2O3 HCA was found to be lower than that of heads worn with tapes containing Cr2O3 HCA. The results also indicate that the wear to the head is an abrasive process characterised by ploughing of the ferrite surface and chipping of the edges of the head gap. Both phenomena appear to be caused by poor iron oxide and head cleaning particles, which create isolated asperities on the tape surface.
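The in-situ wear measurement exploits the fixed geometry of the Knoop indenter, whose depth is approximately 1/30.5 of its long diagonal, so the depth of material removed can be inferred from how much a pre-placed indentation shortens. A minimal sketch of that calculation, with illustrative numbers rather than measured data, follows.

```python
# Hypothetical sketch of wear measurement via Knoop indentation shrinkage.
# As the surface wears away, the remaining indentation shortens, and the lost
# depth follows from the change in the measured long diagonal.

KNOOP_DEPTH_RATIO = 30.5  # long diagonal / depth for a standard Knoop indenter

def wear_depth_um(diagonal_before_um, diagonal_after_um):
    """Depth of material removed (micrometres) between two diagonal readings."""
    if diagonal_after_um > diagonal_before_um:
        raise ValueError("indentation cannot grow as the surface wears")
    return (diagonal_before_um - diagonal_after_um) / KNOOP_DEPTH_RATIO

# Example: a 120 um diagonal shrinking to 95 um implies ~0.82 um of head wear.
print(round(wear_depth_um(120.0, 95.0), 2))
```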

Relevance:

90.00%

Publisher:

Abstract:

A variety of methods have been reviewed for obtaining parallel or perpendicular alignment in liquid-crystal cells. Some of these methods have been selected and developed and were used in polarised spectroscopy, dielectric and electro-optic studies. Also, novel dielectric and electro-optic cells were constructed for use over a range of temperature. The dielectric response of thin layers of E7 and E8 (eutectic mixture liquid crystals) has been measured in the frequency range 12 Hz-100 kHz and over a range of temperature (183-337 K). Dielectric spectra were also obtained for supercooled E7 and E8 in the Hz and kHz range. When the measuring electric field was parallel to the nematic director, one loss peak (low-frequency relaxation process) was observed for E7 and for E8, which exhibits Debye-type behaviour in the supercooled systems. When the measuring electric field was perpendicular to the nematic director, two resolved dielectric processes were observed. The phase transitions, effective molecular polarisabilities, anisotropy of polarisabilities and order parameters of three liquid crystal homologues (5CB, 6CB and 7CB), 6OCB and three eutectic nematic mixtures (E7, E8 and E607) were calculated using optical and density data measured at several temperatures. The order parameters calculated using the different methods of Vuks, Neugebauer, Maier-Saupe and Palffy-Muhoray are nearly the same for the liquid crystals considered in the present study. Also, the interrelationship between density, refractive index and the molecular structure of these liquid crystals was established. Accurate dielectric and dipole results for a range of liquid-crystal-forming molecules at several temperatures have been reported. The role of the cyano end group, biphenyl core and flexible tail in molecular association was investigated using the dielectric method for some molecules which have a structural relationship to the nematogens. Analysis of the dielectric data for solutions of the liquid crystals indicated a high degree of molecular association, comparable to that observed in the nematic or isotropic phases. The electro-optic Kerr effect was investigated for some alkyl cyanobiphenyls, their nematic mixtures and the eutectic mixture liquid crystals E7 and E8 in the isotropic phase and in solution. The Kerr constant of these liquid crystals was found to be very high at the nematic-isotropic transition temperatures, as the molecules are expected to be highly ordered close to the phase transition temperatures. Dynamic Kerr effect behaviour and transient molecular reorientation were also observed in thin layers of some alkyl cyanobiphenyls. The dichroic ratio R and order parameters of solutions containing some azo and anthraquinone dyes in the nematic solvents (E7 and E8) were investigated by measurement of the intensity of the absorption bands in the visible region of parallel-aligned samples. The factors affecting the dichroic ratio of the dyes dissolved in the nematic solvents were determined and discussed.
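For context, the Vuks method mentioned above relates the order parameter to the measured refractive indices through an isotropic local-field model. The sketch below shows that calculation with illustrative 5CB-like indices and an assumed polarisability anisotropy ratio (in practice obtained separately, for example by a Haller-type extrapolation); it is not taken from the thesis.

```python
# Hedged sketch of the Vuks route to the order parameter from refractive indices.
# Vuks' isotropic local-field relation gives
#   S * (delta_alpha / mean_alpha) = (n_e^2 - n_o^2) / (<n^2> - 1),
# with <n^2> = (n_e^2 + 2 n_o^2) / 3. Extracting S itself requires an estimate
# of delta_alpha / mean_alpha for perfect order (here simply assumed).

def vuks_scaled_order_parameter(n_e, n_o):
    mean_n2 = (n_e**2 + 2.0 * n_o**2) / 3.0
    return (n_e**2 - n_o**2) / (mean_n2 - 1.0)

def order_parameter(n_e, n_o, dalpha_over_alpha):
    return vuks_scaled_order_parameter(n_e, n_o) / dalpha_over_alpha

# Example with illustrative 5CB-like values and an assumed anisotropy ratio of 0.6.
print(round(order_parameter(1.71, 1.53, 0.60), 2))  # -> ~0.63
```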

Relevance:

90.00%

Publisher:

Abstract:

This thesis explores translating well-written sequential programs in a subset of the Eiffel programming language - without syntactic or semantic extensions - into parallelised programs for execution on a distributed architecture. The main focus is on constructing two object-oriented models: a theoretical, self-contained model of concurrency, which enables a simplified second model for implementing the compiling process. There is a further presentation of principles that, if followed, maximise the potential levels of parallelism. Model of Concurrency. The concurrency model is designed to be a straightforward target onto which sequential programs can be mapped, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour which enables easy incorporation of message interchange, locking and synchronisation of objects. Further, the model is sufficiently complete that a compiler can be, and has been, practically built. Model of Compilation. The compilation model's structure is based upon an object-oriented view of grammar descriptions and capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute-grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase. Programming Principles. The set of principles presented is based upon information hiding, sharing and containment of objects, and the dividing up of methods on the basis of a command/query division. When followed, the level of potential parallelism within the presented concurrency model is maximised. Further, these principles arise naturally from good programming practice. Summary. In summary, this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: i.e. no parallel primitives are added, and the parallel program is modelled to execute with semantics equivalent to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelisation within the concurrency model.
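To make the command/query division concrete, here is a minimal sketch, written in Python rather than the thesis's Eiffel subset and not drawn from the thesis itself, of a class whose methods are split into side-effect-free queries and value-free commands; under such a split a concurrency model can safely overlap queries on an object while serialising its commands.

```python
# Hedged illustration of the command/query division: queries return values
# without changing state, commands change state without returning values.

class Account:
    def __init__(self, balance: int = 0) -> None:
        self._balance = balance

    # --- queries: read-only, may run concurrently ---
    def balance(self) -> int:
        return self._balance

    def can_withdraw(self, amount: int) -> bool:
        return 0 < amount <= self._balance

    # --- commands: mutate state, must be serialised per object ---
    def deposit(self, amount: int) -> None:
        self._balance += amount

    def withdraw(self, amount: int) -> None:
        if not self.can_withdraw(amount):
            raise ValueError("insufficient funds")
        self._balance -= amount

acct = Account(100)
acct.deposit(50)          # command
print(acct.balance())     # query -> 150
```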

Relevance:

90.00%

Publisher:

Abstract:

The main aim of this thesis is to investigate the application of methods of differential geometry to the constraint analysis of relativistic high spin field theories. As a starting point the coordinate dependent descriptions of the Lagrangian and Dirac-Bergmann constraint algorithms are reviewed for general second order systems. These two algorithms are then respectively employed to analyse the constraint structure of the massive spin-1 Proca field from the Lagrangian and Hamiltonian viewpoints. As an example of a coupled field theoretic system the constraint analysis of the massive Rarita-Schwinger spin-3/2 field coupled to an external electromagnetic field is then reviewed in terms of the coordinate dependent Dirac-Bergmann algorithm for first order systems. The standard Velo-Zwanziger and Johnson-Sudarshan inconsistencies that this coupled system seemingly suffers from are then discussed in light of this full constraint analysis and it is found that both these pathologies degenerate to a field-induced loss of degrees of freedom. A description of the geometrical version of the Dirac-Bergmann algorithm developed by Gotay, Nester and Hinds begins the geometrical examination of high spin field theories. This geometric constraint algorithm is then applied to the free Proca field and to two Proca field couplings; the first of which is the minimal coupling to an external electromagnetic field whilst the second is the coupling to an external symmetric tensor field. The onset of acausality in this latter coupled case is then considered in relation to the geometric constraint algorithm.
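For reference, the constraint structure that the Dirac-Bergmann analysis recovers for the free massive spin-1 Proca field is the standard one sketched below (metric signature (+,-,-,-) assumed; overall signs depend on conventions).

```latex
% Standard free-field Proca constraint structure, sketched for reference.
\begin{align}
  \mathcal{L} &= -\tfrac{1}{4} F_{\mu\nu}F^{\mu\nu}
                 + \tfrac{1}{2} m^{2} A_{\mu}A^{\mu},
  \qquad \pi^{\mu} = \frac{\partial \mathcal{L}}{\partial \dot{A}_{\mu}} \\
  \phi_{1} &= \pi^{0} \approx 0
  && \text{(primary constraint)} \\
  \phi_{2} &= \partial_{i}\pi^{i} + m^{2} A_{0} \approx 0
  && \text{(secondary, from } \dot{\phi}_{1} \approx 0) \\
  \{\phi_{1}(\mathbf{x}), \phi_{2}(\mathbf{y})\} &= -m^{2}\,\delta^{3}(\mathbf{x}-\mathbf{y}) \neq 0
  && (m \neq 0),
\end{align}
% so both constraints are second class and the massive field propagates
% (8 - 2)/2 = 3 degrees of freedom, as expected for spin 1.
```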

Relevance:

90.00%

Publisher:

Abstract:

This thesis looks at two issues. Firstly, statistical work was undertaken examining profit margins, labour productivity and total factor productivity in telecommunications in ten member states of the EU over a 21-year period (not all member states of the EU could be included, owing to data inadequacy). Three non-members, namely Switzerland, Japan and the US, were also included for comparison. This research was intended to provide an understanding of how telecoms in the European Union (EU) have developed. There are two propositions in this part of the thesis: (i) privatisation and market liberalisation improve performance; and (ii) countries that liberalised their telecoms sectors first show better productivity growth than countries that liberalised later. In sum, a mixed picture is revealed. Some countries performed better than others over time, but there is no apparent relationship between productivity performance and the two propositions. Some of the results from this part of the thesis were published in Dabler et al. (2002). Secondly, the remainder of the thesis tests the proposition that the telecoms directives of the European Commission created harmonised regulatory systems in the member states of the EU. By undertaking explanatory research, this thesis not only seeks to establish whether harmonisation has been achieved, but also tries to find an explanation as to why this is so. To accomplish this, as a first stage a questionnaire survey was administered to the fifteen telecoms regulators in the EU. The purpose of the survey was to provide knowledge of the methods, rationales and approaches adopted by the regulatory offices across the EU. This allowed a decision to be made as to whether harmonisation in telecoms regulation has been achieved. Stemming from the results of the questionnaire analysis, follow-up case studies with four telecoms regulators were undertaken in a second stage of this research. The objective of these case studies was to take into account the country-specific circumstances of telecoms regulation in the EU. To undertake the case studies, several sources of evidence were combined. More specifically, the annual Implementation Reports of the European Commission were reviewed, alongside the findings from the questionnaire. Then, interviews with senior members of staff in the four regulatory authorities were conducted. Finally, the evidence from the questionnaire survey and from the case studies was corroborated to provide an explanation as to why telecoms regulation in the EU has or has not reached a state of harmonisation. In addition to testing whether harmonisation has been achieved and why, this research has found evidence of different approaches to control over telecoms regulators and to market intervention administered by telecoms regulators within the EU. Regarding regulatory control, it was found that some member states have adopted mainly a proceduralist model, some have implemented more of a substantive model, and others have adopted a mix of both. Some findings from the second stage of the research were published in Dabler and Parker (2004). Similarly, regarding market intervention by regulatory authorities, different member states treat market intervention differently, namely according to market-driven or non-market-driven models, or a mix of both approaches.
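As a reminder of what a total factor productivity comparison involves, a generic growth-accounting sketch (not the thesis's actual estimation) takes TFP growth as output growth net of the cost-share-weighted growth of inputs:

```python
# Hedged, generic growth-accounting sketch of total factor productivity (TFP):
# TFP growth = output growth - share-weighted growth of labour and capital.

def tfp_growth(output_growth, labour_growth, capital_growth, labour_share):
    """All growth rates as decimals, e.g. 0.03 for 3% a year."""
    capital_share = 1.0 - labour_share
    return output_growth - (labour_share * labour_growth
                            + capital_share * capital_growth)

# Example: 5% output growth, 1% labour growth, 4% capital growth, labour share 0.6.
print(round(tfp_growth(0.05, 0.01, 0.04, 0.6), 4))  # -> 0.028, i.e. 2.8% TFP growth
```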

Relevance:

90.00%

Publisher:

Abstract:

The concept of a task is fundamental to the discipline of ergonomics. Approaches to the analysis of tasks began in the early 1900s. These approaches have evolved and developed to the present day, when there is a vast array of methods available. Some of these methods are specific to particular contexts or applications; others are more general. However, whilst many of these analyses allow tasks to be examined in detail, they do not act as tools to aid the design process or the designer. The present thesis examines the use of task analysis in a process control context, and in particular the use of task analysis to specify operator information and display requirements in such systems. The first part of the thesis examines the theoretical aspects of task analysis and presents a review of the methods, issues and concepts relating to task analysis. A review of over 80 methods of task analysis was carried out to form a basis for the development of a task analysis method to specify operator information requirements in industrial process control contexts. Of the methods reviewed, Hierarchical Task Analysis was selected to provide such a basis and was developed to meet the criteria outlined for such a method of task analysis. The second section outlines the practical application and evolution of the developed task analysis method. Four case studies were used to examine the method in an empirical context. The case studies represent a range of plant contexts and types: complex and simple, batch and continuous, and high-risk and low-risk processes. The theoretical and empirical issues are drawn together and a method developed to provide a task analysis technique to specify operator information requirements and to provide the first stages of a tool to aid the design of VDU displays for process control.
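A minimal sketch of the kind of hierarchical task representation involved, with hypothetical task names and not the developed method itself, shows how operator information requirements can be read off a task hierarchy once each task records the information needed to perform it:

```python
# Hedged sketch of a Hierarchical Task Analysis (HTA) structure: each task
# carries sub-tasks, a plan describing how they are ordered, and the operator
# information needed, so display requirements can be collected from the tree.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    plan: str = ""                                   # e.g. "do 1, then 2"
    info_needs: list[str] = field(default_factory=list)
    subtasks: list["Task"] = field(default_factory=list)

def information_requirements(task: Task) -> list[str]:
    """Collect operator information requirements from the whole hierarchy."""
    needs = list(task.info_needs)
    for sub in task.subtasks:
        needs.extend(information_requirements(sub))
    return needs

# Hypothetical example for a process start-up task.
start_up = Task(
    "Start up reactor",
    plan="Do 1, then 2 when flow is stable",
    subtasks=[
        Task("Establish coolant flow", info_needs=["coolant flow rate", "pump status"]),
        Task("Raise temperature to set-point", info_needs=["reactor temperature", "set-point"]),
    ],
)
print(information_requirements(start_up))
```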

Relevance:

90.00%

Publisher:

Abstract:

African Caribbean Owned Businesses (ACOBs) have been postulated as having performance-related problems, especially when compared with other ethnic minority groups in Britain. This research investigates whether ACOBs may be performing less well than similar firms in the population and why this may be so. Therefore the aspiration behind this study is one of confirming whether performance differentials exist between ACOBs and White Asian Owned Businesses (WAOBs), by using a triangulation of methods and matched pair analysis. Every ACOB was matched along the firm-specific characteristics of age, size, legal form and industry (sector) with a similar WAOB. Findings show support for the hypothesis that ACOBs are more likely to perform less well than WAOBs; WAOBs out-performed ACOBs in both the objective and subjective assessments, although some differences were found between the two groups in the entrepreneurs' characteristics and in the emphases of their overall business strategies. The most likely drivers of performance differentials were found in firm activities and operations. ACOBs tended to have brands that were not as popular in the mainstream, with most of their manufactured goods being seen as 'exotic' while those made by WAOBs were perceived as 'traditional'. Moreover, ACOBs had a higher proportion of clients consisting of individuals rather than business organisations, while WAOBs had a higher proportion consisting of business organisations.
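A generic sketch of the matched-pair comparison idea, with purely illustrative data rather than the study's results, is shown below: each ACOB is paired with a comparable WAOB and a paired non-parametric test asks whether performance differs systematically.

```python
# Hedged, illustrative matched-pair comparison using a Wilcoxon signed-rank test.
# The scores below are invented stand-ins for a performance measure.

from scipy.stats import wilcoxon

acob_scores = [2.1, 3.4, 1.8, 4.0, 2.9, 3.1, 2.5, 3.8]  # hypothetical
waob_scores = [3.0, 3.9, 2.6, 4.4, 3.5, 3.0, 3.2, 4.1]  # matched partners

stat, p_value = wilcoxon(acob_scores, waob_scores)  # paired, non-parametric
print(f"Wilcoxon statistic = {stat:.2f}, p = {p_value:.3f}")
```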

Relevance:

90.00%

Publisher:

Abstract:

Development of accurate and sensitive analytical methods to measure the level of biomarkers, such as 8-oxo-guanine or its corresponding nucleoside, 8-oxo-2'-deoxyguanosine, has become imperative in the study of DNA oxidative damage in vivo. Among the most promising techniques, HPLC-MS/MS has many attractive advantages. Like any method that employs MS, its accuracy depends on the use of multiply isotopically-labelled internal standards. This project is aimed at making such internal standards available. The first task was to synthesise the multiply isotopically-labelled bases (M+4) guanine and (M+4) 8-oxo-guanine. Synthetic routes for both (M+4) guanine and (M+4) 8-oxo-guanine were designed and validated using the unlabelled compounds. The reaction conditions were also optimised during these “dry runs”. The amination of 4-hydroxy-2,6-dichloropyrimidine appeared to be very sensitive to the purity of the commercial [15N]benzylamine reagent. Having failed, after several attempts, to obtain the pure reagent from commercial suppliers, [15N]benzylamine was successfully synthesised in our laboratory and used in the first synthesis of (M+4) guanine. Although (M+4) bases can be, and indeed have been, used as internal standards in the quantitative analysis of oxidative damage, they cannot account for the errors that may occur during the early sample preparation stages. Therefore, internal standards in the form of nucleosides and DNA oligomers are more desirable. After evaluating a number of methods, an enzymatic transglycosylation technique was adopted for the transfer of the labelled bases to give their corresponding nucleosides. Both (M+4) 2'-deoxyguanosine and (M+4) 8-oxo-2'-deoxyguanosine can be purified on the micro scale by HPLC. The challenge came from the purification of larger-scale (>50 mg) syntheses of the nucleosides. A gel filtration method was successfully developed, which resulted in excellent separation of (M+4) 2'-deoxyguanosine from the incubation mixture. The (M+4) 2'-deoxyguanosine was then fully protected in three steps and successfully incorporated, by solid-supported synthesis, into a DNA oligomer containing 18 residues. Thus, synthesis of 8-oxo-2'-deoxyguanosine on a larger scale for its future incorporation into DNA oligomers is now a possibility resulting from this thesis work. We believe that these internal standards can be used to develop procedures that make the measurement of oxidative DNA damage more accurate and sensitive.
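For context, the isotope-dilution calculation that such (M+4) internal standards enable in HPLC-MS/MS is sketched below with hypothetical values: a known amount of labelled standard is spiked in before sample work-up, and the analyte amount follows from the peak-area ratio, assuming equal MS response for labelled and unlabelled forms.

```python
# Hedged sketch of isotope-dilution quantification with a labelled internal standard.

def isotope_dilution_amount(area_analyte, area_internal_std, spiked_amount_pmol,
                            response_ratio=1.0):
    """Amount of analyte (pmol), assuming the stated response ratio between forms."""
    return (area_analyte / area_internal_std) * spiked_amount_pmol / response_ratio

# Example: analyte peak half the size of a 0.5 pmol (M+4) standard spike.
print(isotope_dilution_amount(1.2e5, 2.4e5, 0.5))  # -> 0.25 pmol
```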

Relevance:

90.00%

Publisher:

Abstract:

Objectives: We explored the perceptions, views and experiences of diabetes education in people with type 2 diabetes who were participating in a UK randomized controlled trial of methods of education. The intervention arm of the trial was based on DESMOND, a structured programme of group education sessions aimed at enabling self-management of diabetes, while the standard arm was usual care from general practices. Methods: Individual semi-structured interviews were conducted with 36 adult patients, of whom 19 had attended DESMOND education sessions and 17 had been randomized to receive usual care. Data analysis was based on the constant comparative method. Results: Four principal orientations towards diabetes and its management were identified: 'resisters', 'identity resisters, consequence accepters', 'identity accepters, consequence resisters' and 'accepters'. Participants offered varying accounts of the degree of personal responsibility that needed to be assumed in response to the diagnosis. Preferences for different styles of education were also expressed, with many reporting that they enjoyed and benefited from group education, although some reported ambivalence or disappointment with their experiences of education. It was difficult to identify striking thematic differences between the accounts of people on different arms of the trial, although there was some very tentative evidence that those who attended DESMOND were more accepting of a changed identity and its implications for their management of diabetes. Discussion: No single approach to education is likely to suit all people newly diagnosed with diabetes, although structured group education may suit many. This paper identifies varying orientations and preferences of people with diabetes towards forms of both education and self-management, which should be taken into account when planning approaches to education.

Relevance:

90.00%

Publisher:

Abstract:

This thesis presents the results from an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis will be to prove that much more effective and powerful analysis of MEG can be achieved if one were to assume the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
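As one example of the nonlinear tools alluded to above, the sketch below performs a time-delay embedding of a single-channel series, reconstructing a state-space trajectory from scalar observations before dynamical or information-theoretic measures are applied; the signal is synthetic, not MEG data.

```python
# Hedged sketch of time-delay embedding for single-channel, unaveraged data.

import numpy as np

def delay_embed(x, dim, tau):
    """Return an (N, dim) array of delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this embedding")
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Toy series standing in for a single-channel recording.
t = np.linspace(0, 10, 2000)
signal = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)
embedded = delay_embed(signal, dim=3, tau=20)
print(embedded.shape)  # (1960, 3)
```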

Relevance:

90.00%

Publisher:

Abstract:

The occurrence of spalling is a major factor in determining the fire resistance of concrete constructions. The apparently random occurrence of spalling has limited the development and application of fire resistance modelling for concrete structures. This thesis describes an experimental investigation into the spalling of concrete on exposure to elevated temperatures. It has been shown that spalling may be categorised into four distinct types: aggregate spalling, corner spalling, surface spalling and explosive spalling. Aggregate spalling has been found to be a form of shear failure of aggregates local to the heated surface. The susceptibility of any particular concrete to aggregate spalling can be quantified from parameters which include the coefficients of thermal expansion of both the aggregate and the surrounding mortar, the size and thermal diffusivity of the aggregate, and the rate of heating. Corner spalling, which is particularly significant for the fire resistance of concrete columns, is a result of concrete losing its tensile strength at elevated temperatures. Surface spalling is the result of excessive pore pressures within heated concrete. An empirical model has been developed to allow quantification of the pore pressures, and a material failure model has been proposed. The dominant parameters are the rate of heating, pore saturation and concrete permeability. Surface spalling may be alleviated by limiting pore pressure development, and a number of methods to this end have been evaluated. Explosive spalling involves the catastrophic failure of a concrete element and may be caused by either of two distinct mechanisms. In the first instance, excessive pore pressures can cause explosive spalling, although the effect is limited principally to unloaded or relatively small specimens. A second cause of explosive spalling is where the superimposition of thermally induced stresses on applied load stresses exceeds the concrete's strength.
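A generic illustration of the thermal-mismatch reasoning behind aggregate spalling (not the thesis's empirical pore-pressure model) is sketched below: a difference in thermal expansion between aggregate and mortar generates a local stress of roughly E times the expansion mismatch times the temperature rise.

```python
# Hedged, generic sketch of elastic thermal-mismatch stress between aggregate
# and mortar; ignores creep, cracking and restraint factors.

def mismatch_stress_mpa(e_modulus_gpa, alpha_aggregate, alpha_mortar, delta_t_c):
    """Approximate mismatch stress (MPa) for a temperature rise delta_t_c (deg C)."""
    return e_modulus_gpa * 1e3 * abs(alpha_aggregate - alpha_mortar) * delta_t_c

# Example: E = 40 GPa, expansion coefficients 12e-6 and 8e-6 per deg C, 300 C rise.
print(round(mismatch_stress_mpa(40, 12e-6, 8e-6, 300), 1))  # -> ~48 MPa
```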

Relevance:

90.00%

Publisher:

Abstract:

This thesis describes an investigation of methods by which both repetitive and non-repetitive electrical transients in an HVDC converter station may be controlled for minimum overall cost. Several methods of inrush control are proposed and studied. The preferred method, whose development is reported in this thesis, would utilise two magnetic materials, one of which is assumed to be lossless while the other has controlled eddy-current losses. Mathematical studies are performed to assess the optimum characteristics of these materials, such that inrush current is suitably controlled for a minimum saturation flux requirement. Subsequent evaluation of the cost of hardware and capitalised losses of the proposed inrush control indicates that a cost reduction of approximately 50% is achieved in comparison with the inrush control hardware of the Sellindge converter station. Further mathematical studies are carried out to prove the adequacy of the proposed inrush control characteristics for controlling voltage and current transients during both repetitive and non-repetitive operating conditions. The results of these proving studies indicate that no change in the proposed characteristics is required to ensure that the integrity of the thyristors is maintained.
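As a generic sketch of the sizing idea behind a saturable inrush-control reactor (not the thesis's design method), the core must provide enough flux swing to support the applied voltage for the required hold-off time before it saturates, i.e. the flux swing equals the volt-second integral divided by the number of turns:

```python
# Hedged, generic volt-second sizing sketch for a saturable reactor core.

import numpy as np

def required_flux_swing_wb(voltage_wave, dt, turns):
    """Flux swing (Wb) needed to hold off voltage_wave for its duration."""
    volt_seconds = np.trapz(voltage_wave, dx=dt)
    return volt_seconds / turns

# Example: hold off a 50 kV step for ~20 microseconds with a 10-turn winding.
dt = 1e-7
v = np.full(200, 50e3)  # 200 samples at 0.1 us spacing
print(required_flux_swing_wb(v, dt, turns=10))  # -> ~0.1 Wb
```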

Relevance:

90.00%

Publisher:

Abstract:

The present scarcity of operational knowledge-based systems (KBS) has been attributed, in part, to an inadequate consideration shown to user interface design during development. From a human factors perspective the problem has stemmed from an overall lack of user-centred design principles. Consequently, the integration of human factors principles and techniques is seen as a necessary and important precursor to ensuring the implementation of KBS which are useful to, and usable by, the end-users for whom they are intended. Focusing upon KBS work taking place within commercial and industrial environments, this research set out to assess both the extent to which human factors support was presently being utilised within development, and the future path for human factors integration. The assessment consisted of interviews conducted with a number of commercial and industrial organisations involved in KBS development, and a set of three detailed case studies of individual KBS projects. Two of the studies were carried out within a collaborative Alvey project involving the Interdisciplinary Higher Degrees Scheme (IHD) at the University of Aston in Birmingham, BIS Applied Systems Ltd (BIS) and the British Steel Corporation. This project, which had provided the initial basis and funding for the research, was concerned with the application of KBS to the design of commercial data processing (DP) systems. The third study stemmed from involvement in a KBS project being carried out by the Technology Division of the Trustee Savings Bank Group plc. The preliminary research highlighted poor human factors integration. In particular, there was a lack of early consideration of end-user requirements definition and user-centred evaluation; instead, concentration was given to the construction of the knowledge base and prototype evaluation with the expert(s). In response to this identified problem, a set of methods was developed that aimed to encourage developers to consider user interface requirements early on in a project. These methods were then applied in the two further projects, and their uptake within the overall development process was monitored. Experience from the two studies demonstrated that early consideration of user interface requirements was both feasible and instructive for guiding future development work. In particular, it was shown that a user interface prototype could be used as a basis for capturing requirements at the functional (task) level and at the interface dialogue level. Extrapolating from this experience, a KBS life-cycle model is proposed which incorporates user interface design (and, within that, user evaluation) as a largely parallel, rather than subsequent, activity to knowledge base construction. Further to this, there is a discussion of several key elements which can be seen as inhibiting the integration of human factors within KBS development. These elements stem from characteristics of present KBS development practice, from constraints within the commercial and industrial development environments, and from the state of existing human factors support.

Relevance:

90.00%

Publisher:

Abstract:

Drying is an important unit operation in the process industries. Results have suggested that the energy used for drying increased from 12% of the total energy used in 1978 to 18% in 1990. A literature survey of previous studies of overall drying energy consumption has demonstrated that there is little continuity of methods, and energy trends could not be established. In the ceramics, timber and paper industrial sectors, specific energy consumption and energy trends have been investigated by auditing drying equipment. Ceramic products examined have included tableware, tiles, sanitaryware, electrical ceramics, plasterboard, refractories, bricks and abrasives. Data from industry have shown that drying energy has not varied significantly in the ceramics sector over the last decade, representing about 31% of the total energy consumed. Information from the timber industry has established that radical changes have occurred over the last 20 years, both in terms of equipment and energy utilisation. The energy efficiency of hardwood drying has improved by 15% since the 1970s, although no significant savings have been realised for softwood. A survey estimating the energy efficiency and operating characteristics of 192 paper dryer sections has been conducted. Drying energy was found to have increased to nearly 60% of the total energy used in the early 1980s, but has fallen over the last decade, representing 23% of the total in 1993. These results demonstrate that effective energy saving measures, such as improved pressing and heat recovery, have been successfully implemented since the 1970s. Artificial neural networks have been successfully applied to model the process characteristics of microwave and convective drying of paper-coated gypsum cove. Parameters modelled have included product moisture loss, core gypsum temperature and quality factors relating to paper burning and bubbling defects. Evaluation of thermal and dielectric properties has highlighted gypsum's heat-sensitive characteristics in convective and electromagnetic regimes. Modelling experimental data has shown that the networks were capable of simulating drying process characteristics to a high degree of accuracy. Product weight and temperature were predicted to within 0.5% and 5°C of the target data respectively. Furthermore, it was demonstrated that the underlying properties of the data could be predicted even with a high level of input noise.
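A minimal sketch of the kind of neural-network process model described above is given below, using synthetic data in place of the thesis's measurements: a small multilayer perceptron maps assumed drying conditions to product responses standing in for moisture loss and core temperature.

```python
# Hedged sketch of a neural-network drying model on synthetic data.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Inputs: [microwave power (kW), air temperature (C), drying time (min)] - assumed ranges.
X = rng.uniform([1.0, 100.0, 5.0], [10.0, 250.0, 60.0], size=(500, 3))
# Toy targets standing in for [moisture loss (%), core temperature (C)].
y = np.column_stack([
    0.4 * X[:, 0] + 0.05 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.5, 500),
    30 + 2.0 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(0, 2.0, 500),
])

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
model.fit(X, y)
print(model.predict([[5.0, 180.0, 30.0]]))  # predicted [moisture loss, core temperature]
```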