952 results for "Thermodynamic parameter"
Abstract:
Training Mixture Density Network (MDN) configurations within the NETLAB framework is time-consuming because of the way the error function and its gradient are computed. By optimising the computation of these functions so that gradient information is computed in parameter space, training time is decreased by at least a factor of sixty for the example given. The reduced training time widens the spectrum of problems to which MDNs can be practically applied, making the MDN framework an attractive method for the applied problem solver.
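The speed-up described comes from evaluating the mixture error function over all data points and kernels at once rather than element by element. NETLAB itself is a MATLAB toolbox; the NumPy sketch below only illustrates the vectorisation idea for a one-dimensional target, with hypothetical array shapes, alongside the naive loop it replaces:

```python
import numpy as np

def mdn_nll(alpha, mu, sigma, t):
    """Negative log-likelihood of a 1-D Gaussian mixture density network output.

    alpha: (N, K) mixing coefficients, mu: (N, K) kernel centres,
    sigma: (N, K) kernel widths, t: (N,) targets.
    Fully vectorised: all N points and K kernels are handled in one pass.
    """
    # Gaussian kernel phi_k(t | x) for every point/kernel pair
    phi = np.exp(-0.5 * ((t[:, None] - mu) / sigma) ** 2) / (
        np.sqrt(2.0 * np.pi) * sigma)
    # E = -sum_n log sum_k alpha_nk * phi_nk
    return -np.sum(np.log(np.sum(alpha * phi, axis=1)))

def mdn_nll_loop(alpha, mu, sigma, t):
    """Reference per-point loop implementation (what vectorisation replaces)."""
    total = 0.0
    for n in range(len(t)):
        mix = 0.0
        for k in range(alpha.shape[1]):
            phi = np.exp(-0.5 * ((t[n] - mu[n, k]) / sigma[n, k]) ** 2) / (
                np.sqrt(2.0 * np.pi) * sigma[n, k])
            mix += alpha[n, k] * phi
        total -= np.log(mix)
    return total
```

Both functions return identical values; the vectorised form simply trades the Python-level double loop for array operations, which is the same trade the abstract reports for MATLAB.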
Inventory parameter management and focused continuous improvement for repetitive batch manufacturers
Abstract:
This thesis proposes a methodology to assist repetitive batch manufacturers in adopting certain aspects of Lean Production principles. The methodology concentrates on reducing inventory by setting appropriate batch sizes, taking account of the effect of sequence-dependent set-ups and the identification and elimination of bottlenecks. It uses a simple Pareto and modified-EBQ-based analysis technique to allocate items to period order day classes based on a combination of each item's annual usage value and set-up cost. The period order day classes to which items are allocated are determined by the constraint limits in three measured dimensions: capacity, administration and finance. The methodology overcomes the limitations of MRP in the area of sequence-dependent set-ups, and provides a simple way of setting planning parameters that takes this effect into account: inventory is reduced through the systematic identification and elimination of bottlenecks via set-up reduction, allowing batch sizes to fall. It aims to help traditional repetitive batch manufacturers on a route to continual improvement by: highlighting those areas where change would bring the greatest benefits; modelling the effect of proposed changes; quantifying the benefits that could be gained by implementing the proposed changes; and simplifying the effort required to perform the modelling process. The approach thereby increases flexibility through managed inventory reduction, and was realised through the development of a software modelling tool, validated through a case study approach.
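The classification step described above can be sketched with the classic economic batch quantity formula, EBQ = sqrt(2DS/H), converted into days of cover and rounded up to an allowed class. This is a minimal sketch, not the thesis's tool: the item data, working-day count and class boundaries below are all hypothetical, and it ignores the sequence-dependent set-up and constraint-dimension logic the thesis adds.

```python
import math

WORKING_DAYS = 240           # assumed working days per year
CLASSES = [1, 2, 5, 10, 20]  # hypothetical allowed period order day classes

def ebq(demand, setup_cost, holding_cost):
    """Classic economic batch quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2.0 * demand * setup_cost / holding_cost)

def period_order_days(demand, setup_cost, holding_cost):
    """Convert EBQ into days of cover, then round up to the nearest class.

    Items whose EBQ exceeds the largest class are capped at that class.
    """
    days = ebq(demand, setup_cost, holding_cost) / (demand / WORKING_DAYS)
    return next((c for c in CLASSES if c >= days), CLASSES[-1])

# Hypothetical items: (annual demand, set-up cost, unit holding cost)
items = {"A": (24000, 0.5, 5.0), "B": (12000, 5.0, 2.0), "C": (2400, 30.0, 1.0)}
for name, (d, s, h) in items.items():
    print(name, period_order_days(d, s, h))
```

High-usage, cheap-set-up items land in short-period classes (daily ordering), while expensive-set-up items drift towards the longer classes, which is the Pareto intuition the methodology builds on.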
Abstract:
We compare the Q parameter obtained from scalar, semi-analytical and full vector models for realistic transmission systems. One set of systems is operated in the linear regime, while another uses solitons at high peak power. We report in detail on the different results obtained for the same system using different models. Polarisation mode dispersion is also taken into account, and a novel method for averaging Q parameters over several independent simulation runs is described. © 2006 Elsevier B.V. All rights reserved.
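For readers outside optical communications: whichever model produces the received signal, the Q parameter itself is the standard eye-statistics quantity Q = |μ1 − μ0| / (σ1 + σ0), from which a Gaussian-statistics bit-error-rate estimate follows. A minimal sketch (illustrative, not any of the specific models compared here):

```python
import numpy as np
from math import erfc, sqrt

def q_factor(ones, zeros):
    """Q parameter from sampled mark/space levels at the decision instant."""
    mu1, mu0 = np.mean(ones), np.mean(zeros)
    s1, s0 = np.std(ones), np.std(zeros)
    return abs(mu1 - mu0) / (s1 + s0)

def ber_from_q(q):
    """Gaussian-statistics estimate: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * erfc(q / sqrt(2.0))
```

The often-quoted benchmark Q = 6 corresponds to a BER of roughly 1e-9 under this estimate.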
Abstract:
We compare the Q parameter obtained from the semi-analytical model with scalar and vector models for two realistic transmission systems: first, a linear system with a compensated dispersion map, and second, a soliton transmission system.
Abstract:
The Q parameter scales differently with the noise power for the signal-noise and the noise-noise beating terms in scalar and vector models. Some procedures for including noise in the scalar model largely under-estimate the Q parameter. We propose a simple method for including noise within a scalar model which will allow both the noise-noise dominated limit and the signal-noise dominated limit to be treated consistently. © 2005 Elsevier B.V. All rights reserved.
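The different scaling can be made concrete: the signal-noise beat variance grows linearly with the noise power while the noise-noise beat variance grows quadratically, so Q falls as 1/sqrt(P_ASE) in one limit and as 1/P_ASE in the other. The toy model below (illustrative normalisation constants, not the paper's formulation) reproduces exactly that behaviour:

```python
def q_with_beat_noise(p_sig, p_ase, k_sig_ase=1.0, k_ase_ase=1.0):
    """Toy Q estimate including both beating terms (arbitrary units).

    Marks carry signal-ASE beat noise (linear in p_ase) plus ASE-ASE
    beat noise (quadratic in p_ase); spaces carry only the latter.
    """
    var1 = k_sig_ase * p_sig * p_ase + k_ase_ase * p_ase ** 2  # marks
    var0 = k_ase_ase * p_ase ** 2                              # spaces
    return p_sig / (var1 ** 0.5 + var0 ** 0.5)
```

In the signal-noise dominated limit, quadrupling the noise power halves Q; in the noise-noise dominated limit, it divides Q by four. A noise-inclusion procedure that gets one beating term wrong therefore misestimates Q by a power of the noise, which is the inconsistency the proposed method avoids.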
Abstract:
The Q parameter scales differently with the noise power for the signal-noise and the noise-noise beating terms in scalar and vector models. Some procedures for including noise in the scalar model largely under-estimate the Q parameter.
Abstract:
A fluidized bed process development unit of 0.8 m internal diameter was designed on the basis of results obtained from a bench-scale laboratory unit; empirical models from the literature were used for the scale-up. The process development unit and peripheral equipment were constructed, assembled and commissioned, and instruments were provided for data acquisition. The fluidization characteristics of the reactor were determined and compared to the design data. An experimental programme was then carried out and mass and energy balances were made for all the runs. The results showed that the most important independent experimental parameter was the air factor, with an optimum at 0.3. The optimum higher heating value of the gas produced was 6.5 MJ/Nm³, while the thermal efficiency was 70%. Reasonably good agreement was found between the experimental results, theoretical results from a thermodynamic model, and data from the literature. The attainment of steady state was found to be very sensitive to a continuous and constant feedstock flowrate, since the slightest variation in feed flow resulted in fluctuations in gas quality. On the basis of the results, a set of empirical relationships was developed, which constitutes an empirical model for predicting the performance of fluidized bed gasifiers. This empirical model was supplemented by a design procedure by which fluidized bed gasifiers can be designed and constructed. The design procedure was then extended to cover feedstock feeding and gas cleaning in a conceptual design of a fluidized bed gasification facility. The conceptual design was finally used to perform an economic evaluation of the proposed gasification facility; the economics of this plant (a retrofit application) were favourable.
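The two headline quantities can be pinned down for readers unfamiliar with gasification. The air factor is the ratio of actual air supplied to the stoichiometric air requirement, and the thermal (cold gas) efficiency is the chemical energy in the product gas divided by the energy in the feedstock. A minimal sketch, where the stoichiometric air requirement and feedstock heating value are assumed illustrative figures, not values from this thesis:

```python
def air_factor(air_flow_kg_h, fuel_flow_kg_h, stoich_air_kg_per_kg_fuel):
    """Air factor (equivalence ratio): actual air / stoichiometric air."""
    return air_flow_kg_h / (fuel_flow_kg_h * stoich_air_kg_per_kg_fuel)

def cold_gas_efficiency(gas_flow_nm3_h, gas_hhv_mj_nm3,
                        fuel_flow_kg_h, fuel_hhv_mj_kg):
    """Thermal efficiency: energy leaving in the gas / energy entering as feed."""
    return (gas_flow_nm3_h * gas_hhv_mj_nm3) / (fuel_flow_kg_h * fuel_hhv_mj_kg)
```

For example, with an assumed stoichiometric requirement of 6 kg air per kg of feed, supplying 1.8 kg/h of air per 1 kg/h of feed gives the reported optimum air factor of 0.3.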
Abstract:
A study on heat pump thermodynamic characteristics has been made in the laboratory on a specially designed and instrumented air to water heat pump system. The design, using refrigerant R12, was based on the requirement to produce domestic hot water at a temperature of about 50 °C and was assembled in the laboratory. All the experimental data were fed to a microcomputer and stored on disk automatically from appropriate transducers via amplifier and 16 channel analogue to digital converters. The measurements taken were R12 pressures and temperatures, water and R12 mass flow rates, air speed, fan and compressor input powers, water and air inlet and outlet temperatures, wet and dry bulb temperatures. The time interval between the observations could be varied. The results showed, as expected, that the COP was higher at higher air inlet temperatures and at lower hot water output temperatures. The optimum air speed was found to be at a speed when the fan input power was about 4% of the condenser heat output. It was also found that the hot water can be produced at a temperature higher than the appropriate R12 condensing temperature corresponding to condensing pressure. This was achieved by condenser design to take advantage of discharge superheat and by further heating the water using heat recovery from the compressor. Of the input power to the compressor, typically about 85% was transferred to the refrigerant, 50 % by the compression work and 35% due to the heating of the refrigerant by the cylinder wall, and the remaining 15% (of the input power) was rejected to the cooling medium. The evaporator effectiveness was found to be about 75% and sensitive to the air speed. Using the data collected, a steady state computer model was developed. 
For given input conditions (air inlet temperature, air speed, degree of suction superheat, and water inlet and outlet temperatures), the model is capable of predicting the refrigerant cycle, compressor efficiency, evaporator effectiveness, condenser water flow rate and system COP.
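The COP reported throughout the study is the heating coefficient of performance, condenser heat output per unit compressor input power, and it is usefully compared against the ideal Carnot limit between the condensing and evaporating temperatures. A minimal sketch (generic definitions, not the thesis's steady-state model):

```python
def cop_heating(q_condenser_kw, w_compressor_kw):
    """Heating COP: useful condenser heat output per unit compressor input."""
    return q_condenser_kw / w_compressor_kw

def carnot_cop_heating(t_cond_c, t_evap_c):
    """Ideal (Carnot) heating COP between condensing and evaporating temps."""
    t_cond = t_cond_c + 273.15  # convert to kelvin
    t_evap = t_evap_c + 273.15
    return t_cond / (t_cond - t_evap)
```

The Carnot form makes the observed trends unsurprising: COP rises as the air (source) inlet temperature rises or the hot-water (sink) temperature falls, because the lift t_cond − t_evap shrinks.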
Abstract:
Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
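Case (i), the non-parametric moment estimate, can be sketched directly: the extremum position is estimated as the intensity-weighted centroid of the sampled resonance curve. The sketch below is illustrative only (synthetic Lorentzian data around an assumed ~10.8 GHz Brillouin shift; the crude baseline removal is a stand-in for whatever preprocessing the real apparatus uses):

```python
import numpy as np

def lorentzian(f, f0, gamma, a):
    """Lorentzian resonance curve: centre f0, half-width gamma, amplitude a."""
    return a * gamma ** 2 / ((f - f0) ** 2 + gamma ** 2)

def centroid_estimate(freqs, intensities):
    """Moment-based (centroid) estimate of the extremum position."""
    w = intensities - intensities.min()  # crude baseline removal
    return np.sum(freqs * w) / np.sum(w)

# Synthetic, noiseless scan over a fixed frequency grid (GHz)
freqs = np.linspace(10.6, 11.0, 81)
scan = lorentzian(freqs, 10.8, 0.03, 1.0)
```

On a symmetric, noiseless scan the centroid recovers the centre exactly; the abstract's contribution is quantifying how errors in the noisy intensities propagate into this estimate, and into the fitted parameters of case (ii).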
Abstract:
A novel technique for simultaneous and independent measurement of dual parameters is proposed and experimentally demonstrated. The length of a single fibre Bragg grating (FBG) is divided into two parts. The temperature variation and another measurand can be measured independently and simultaneously, and the thermal effect can be eliminated with great ease.
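The standard recipe behind such dual-parameter FBG schemes is that the two grating sections shift in wavelength with different sensitivities to temperature and to the second measurand, so the two shifts form a 2x2 linear system that can be inverted. A minimal sketch with hypothetical sensitivity coefficients (not values from this work):

```python
import numpy as np

# Hypothetical sensitivity matrix: rows = grating sections,
# columns = (temperature, strain); units e.g. pm/degC and pm/microstrain.
# d_lambda_i = K[i,0] * dT + K[i,1] * d_eps
K = np.array([[10.0, 1.2],
              [13.0, 0.9]])

def recover_measurands(d_lambda):
    """Invert the 2x2 sensitivity matrix to separate the two measurands."""
    return np.linalg.solve(K, np.asarray(d_lambda, dtype=float))
```

The scheme only works when the matrix is well conditioned, i.e. the two sections respond with sufficiently different sensitivity ratios; that is what dividing one FBG into two distinct parts provides.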
Abstract:
This dissertation investigates the important and current problem of modelling human expertise, an issue that arises in any computer system emulating human decision making. It is prominent in Clinical Decision Support Systems (CDSS) because of the complexity of the induction process and the vast number of parameters involved in most cases. Other issues, such as human error and missing or incomplete data, present further challenges. In this thesis, the Galatean Risk Screening Tool (GRiST) is used as an example of modelling clinical expertise and parameter elicitation. The tool is a mental health clinical record management system with a top layer of decision support capabilities, currently being deployed by several NHS mental health trusts across the UK. The aim of the research is to investigate the problem of parameter elicitation by inducing parameters from real clinical data rather than from the human experts who provided the decision model. The induced parameters provide insight into both the data relationships and how experts make decisions. The outcomes help further the understanding of human decision making and, in particular, help GRiST provide more accurate emulations of risk judgements. Although the algorithms and methods presented in this dissertation are applied to GRiST, they can be adopted for other human knowledge engineering domains.