68 results for Dynamic Modelling And Simulation
Abstract:
We consider an inversion-based neurocontroller for solving control problems of uncertain nonlinear systems. Classical approaches do not use uncertainty information in the neural network models. In this paper we show how knowledge of this uncertainty can be exploited to advantage by developing a novel robust inverse control method. Simulations on an uncertain nonlinear second-order system illustrate the approach.
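The inversion idea can be sketched in a few lines. The example below uses a hypothetical first-order nonlinear plant whose model is known exactly, so the inverse is analytic; the paper instead inverts a neural-network model of an uncertain plant.

```python
import math

# Inversion-based control sketch on a hypothetical plant (not from the paper):
#   x[k+1] = 0.5*x[k] + tanh(u[k])

def plant(x, u):
    return 0.5 * x + math.tanh(u)

def inverse_control(x, x_ref):
    # Solve x_ref = 0.5*x + tanh(u) for u; clipping keeps atanh in its domain.
    v = max(-0.99, min(0.99, x_ref - 0.5 * x))
    return math.atanh(v)

x, x_ref = 0.0, 0.8
for _ in range(20):
    u = inverse_control(x, x_ref)
    x = plant(x, u)
# With an exact inverse the state reaches the reference in one step; with a
# neural-network inverse, model uncertainty would leave a residual error,
# which is what the robust method is designed to handle.
```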
Abstract:
Purpose: Meibomian-derived lipid secretions are well characterised but their subsequent fate in the ocular environment is less well understood. Phospholipids are thought to facilitate the interface between aqueous and lipid layers of the tear film and to be involved in ocular lubrication processes. We have extended our previous studies on phospholipid levels in the tear film to encompass the fate of polar and non-polar lipids in progressive accumulation and aging processes on both conventional and silicone-modified hydrogel lenses. This is an important aspect of the developing understanding of the role of lipids in the clinical performance of silicone hydrogels. Method: Several techniques were used to identify lipids in the tear film. Mass-spectrometric methods included Agilent 1100-based liquid chromatography coupled to mass spectrometry (LCMS) and Perkin Elmer gas chromatography mass spectrometry (GCMS). Thin layer chromatography (TLC) was used for separation of lipids on the basis of increasing solvent polarity. Routine assay of lipid extractions from patient-worn lenses was carried out using a Hewlett Packard 1090 liquid chromatograph coupled to both UV and Agilent 1100 fluorescence detection. A range of histological, optical, and electron microscope techniques was used in deposit analysis. Results: Progressive lipid uptake was assessed in various ways, including: composition changes with wear time, differential lipid penetration into the lens matrix and, particularly, the extent to which lipids become unextractable as a function of wear time. Solvent-based separation and HPLC gave consistent results indicating that the polarity of lipid classes decreased as follows: phospholipids/fatty acids > triglycerides > cholesterol/cholesteryl esters. Tear lipids were found to show autofluorescence, which underpinned the value of fluorescence microscopy and fluorescence detection coupled with HPLC separation.
The most fluorescent lipids were found to be cholesteryl esters; histological techniques coupled with fluorescence microscopy indicated that white spots ("jelly bumps") formed on silicone hydrogel lenses contain a high proportion of cholesteryl esters. Lipid profiles averaged for 30 symptomatic and 30 asymptomatic contact lens wearers were compiled. Peak classes were split into: cholesterol (C), cholesteryl esters (CE), glycerides (G), polar fatty acids/phospholipids (PL). The lipid ratio for asymptomatic/symptomatic was 0.6 ± 0.1 for all classes except one: the cholesterol ratio was 0.2 ± 0.05. Significantly, the PL ratio was no different from that of any other class except cholesterol. Chromatography indicated that lipid polarity decreased with depth of penetration and that lipid extractability decreased with wear time. Conclusions: Meibomian lipid composition differs from that in the tear film and on worn lenses. Although the same broad lipid classes were obtained by extraction from all lenses and all patients studied, quantities vary with wear and material. Lipid extractability diminishes with wear time regardless of the use of cleaning regimes. Dry eye symptoms in contact lens wear are frequently linked to lipid layer behaviour but seem to relate more to total lipid than to specific composition. Understanding the detail of lipid-related processes is an important element of improving the clinical performance of materials and care solutions.
Abstract:
The potential for the use of DEA and simulation in a mutually supporting role in guiding operating units to improved performance is presented. An analysis following a three-stage process is suggested. Stage one involves obtaining the data for the DEA analysis. This can be sourced from historical data, simulated data or a combination of the two. Stage two involves the DEA analysis that identifies benchmark operating units. In the third stage simulation can now be used in order to offer practical guidance to operating units towards improved performance. This can be achieved by the use of sensitivity analysis of the benchmark unit using a simulation model to offer direct support as to the feasibility and efficiency of any variations in operating practices to be tested. Alternatively, the simulation can be used as a mechanism to transmit the practices of the benchmark unit to weaker performing units by building a simulation model of the weaker unit to the process design of the benchmark unit. The model can then compare performance of the current and benchmark process designs. Quantifying improvement in this way provides a useful driver to any process change initiative that is required to bring the performance of weaker units up to the best in class. © 2005 Operational Research Society Ltd. All rights reserved.
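The stage-two DEA step can be illustrated in miniature. With a single input and a single output per operating unit, the CCR efficiency score reduces to each unit's output/input ratio relative to the best ratio observed; the units and figures below are hypothetical.

```python
# Minimal DEA sketch: hypothetical operating units as (input, output) pairs.
# With one input and one output, CCR efficiency = (out/in) / best(out/in).
units = {"A": (2.0, 2.0), "B": (4.0, 2.0), "C": (8.0, 6.0)}

best_ratio = max(out / inp for inp, out in units.values())

def efficiency(name):
    inp, out = units[name]
    return (out / inp) / best_ratio

for name in units:
    print(name, efficiency(name))  # A: 1.0 (benchmark), B: 0.5, C: 0.75
```

Unit A is the benchmark here; stage three would then build a simulation model of the weaker units B and C to A's process design and compare performance.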
Abstract:
Benchmarking techniques have evolved over the years since Xerox's pioneering visits to Japan in the late 1970s. The focus of benchmarking has also shifted during this period. Tracing in detail the evolution of benchmarking in one specific area of business activity, supply and distribution management, as seen by the participants in that evolution, creates a picture of a movement from single-function, cost-focused, competitive benchmarking, through cross-functional, cross-sectoral, value-oriented benchmarking, to process benchmarking. As process efficiency and effectiveness become the primary foci of benchmarking activities, the measurement parameters used to benchmark performance converge with the factors used in business process modelling. The possibility is therefore emerging of modelling business processes and then feeding the models with actual data from benchmarking exercises. This would overcome the most common criticism of benchmarking, namely that it intrinsically lacks the ability to move beyond current best practice. In fact the combined power of modelling and benchmarking may prove to be the basic building block of informed business process re-engineering.
Abstract:
This work reports the development of a mathematical model and a distributed, multivariable computer control system for a pilot-plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed and the time-domain model transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain due to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained have tracked well the experimental results of the plant. A distributed computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application has revealed a limiting condition: the plant matrix should be positive-definite along the infinite frequency axis. A new multivariable control theory has emerged from this study, which avoids the above limitation. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling a designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method treats directly the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures. One disadvantage that offsets these advantages to some degree, however, is the relatively complicated algebra that must be employed in working out all but the simplest problems.
Mathematical algorithms and computer software have been developed to treat some of the mathematical operations defined over the integral domain, such as matrix fraction description, spectral factorization, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf-type controllers and of similar algebraic design methods can be solved easily.
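As a scalar illustration of the polynomial algebra involved (the thesis manipulates polynomial matrices), the Bezout identity a·x + b·y = gcd(a, b) can be computed with the extended Euclidean algorithm over exact rational coefficients:

```python
from fractions import Fraction

# Polynomials as coefficient lists, lowest degree first, over the rationals.

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def degree(p):
    p = trim(p)
    return -1 if p == [Fraction(0)] else len(p) - 1

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return trim([a - b for a, b in zip(p, q)])

def mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, c in enumerate(p):
        for j, d in enumerate(q):
            out[i + j] += c * d
    return trim(out)

def divmod_poly(a, b):
    # Polynomial long division: returns (quotient, remainder).
    a, b = trim(list(a)), trim(b)
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while degree(a) >= degree(b) >= 0:
        shift = degree(a) - degree(b)
        coef = a[-1] / b[-1]
        q[shift] = coef
        a = sub(a, mul([Fraction(0)] * shift + [coef], b))
    return trim(q), a

def bezout(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    r0, r1 = trim(a), trim(b)
    x0, x1 = [Fraction(1)], [Fraction(0)]
    y0, y1 = [Fraction(0)], [Fraction(1)]
    while degree(r1) >= 0:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        x0, x1 = x1, sub(x0, mul(q, x1))
        y0, y1 = y1, sub(y0, mul(q, y1))
    return r0, x0, y0

a = [Fraction(0), Fraction(0), Fraction(1)]  # s^2
b = [Fraction(1), Fraction(1)]               # s + 1
g, x, y = bezout(a, b)
# s^2 and s + 1 are coprime, so g is a nonzero constant and
# s^2 * x(s) + (s + 1) * y(s) = g.
```

The matrix-valued versions used in the thesis (matrix fraction descriptions, spectral factorization) build on exactly this kind of exact polynomial arithmetic.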
Abstract:
The purpose of the work described here has been to seek methods of narrowing the present gap between currently realised heat pump performance and the theoretical limit. The single most important prerequisite to this objective is the identification and quantitative assessment of the various non-idealities and degradative phenomena responsible for the present shortfall. The use of availability analysis has been introduced as a diagnostic tool, and applied to a few very simple, highly idealised Rankine cycle optimisation problems. From this work, it has been demonstrated that the scope for improvement through optimisation is small in comparison with the extensive potential for improvement by reducing the compressor's losses. A fully instrumented heat pump was assembled and extensively tested. This furnished performance data and led to an improved understanding of the system's behaviour. From a very simple analysis of the resulting compressor performance data, confirmation of the compressor's low efficiency was obtained. In addition, in order to obtain experimental data concerning specific details of the heat pump's operation, several novel experiments were performed. The experimental work was concluded with a set of tests which attempted to obtain definitive performance data for a small set of discrete operating conditions. These tests included an investigation of the effect of two compressor modifications. The resulting performance data were analysed by a sophisticated calculation which used the measurements to quantify each degradative phenomenon occurring in the compressor, and so indicate where the greatest potential for improvement lies. Finally, in the light of everything that was learnt, specific technical suggestions have been made to reduce the losses associated with both the refrigerant circuit and the compressor.
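The availability viewpoint can be made concrete with a short second-law efficiency calculation; the temperatures and the COP figure below are illustrative, not measurements from the thesis.

```python
# Second-law (availability) efficiency of a heat pump: actual COP divided by
# the Carnot COP between source and sink temperatures.
T_sink = 323.15    # K, heat delivery temperature (50 C), assumed
T_source = 273.15  # K, ambient heat source (0 C), assumed

cop_carnot = T_sink / (T_sink - T_source)  # theoretical limit
cop_actual = 2.5                           # hypothetical measured COP

second_law_eff = cop_actual / cop_carnot
# The shortfall (1 - second_law_eff) aggregates every degradative phenomenon;
# the thesis attributes the bulk of it to compressor losses.
print(round(cop_carnot, 2), round(second_law_eff, 2))
```

Availability analysis refines this aggregate figure by attributing portions of the lost work to individual components and processes.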
Abstract:
Experimental investigations and computer modelling studies have been made on the refrigerant-water counterflow condenser section of a small air-to-water heat pump. The main objective of the investigation was a comparative study between the computer modelling predictions and the experimental observations for a range of operating conditions, but other characteristics of a counterflow heat exchanger are also discussed. The counterflow condenser consisted of 15 metres of a thermally coupled pair of copper pipes, one containing the R12 working fluid and the other water flowing in the opposite direction. This condenser was mounted horizontally and folded into 0.5 metre straight sections. Thermocouples were inserted in both pipes at one metre intervals and transducers for pressure and flow measurement were also included. Data acquisition, storage and analysis were carried out by a micro-computer suitably interfaced with the transducers and thermocouples. Many sets of readings were taken under a variety of conditions, with air temperature ranging from 18 to 26 degrees Celsius, water inlet from 13.5 to 21.7 degrees, R12 inlet temperature from 61.2 to 81.7 degrees and water mass flow rate from 6.7 to 32.9 grammes per second. A Fortran computer model of the condenser (originally prepared by Carrington [1]) has been modified to match the information available from experimental work. This program uses iterative segmental integration over the desuperheating, mixed-phase and subcooled regions for the R12 working fluid, the water always being in the liquid phase. Methods of estimating the inlet and exit fluid conditions from the available experimental data have been developed for application to the model.
Temperature profiles and other parameters have been predicted and compared with experimental values for the condenser for a range of evaporator conditions and have shown that the model gives a satisfactory prediction of the physical behaviour of a simple counterflow heat exchanger in both single phase and two phase regions.
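The segmental integration approach can be sketched for the mixed-phase region, where the R12 is assumed to condense at a constant saturation temperature so that only the water temperature evolves from segment to segment. The heat-transfer coefficient, area, and flow values below are illustrative assumptions, not the experimental figures.

```python
import math

# Segmental integration over the condensing region of a counterflow condenser.
T_sat = 70.0    # C, assumed constant R12 condensing temperature
T_in = 15.0     # C, water inlet to this region
m_dot = 0.02    # kg/s, water mass flow rate
cp = 4180.0     # J/(kg K), water specific heat
U = 800.0       # W/(m2 K), assumed overall heat-transfer coefficient
A_total = 0.3   # m2, assumed area of the condensing region
n = 100         # number of segments

A_seg = A_total / n
T_w = T_in
for _ in range(n):
    q = U * A_seg * (T_sat - T_w)  # heat transferred in this segment, W
    T_w += q / (m_dot * cp)        # water-side energy balance

# Closed-form check for this constant-T_sat special case:
T_exact = T_sat - (T_sat - T_in) * math.exp(-U * A_total / (m_dot * cp))
print(round(T_w, 2), round(T_exact, 2))
```

The full model marches in the same segment-by-segment fashion, but with phase-dependent property and heat-transfer correlations in the desuperheating, mixed-phase and subcooled regions.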
Abstract:
Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. Techniques for identifying the parameters of linear dynamic models using vibration test data have attracted considerable interest recently. However, no method has received general acceptance, owing to a number of difficulties. These difficulties are mainly due to (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data, and (iv) inaccuracy in the model structure. This thesis reports on a new approach to update the parameters of a finite element model as well as a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and the incomplete set of eigen-data is measured. The parameters are then identified by iterative updating of the initial estimates, by sensitivity analysis, using eigenvalues or both eigenvalues and eigenvectors of the structure before and after perturbation. It is shown that with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least-squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness is also determined using the frequency response data of the unmodified structure by a structural modification technique. Thus, mass or stiffness do not have to be added physically.
The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.
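The sensitivity-based updating step can be sketched on a two-degree-of-freedom spring-mass chain with unit masses, identifying two spring stiffnesses from "measured" eigenvalues. This sketch omits the thesis's mass/stiffness-addition perturbation and Bayesian weighting, and all numbers are illustrative.

```python
import math

def eig_sym2(a, b, c):
    """Eigenvalues (ascending) and unit eigenvectors of [[a, b], [b, c]]."""
    mean, r = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    lams = (mean - r, mean + r)
    vecs = []
    for lam in lams:
        v = (-b, a - lam) if abs(b) + abs(a - lam) > 1e-12 else (1.0, 0.0)
        n = math.hypot(*v)
        vecs.append((v[0] / n, v[1] / n))
    return lams, vecs

def stiffness(k1, k2):
    # K = [[k1 + k2, -k2], [-k2, k2]] for a fixed-free two-spring chain
    return (k1 + k2, -k2, k2)

k_true = (2.0, 1.5)
lam_meas, _ = eig_sym2(*stiffness(*k_true))  # stand-in for measured data

k = [1.5, 1.0]                               # initial parameter estimates
for _ in range(20):
    lams, vecs = eig_sym2(*stiffness(k[0], k[1]))
    # Sensitivities d(lam_i)/d(k_j) = phi_i^T (dK/dk_j) phi_i
    # with dK/dk1 = [[1,0],[0,0]] and dK/dk2 = [[1,-1],[-1,1]].
    S = [(v1 * v1, (v1 - v2) ** 2) for (v1, v2) in vecs]
    res = [lam_meas[i] - lams[i] for i in range(2)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    k[0] += (res[0] * S[1][1] - res[1] * S[0][1]) / det
    k[1] += (S[0][0] * res[1] - S[1][0] * res[0]) / det
# With exact data the iteration recovers the exact parameters, echoing the
# thesis's observation for the noise-free case.
```

In the thesis, the same iteration is driven by eigen-data measured before and after mass or stiffness perturbation, which enlarges the set of equations and improves identifiability for larger models.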