69 results for WELL SYSTEMS


Relevance:

30.00%

Publisher:

Abstract:

The tribology of linear tape storage systems, including Linear Tape Open (LTO) and Travan5, was investigated by combining X-ray Photoelectron Spectroscopy (XPS), Auger Electron Spectroscopy (AES), optical microscopy and Atomic Force Microscopy (AFM). The purpose of this study was to understand the tribological mechanisms of linear tape systems so that projected recording densities may be achieved in future systems. Water vapour pressure, or Normalized Water Content (NWC), rather than the Relative Humidity (RH) values used almost universally in this field, determined the extent of pole tip recession (PTR) and stain (if produced) in linear heads. An approximately linear dependence was found for the same tape: saturated PTR increased with increasing normalized water content over the range studied. Fe stain (if produced) formed preferentially on the head surfaces at the lower water contents. The stain formation mechanism was identified: adhesive bond formation is a chemical process governed by temperature, so the higher the contact pressure, and hence the higher the contact temperature at the head-tape interface, the higher the probability of adhesive bond formation and the greater the amount of transferred material (stain). Water molecules at the interface saturate the surface bonds and make adhesive junctions less likely. The tape's polymeric binder formulation also plays a significant role in stain formation, with the latest generation of binders producing less transfer of material, almost certainly because of stronger cohesive bonds within the body of the magnetic layer. TiC in the two-phase ceramic tape-bearing surface (AlTiC) was found to oxidise to TiO2, and the oxidation rate of TiC increased with increasing water content. The oxide was less dense than the underlying carbide; hence the interface between the TiO2 oxide and the TiC was stressed.
Removal of the oxide phase resulted in three-body abrasive particles that were swept across the tape head, giving rise to three-body abrasive wear, particularly in the pole regions, and hence to PTR with subsequent signal loss and error growth. The lower contact pressure of the LTO system compared with the Travan5 system ensures that fewer and smaller three-body abrasive particles are swept across the pole and insulator regions. Hence lower contact pressure significantly reduces PTR in the LTO system, while at the same time reducing stain.

Abstract:

The aim of this thesis is to present numerical investigations of the polarisation mode dispersion (PMD) effect. Outstanding issues in the numerical implementation of PMD are resolved, and the proposed methods are further optimised for computational efficiency and physical accuracy. Methods for the mitigation of the PMD effect are considered, and simulations of transmission systems with added PMD are presented. The work on PMD can be outlined as follows. First, the widely used coarse-step method for simulating the PMD phenomenon, as well as a method derived from the Manakov-PMD equation, are implemented and investigated separately through the distribution of the state of polarisation on the Poincaré sphere and the evolution of the dispersion of a signal. Next, these two methods are statistically examined and compared to well-known analytical models of the probability density function (PDF) and the autocorrelation function (ACF) of the PMD phenomenon. Important optimisations are achieved for each of the aforementioned implementations at the computational level. In addition, the ACF of the coarse-step method is considered separately, based on the result indicating that the numerically produced ACF exaggerates the correlation between different frequencies. Moreover, the mitigation of the PMD phenomenon is considered in the form of numerically implemented low-PMD spun fibres. Finally, all the above are combined in simulations that demonstrate the impact of PMD on the quality factor (Q-factor) of different transmission systems. For this purpose a numerical solver based on the coupled nonlinear Schrödinger equation is created, which is also tested against the most important transmission impairments in the early chapters of this thesis.
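As an illustration of the coarse-step idea described above (a sketch under assumed parameters, not the thesis's implementation), the fibre can be modelled as a concatenation of linearly birefringent sections with random polarisation coupling between them, each section a 2x2 Jones matrix; the DGD is then recovered from the frequency dependence of the total matrix:

```python
import numpy as np

def coarse_step_jones(omega, thetas, dgd_per_section):
    """Total Jones matrix of a fibre modelled as len(thetas)
    birefringent sections with random mode coupling between them
    (coarse-step idea; the section DGD value is illustrative)."""
    J = np.eye(2, dtype=complex)
    for theta in thetas:
        # random rotation scatters the state of polarisation
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        # birefringent element: opposite phases for the two modes
        phi = omega * dgd_per_section / 2.0
        B = np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
        J = B @ R @ J
    return J

def dgd(omega, thetas, dgd_per_section, domega=1e-4):
    """Differential group delay from the eigenvalue phases of
    J(w + dw) J(w)^-1 (standard Jones-matrix eigenanalysis)."""
    J1 = coarse_step_jones(omega, thetas, dgd_per_section)
    J2 = coarse_step_jones(omega + domega, thetas, dgd_per_section)
    lam = np.linalg.eigvals(J2 @ np.linalg.inv(J1))
    return abs(np.angle(lam[0] / lam[1])) / domega
```

Averaging `dgd` over many random draws of the coupling angles reproduces the Maxwellian DGD statistics against which the PDF comparison above is made.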

Abstract:

Flow control in computer communication systems is generally a multi-layered structure, consisting of several mechanisms operating independently at different levels. Evaluation of the performance of networks in which different flow control mechanisms act simultaneously is an important area of research, and is examined in depth in this thesis. This thesis presents the modelling of a finite-resource computer communication network equipped with three levels of flow control, based on closed queueing network theory. The flow control mechanisms considered are: end-to-end control of virtual circuits, network access control of external messages at the entry nodes, and hop-level control between nodes. The model is solved by a heuristic technique, based on an equivalent reduced network and heuristic extensions to the mean value analysis algorithm. The method has significant computational advantages and overcomes the limitations of the exact methods: it can be used to solve large network models with finite buffers and many virtual circuits. The model and its heuristic solution are validated by simulation, and the interaction between the three levels of flow control is investigated. A queueing model is developed for the admission delay on virtual circuits with end-to-end control, in which messages arrive from independent Poisson sources, and the selection of the optimum window limit is considered. Several advanced network access schemes are postulated to improve the performance of the network as well as that of selected traffic streams, and numerical results are presented. A model for the dynamic control of input traffic is developed and, based on Markov decision theory, an optimal control policy is formulated. Numerical results are given, and throughput-delay performance is shown to be better with dynamic control than with static control.
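The exact mean value analysis on which the heuristic extensions build can be sketched for the simplest single-class, product-form case (illustrative code; the function name and demand values are assumptions, not the thesis's heuristic):

```python
def mva(service_demands, n_customers):
    """Exact Mean Value Analysis for a single-class closed
    product-form network of FCFS queueing stations.
    service_demands: mean service demand D_i at each station.
    Returns throughput, residence times and mean queue lengths."""
    q = [0.0] * len(service_demands)  # mean queue lengths at N = 0
    for n in range(1, n_customers + 1):
        # arrival theorem: an arriving customer sees the network
        # in equilibrium with one customer fewer
        r = [d * (1 + qi) for d, qi in zip(service_demands, q)]
        x = n / sum(r)                # system throughput (Little's law)
        q = [x * ri for ri in r]      # per-station queue lengths
    return x, r, q
```

For a balanced network of two stations with unit demands and two customers this gives a throughput of 2/3, the classic N/(N + M - 1) result for balanced networks.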

Abstract:

Boyd's SBS model, which includes distributed thermal acoustic noise (DTAN), has been enhanced to enable the Stokes-spontaneous density depletion noise (SSDDN) component of the transmitted optical field, as well as the full transmitted field, to be simulated, probably for the first time. SSDDN would not be generated by previous SBS models in which a Stokes seed replaces DTAN. SSDDN becomes the dominant form of transmitted SBS noise as model fibre length (MFL) is increased, but its optical power spectrum remains independent of MFL. Simulations of the full transmitted field and SSDDN for different MFLs allow prediction of the optical power spectrum, or of system performance parameters which depend on it, for typical communication link lengths which are too long for direct simulation. The SBS model has also been improved by allowing the Brillouin shift frequency (BSF) to vary over the model fibre length, in the non-uniform fibre model (NFM) mode, or to remain constant, in the uniform fibre model (UFM) mode. The assumption of a Gaussian probability density function (pdf) for the BSF in the NFM has been confirmed by analysis of reported Brillouin amplified power spectral measurements for the simple case of a nominally step-index single-mode pure-silica-core fibre. The BSF pdf could be modified to match the Brillouin gain spectra of other fibre types if required. For both models, simulated backscattered and output powers as functions of input power agree well with those from a reported experiment, for fitted Brillouin gain coefficients close to theoretical values. The NFM and UFM Brillouin gain spectra are then very similar from half to full maximum but diverge at lower values. Consequently, NFM and UFM transmitted SBS noise powers inferred for long MFLs differ by 1-2 dB over the input power range of 0-15 dBm. This difference could be significant for AM-VSB CATV links at some channel frequencies.
The modelled characteristic of carrier-to-noise ratio (CNR) as a function of input power for a single intensity-modulated subcarrier is in good agreement with the characteristic reported for an experiment, whether the UFM or the NFM is used. The difference between the two modelled characteristics would have been more noticeable for a longer fibre length or a lower subcarrier frequency.

Abstract:

Liposome systems are well reported for their activity as vaccine adjuvants; however, novel lipid-based microbubbles have also been reported to enhance the targeting of antigens into dendritic cells (DCs) in cancer immunotherapy (Suzuki et al 2009). This research initially focused on the formulation of gas-filled, lipid-coated microbubbles and their potential activation of macrophages using in vitro models. Later studies in the thesis concentrated on aqueous-filled liposomes as vaccine delivery systems. Initial work involved formulating and characterising, in terms of stability and physico-chemical characteristics, lipid-coated microbubbles (sometimes referred to as gas-filled liposomes) produced by four different methods: homogenisation, sonication, a gas-releasing chemical reaction, and agitation/pressurisation. Two of the preparations were tested as pressure probes in MRI studies. The first preparation was composed of a standard phospholipid (DSPC) filled with air or nitrogen (N2), whilst in the second the microbubbles were composed of a fluorinated phospholipid (F-GPC) filled with a fluorocarbon-saturated gas. The studies showed that a novel contrast agent allowing stable MRI measurements of fluid pressure over time, whilst maintaining high sensitivity, could be produced using lipid-coated microbubbles. The F-GPC microbubbles were found to withstand pressures of up to 2.6 bar with minimal damage, whereas the DSPC microbubbles were damaged above 1.3 bar. However, it was also found that the N2-filled DSPC microbubbles were extremely robust to pressure, performing similarly to the F-GPC-based microbubbles. Following on from the MRI studies, the air- and N2-filled DSPC microbubbles were assessed for their potential activation of macrophages using in vitro models and compared to equivalent aqueous-filled liposomes.
The microbubble formulations did not stimulate macrophage uptake, so studies thereafter focused on aqueous-filled liposomes. Further studies concentrated on formulating and characterising, both physico-chemically and immunologically, cationic liposomes based on the potent adjuvant dimethyldioctadecylammonium (DDA) and the immunomodulator trehalose dibehenate (TDB) with the addition of polyethylene glycol (PEG). One proposed hypothesis for the mechanism behind the immunostimulatory effect obtained with DDA:TDB is the 'depot effect', in which the liposomal carrier helps to retain the antigen at the injection site, thereby increasing the time of vaccine exposure to the immune cells; this effect has been suggested to be primarily due to the liposomes' cationic nature. Results reported within this thesis demonstrate that higher levels of PEG (i.e. 25%) significantly inhibited the formation of a liposome depot at the injection site and severely limited the retention of antigen at the site, resulting in faster drainage of the liposomes from the site of injection. The versatility of cationic liposomes based on DDA:TDB in combination with different immunostimulatory ligands, including polyinosinic-polycytidylic acid (poly(I:C), a TLR 3 ligand) and CpG (a TLR 9 ligand), either entrapped within the vesicles or adsorbed onto the liposome surface, was investigated for immunogenic capacity as vaccine adjuvants. Small unilamellar vesicle (SUV) DDA:TDB formulations (20-100 nm native size) with protein antigen adsorbed to the vesicle surface were the most potent, inducing both T cell (7-fold increase) and antibody (up to 2 log increase) antigen-specific responses. The addition of the TLR agonists poly(I:C) and CpG to SUV liposomes had little or no effect on their adjuvanticity.
Finally, threitol ceramide (ThrCer), a new immunostimulatory agent, was incorporated into the bilayers of liposomes composed of DDA or DSPC to investigate the uptake of ThrCer by dendritic cells and its presentation on CD1d molecules to invariant natural killer T cells. These systems were prepared both as multilamellar vesicles (MLVs) and as SUVs. IFN-γ secretion was higher for the DDA SUV liposome formulation (p<0.05), suggesting that ThrCer encapsulation in this formulation resulted in higher uptake by DCs.

Abstract:

Particulate delivery systems such as liposomes and polymeric nano- and microparticles are attracting great interest for the development of new vaccines. The materials and formulation properties essential for this purpose have been studied extensively, but relatively little is known about the influence of the administration route of such delivery systems on the type and strength of the immune response elicited. The present study therefore aimed at elucidating this influence by immunising mice via different routes, namely the subcutaneous, intradermal, intramuscular and intralymphatic routes, with ovalbumin-loaded liposomes, N-trimethyl chitosan (TMC) nanoparticles and poly(lactide-co-glycolide) (PLGA) microparticles, all with and without specifically selected immune-response modifiers. The results showed that the route of administration caused only minor differences in the induced antibody response of the IgG1 subclass, and any such differences were abolished upon booster immunisation with the various adjuvanted and non-adjuvanted delivery systems. In contrast, the administration route strongly affected both the kinetics and the magnitude of the IgG2a response. A single intralymphatic administration of all evaluated delivery systems induced a robust IgG2a response, whereas subcutaneous administration failed to elicit a substantial IgG2a response even after boosting, except with the adjuvanted nanoparticles. The intradermal and intramuscular routes generated intermediate IgG2a titres. The benefit of the intralymphatic administration route for eliciting a Th1-type response was confirmed in terms of IFN-gamma production by isolated and re-stimulated splenocytes from animals previously immunised with adjuvanted and non-adjuvanted liposomes as well as with adjuvanted microparticles.
Altogether, the results show that the IgG2a response associated with Th1-type immunity is sensitive to the route of administration, whereas the IgG1 response associated with Th2-type immunity is relatively insensitive to the administration route of the particulate delivery systems. The route of administration should therefore be considered when planning and interpreting pre-clinical research or development on vaccine delivery systems.

Abstract:

This work is concerned with approximate inference in dynamical systems from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics; hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein–Uhlenbeck process, whose exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate chaotic stochastic Lorenz '63 (3D) model. As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz '96 system. In our investigation we compare this new approach with a variety of other well-known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations using our new methods.
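Of the benchmark systems listed, the Ornstein–Uhlenbeck process is simple enough to sketch: the code below (an illustration, not the variational algorithm of the paper; function and parameter names are assumptions) simulates dx = -theta*x dt + sigma dW via its exact discretisation and recovers the drift parameter from the resulting AR(1) likelihood:

```python
import numpy as np

def simulate_ou(theta, sigma, dt, n_steps, x0=0.0, seed=1):
    """Sample path of dx = -theta*x dt + sigma dW using the exact
    discretisation x[k+1] = a*x[k] + noise, with a = exp(-theta*dt)."""
    rng = np.random.default_rng(seed)
    a = np.exp(-theta * dt)
    s = sigma * np.sqrt((1.0 - a * a) / (2.0 * theta))  # stationary step noise
    x = np.empty(n_steps + 1)
    x[0] = x0
    noise = s * rng.standard_normal(n_steps)
    for k in range(n_steps):
        x[k + 1] = a * x[k] + noise[k]
    return x

def estimate_theta(x, dt):
    """Drift estimate: least-squares fit of the AR(1) coefficient
    a = exp(-theta*dt), which is also the conditional MLE."""
    a_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
    return -np.log(a_hat) / dt
```

On a long simulated path the estimate converges to the true drift; the diffusion coefficient can be estimated analogously from the residual variance.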

Abstract:

Various monoacrylic compounds containing a hindered phenol function (e.g. 3,5-di-tert-butyl-4-hydroxybenzyl alcohol, DBBA, and vinyl 3-[3',5'-di-tert-butyl-4-hydroxyphenyl] propionate, VDBP) or a benzophenone function (2-hydroxy-4-[beta-hydroxyethoxy] benzophenone, HAEB) were synthesised and used as reactive antioxidants (AOs) for polypropylene (PP). These compounds were reacted with PP melt in the presence of a low concentration of a free-radical generator such as a peroxide (reactive processing) to produce bound-antioxidant concentrates. The binding of these AOs onto PP was found to be low, and this was shown to be mainly due to competing reactions such as homopolymerisation of the antioxidant. At high concentrations of peroxide, higher binding efficiency resulted, but this was accompanied by melt degradation of the polymer. In a special reactive processing procedure, a di- or trifunctional reactant (referred to as a coagent), e.g. trimethylolpropane triacrylate, Tris, or divinylbenzene, DVB, was used with the antioxidant, and this led to an enhanced efficiency of the grafting reaction of antioxidants onto the polymer in the melt. The evidence suggests that this is due to copolymerisation of the antioxidants with the coagent as well as grafting of the copolymers onto the polymer backbone. Although the 'bound' AOs containing a UV-stabilising function showed a lower overall stabilisation effect than the unbound analogues before extraction, they were still much more effective when subjected to exhaustive solvent extraction. Furthermore, a very effective synergistic stabilising activity was observed when two reactive AOs containing thermal and UV-stabilising functions, e.g. DBBA and HAEB, were reactively processed with PP in the presence of a coagent. The stabilising effectiveness of such a synergist was much higher than that of the unbound analogues both before and after extraction.
GPC analysis of concentrates containing bound DBBA processed in the presence of the Tris coagent showed a higher molecular weight (Mn) than that of a polymer processed without the coagent, though still lower than that of control PP processed with no additives. This indicates that the Tris coagent may inhibit further melt degradation of the polymer. Model reactions of DBBA in a liquid hydrocarbon (decalin), with analysis of the products by FTIR and NMR spectroscopy, showed the formation of DBBA grafted onto decalin molecules as well as homopolymerisation of the AO. In the presence of the Tris coagent, copolymerisation of DBBA with the Tris inevitably occurred, followed by grafting of the copolymer onto the decalin. FTIR and NMR results for the polymer concentrates containing bound DBBA processed with and without Tris showed behaviour similar to the above model reactions. This evidence supports the role of Tris in enhancing the efficiency of the reaction of DBBA in the polymer melt. Reactive processing of HAEB in polymer melts exhibited crosslink formation in the early stages of the reaction; however, in the final stage the crosslinked structure was 'broken down' or rearranged to give an almost gel-free polymer with high antioxidant binding efficiency.

Abstract:

This thesis describes a project which investigated the evaluation of information systems. The work took place in, and is related to, a specific organisational context, that of the National Health Service (NHS). It aims to increase understanding of the evaluation which takes place in the service and the way in which this is affected by the NHS environment, and it investigates the issues which surround some important types of evaluation and their use in this context. The first stage of the project was a postal survey in which respondents were asked to describe the evaluation which took place in their authorities and to give their opinions about it. This was used to give an overview of the practice of IS evaluation in the NHS and to identify its uses and the problems experienced. Three important types of evaluation were then examined in more detail by means of action research studies. The first of these dealt with the selection and purchase of a large hospital information system; the study took the form of an evaluation of the procurement process, and examined the methods used and the influence of organisational factors. The other two studies were concerned with post-implementation evaluation, and examined the choice of an evaluation approach as well as its application. One was an evaluation of a community health system which had been operational for some time but was of doubtful value and suffered from a number of problems; the situation was explored by means of a study of the costs and benefits of the system. The remaining study was the initial review of a system used in the administration of a Breast Screening Service; the service itself was also newly operational, and the relationship between the service and the system was of particular interest.

Abstract:

Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson System Development (JSD). As current cost estimating methods do not take account of these developments, their non-universality means they cannot provide adequate estimates of effort and hence cost. In order to address these shortcomings, two new estimation methods have been developed for JSD projects. The first, JSD-FPA, is a top-down estimating method based on the existing MKII function point method. The second, JSD-COCOMO, is a sizing technique which sizes a project in terms of lines of code from the process structure diagrams, and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. It uses counts of various attributes of a JSD specification to develop a metric which indicates the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and using this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects. The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
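The final COCOMO step can be illustrated with Boehm's published Basic COCOMO effort equation, E = a*(KDSI)^b (standard coefficients shown; a calibrated model for a particular organisation would differ):

```python
def cocomo_basic_effort(kdsi, mode="organic"):
    """Basic COCOMO effort in person-months from size in thousands
    of delivered source instructions (KDSI), using Boehm's standard
    mode coefficients in E = a * KDSI**b."""
    coefficients = {
        "organic":       (2.4, 1.05),
        "semi-detached": (3.0, 1.12),
        "embedded":      (3.6, 1.20),
    }
    a, b = coefficients[mode]
    return a * kdsi ** b
```

A 32-KDSI organic-mode project, for example, comes out at about 91 person-months; JSD-COCOMO's contribution is to supply the KDSI input from counts on the process structure diagrams.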

Abstract:

The thesis reports on a study into the effect upon organisations of co-operative information systems (CIS) incorporating flexible communications, group support and group working technologies. A review of the literature leads to the development of a model of effect based upon co-operative business tasks. CIS have the potential to change how co-operative business tasks are carried out, and their principal effect (or performance) may therefore be evaluated by determining to what extent they are employed to perform these tasks. A significant feature of CIS use identified is the extent to which a CIS may be designed to fulfil particular tasks or, by contrast, may be applied creatively by users in an emergent fashion to perform tasks. A research instrument is developed, using a survey questionnaire, to elicit users' judgements of the extent to which a CIS is employed to fulfil a range of co-operative tasks. This instrument is applied in a longitudinal study of the introduction of Novell GroupWise at Northamptonshire County Council, during which qualitative as well as quantitative data were gathered. A method of analysing the questionnaire results using principles from fuzzy mathematics and artificial intelligence is developed and demonstrated. Conclusions from the longitudinal study include the importance of early experiences in setting patterns of use for a CIS, the persistence of patterns of use over time, and the dominance of designed usage of the technology over emergent use.

Abstract:

This research studies the issue of using strategic information technology to improve organisational effectiveness. It analyses different academic approaches explaining the nature of information systems and the need organisations feel to develop strategic information systems planning processes in order to improve organisational effectiveness. It chooses Managerial Cybernetics as the theoretical foundation supporting the development of a "Strategic Information Systems Planning" framework, and uses this framework to support the analysis of a documented account of the process undergone by the Colombian President's Office in 1990-1992. It argues that by analysing the situation through this new framework we may illuminate some situations that were previously unclear and had not been properly explained through other approaches to strategic information systems planning. The documented account describes the organisational context and strategic postures of the Colombian President's Office and the Colombian public sector at that time, as well as some of the strategic information systems defined and developed. In particular it analyses a system developed jointly by the President's Office and the National Planning Department for measuring the results of the main national development programmes. It then reviews these situations in the light of the new framework and presents the main findings of the exercise. Finally, it reflects on the whole research exercise, on the perceived usefulness of the chosen frameworks and tools in illuminating the real situations analysed, and on some open research paths for future researchers interested in the issue.

Abstract:

The widespread implementation of Manufacturing Resource Planning (MRP II) systems in this country and abroad, and the reported dissatisfaction with their use, formed the initial basis of this research, which concentrates on the fundamental theory and design of the closed-loop MRP II system itself. The dissertation concentrates on two key aspects, namely how Master Production Scheduling is carried out in differing business environments, and how well the 'closing of the loop' operates by checking the capacity requirements of the different levels of plans within an organisation. The main hypothesis tested is that in U.K. manufacturing industry, resource checks are either not being carried out satisfactorily or are not being fed back to the appropriate plan in a timely fashion. The research methodology involved initial detailed investigations into Master Scheduling and capacity planning in eight diverse manufacturing companies, followed by a nationwide survey of users in 349 companies, a survey of all the major suppliers of production management software in the U.K., and an analysis of the facilities offered by current software packages. The main conclusion drawn is that the hypothesis is proved for the majority of companies: only just over 50% of companies are attempting Resource and Capacity Planning, and only 20% are successfully feeding back Capacity Requirements Planning (CRP) information to 'close the loop'. Various causative factors are put forward and remedies are suggested.

Abstract:

An investigation was carried out into the different approaches used by Expert Systems researchers to solve problems in the domain of Mechanical Design. The techniques used in conventional formal logic programming were compared with those used when applying Expert Systems concepts. A literature survey of design processes was also conducted with a view to adopting a suitable model of the design process. A model, comprising a variation on two established ones, was developed and applied to a problem within what are described as class 3 design tasks. The research explored the application of these concepts to Mechanical Engineering design problems and their implementation on a microcomputer using an Expert System building tool. It was necessary to explore the use of Expert Systems in this manner so as to bridge the gap between their use as a control structure and their use for detailed analytical design; the former application is well researched, and this thesis discusses the latter. Some Expert System building tools available to the author at the beginning of his work were evaluated specifically for their suitability for Mechanical Engineering design problems. Microsynics was found to be the most suitable on which to implement a design problem, because of its simple but powerful semantic-net knowledge representation structure and its ability to use other types of representation scheme. Two major implementations were carried out: the first a design program for a helical compression spring, and the second a gear-pair system design. Two concepts were proposed in the thesis for the modelling and implementation of design systems involving many equations; the method proposed enables equation manipulation and analysis using a combination of frames, semantic nets and production rules.
The use of semantic nets for purposes other than psychology and natural language interpretation is quite new, and represents one of the major contributions to knowledge by the author. The development of a purpose-built shell program for this type of design problem was recommended as an extension of the research; Microsynics may usefully be used as a platform for this development.

Abstract:

Computerised production control developments have concentrated on Manufacturing Resources Planning (MRP II) systems. The literature suggests, however, that despite the massive investment in hardware, software and management education, successful implementation of such systems in manufacturing industries has proved difficult. This thesis reviews the development of production planning and control systems and, in particular, investigates the causes of failure in implementing MRP/MRP II systems in industrial environments, arguing that the centralised, top-down planning structure and routine operational methodology of such systems are inherently prone to failure. The thesis reviews the control benefits of cellular manufacturing systems but concludes that in more dynamic manufacturing environments techniques such as Kanban are inappropriate. The basic shortcomings of MRP II systems are highlighted, and a new, enhanced operational methodology based on distributed planning and control principles is introduced. Distributed Manufacturing Resources Planning (DMRP) was developed as a capacity-sensitive production planning and control solution for cellular manufacturing environments. The system utilises cell-based, independently operated MRP II systems, integrated into a plant-wide control system through a local area network. The potential benefits of adopting the system in industrial environments are discussed, and the results of computer simulation experiments comparing the performance of the DMRP system against conventional MRP II systems are presented. The DMRP methodology is shown to offer significant potential advantages, including ease of implementation, cost effectiveness, capacity sensitivity, shorter manufacturing lead times, lower work-in-progress levels and improved customer service.