47 results for Computerised videotape


Relevance: 10.00%

Abstract:

This thesis describes work carried out to improve the fundamental modelling of liquid flows on distillation trays. A mathematical model is presented based on the principles of computerised fluid dynamics. It models the liquid flow in the horizontal directions, allowing for the effects of the vapour through an increased liquid turbulence, modelled by an eddy viscosity, and through a resistance to liquid flow caused by the vapour being accelerated horizontally by the liquid. The resultant equations are similar to the Navier-Stokes equations with the addition of a resistance term.

A mass-transfer model is used to calculate liquid concentration profiles and tray efficiencies. A heat- and mass-transfer analogy is used to compare theoretical concentration profiles with experimental water-cooling data obtained from a 2.44 m diameter air-water distillation simulation rig. The ratios of air to water flow rates are varied to simulate three pressures: vacuum, atmospheric and moderate pressure.

For simulated atmospheric and moderate-pressure distillation, the fluid mechanical model consistently over-predicts tray efficiencies, with deviations of between +1.7% and +11.3%. This compares with -1.8% to -10.9% for the stagnant regions model (Porter et al. 1972) and +12.8% to +34.7% for the plug flow plus back-mixing model (Gerster et al. 1958). The model fails to predict the flow patterns and tray efficiencies for vacuum simulation because the mechanism of liquid transport changes from a liquid continuous layer to a spray as the liquid flow rate is reduced; this spray is not taken into account in the development of the fluid mechanical model.

A sensitivity analysis showed that the fluid mechanical model is relatively insensitive to the prediction of the average height of clear liquid, and that a reduction in the resistance term results in only a slight loss of tray efficiency. The model is, however, quite sensitive to the prediction of the eddy viscosity term: variations can produce up to a 15% decrease in tray efficiency. The fluid mechanical model has been incorporated into a column model so that statistical optimisation techniques can be employed to fit a theoretical column concentration profile to experimental data, from which mass-transfer data can be obtained.
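
The abstract stops short of the equations themselves. As a hedged sketch (the symbols and exact form are assumptions, not taken from the thesis), a steady depth-averaged horizontal momentum balance with an eddy viscosity and a linear vapour-resistance term would read:

```latex
\[
u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y}
  = -\frac{1}{\rho}\frac{\partial p}{\partial x}
  + \nu_e\!\left(\frac{\partial^{2} u}{\partial x^{2}}
               + \frac{\partial^{2} u}{\partial y^{2}}\right)
  - R\,u
\]
```

with an analogous equation for $v$. Here $(u, v)$ are the horizontal liquid velocities, $\nu_e$ is the eddy viscosity standing in for vapour-induced turbulence, and the $R\,u$ term is the extra resistance that distinguishes the system from the plain Navier-Stokes equations.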

Relevance: 10.00%

Abstract:

This work is concerned with the nature of liquid flow across industrial sieve trays operating in the spray, mixed and emulsified flow regimes. To overcome the practical difficulties of removing many samples from a commercial tray, the mass-transfer process was investigated in an air-water simulator column by heat-transfer analogy. The temperature of the warm water was measured by many thermocouples as the water flowed across the single-pass 1.2 m diameter sieve tray. The thermocouples were linked to a minicomputer for data storage, and the temperature data were then transferred to a mainframe computer to generate temperature profiles, analogous to concentration profiles.

A comprehensive study of the existing tray efficiency models was carried out using computerised numerical solutions. The calculated results were compared with experimental results published by Fractionation Research, Inc. (FRI); the existing models did not show any agreement with the experimental results, and only the Porter and Lockett model showed reasonable agreement for certain tray efficiency values.

A rectangular active-section tray was constructed and tested to establish the channelling effect and its consequences for circular tray designs. The developed flow patterns showed predominantly flat profiles and some indication of significant liquid flow through the central region of the tray. This suggests that the rectangular tray configuration may not be a satisfactory solution to liquid maldistribution on sieve trays.

For a typical industrial tray, the flow of liquid as it crosses the tray from the inlet to the outlet weir can be affected by eddy and momentum mixing and by the weir shape, in the axial direction, the transverse direction or both. Conventional U-shaped profiles developed when the operating conditions placed the froth dispersion in the mixed regime, with good liquid temperature distribution in the spray regime. For the 12.5 mm hole diameter tray the constant-temperature profiles in the spray regime ran in the axial direction, while for the 4.5 mm hole tray they ran in the transverse direction. The extent of the liquid stagnant zones at the sides of the tray depended on the tray hole diameter and was larger for the 4.5 mm hole tray. The liquid hold-up results show high liquid hold-up in the areas of the tray with low liquid temperatures, which supports doubts about the assumption of a constant point efficiency across an operating tray. Measurements of liquid flow over the outlet weir showed more flow at the centre of the tray at high liquid loading, with less flow at both ends of the weir.

The calculated results of the point and tray efficiency model showed a general increase in point and tray efficiencies with increasing weir loading: as the flow regime changed from spray to mixed, the point and tray efficiencies increased from approximately 30% to 80%. Through the mixed flow regime the efficiencies remained fairly constant, and as the operating conditions were changed to maintain an emulsified flow regime the resulting efficiencies decreased. The estimated coefficients of mixing for the small and large hole diameter trays show that the extent of liquid mixing on an operating tray generally increased with increasing capacity factor but decreased with increasing weir load. This suggests that above certain weir loads the effect of the eddy diffusion mechanism on liquid mixing on an operating tray becomes negligible.
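
As background to the coefficient-of-mixing results (a standard definition, not necessarily the exact formulation used in the thesis), liquid mixing on a tray is conventionally characterised by a dimensionless Peclet number:

```latex
\[
Pe = \frac{Z_L^{2}}{D_E\, t_L}
\]
```

where $Z_L$ is the liquid flow-path length, $D_E$ the eddy diffusion (mixing) coefficient and $t_L$ the mean liquid residence time. $Pe \to \infty$ corresponds to plug flow and $Pe \to 0$ to a completely mixed tray, so a mixing coefficient that stops growing with weir load implies behaviour closer to plug flow at high loads.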

Relevance: 10.00%

Abstract:

Sensitive and precise radioimmunoassays for insulin and glucagon were established. Although it was possible to apply similar precepts to the development of both hormone assays, the establishment of a reliable glucagon radioimmunoassay was complicated by the poor immunogenicity and instability of the peptide. Thus, unlike the insulin antisera, which were prepared by monthly injection of guinea pigs with crystalline insulin emulsified in adjuvant, the successful production of glucagon antisera required immunisation of rabbits and guinea pigs with glucagon covalently linked to bovine plasma albumin. Conventional chloramine-T iodination with purification by gel chromatography was suitable only for the production of labelled insulin; quality tracer for the glucagon radioimmunoassay was prepared by trace iodination, with subsequent purification of monoiodinated glucagon by anion-exchange chromatography. Separation of free and antibody-bound moieties by coated charcoal was applicable to both assays, and a computerised data-processing system, relying on the logit-log transformation, was used to analyse all assay results.

The assays were employed to evaluate the regulation of endocrine pancreatic function and the role of insulin and glucagon in the pathogenesis of the obese hyperglycaemic syndrome in mice. In the homozygous (ob/ob) condition, mice of the Birmingham strain were characterised by numerous abnormalities of glucose homeostasis, several of which were also detected in heterozygous (ob/+) mice. Obese mice exhibited pancreatic alpha-cell dysfunction and hyperglucagonaemia. Investigation of this defect revealed a marked insensitivity of an insulin-dependent glucose-sensing mechanism that inhibits glucagon secretion. Although circulating glucagon was of minor importance in the maintenance of hyperinsulinaemia, the lack of suppression of alpha-cell function by glucose and insulin contributed significantly to both the insulin insensitivity and the hyperglycaemia of obese mice.
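
The logit-log transformation mentioned above is the standard way of linearising a sigmoid radioimmunoassay standard curve. In its usual textbook form (given here as background; $B$ is the bound count at a given dose, $B_0$ the zero-dose binding and $N$ the non-specific binding):

```latex
\[
y = \frac{B - N}{B_{0} - N}, \qquad
\operatorname{logit}(y) = \ln\frac{y}{1 - y} \approx a + b\,\log(\text{dose})
\]
```

so unknown hormone concentrations can be read from a fitted straight line rather than a hand-drawn sigmoid curve.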

Relevance: 10.00%

Abstract:

This research investigates technology transfer (TT) to developing countries, with specific reference to South Africa. Particular attention is paid to physical asset management, which includes the maintenance of plant, equipment and facilities. The research is case-based, comprising a main case study (the South African electricity utility, Eskom) and four mini-cases. A five-level framework adapted from Salami and Reavill (1997) is used as the methodological basis for the formulation of the research questions; it covers technology selection and management issues, including implementation, maintenance, evaluation and modification.

The findings suggest that the Salami and Reavill (1997) framework is a useful guide for TT. The case organisations did not introduce technology for strategic advantage, but to achieve operational efficiencies through cost reduction, higher quality and the ability to meet customer demand. Acquirers favour standardised technologies with which they are familiar. Cost-benefit evaluations have limited use in technology acquisition decisions. Users rely on supplier expertise to compensate for poor education and technical training in South Africa. The impact of political and economic factors is more evident in Eskom than in the mini-cases. Physical asset management follows traditional preventive maintenance practices, with limited use of new maintenance management thinking. Few modifications of the technology or R&D innovations take place, and little use is made of explicit knowledge from computerised maintenance management systems. Low operating and maintenance skills are not conducive to the transfer of high-technology equipment. South African organisations acquire technology as items of plant, equipment and systems, but limited transfer of technology takes place. This suggests that operators and maintainers frequently do not understand the underlying technology and, like workers elsewhere, are not always inclined to adopt technology in the workplace.

Relevance: 10.00%

Abstract:

Case studies in copper-alloy rolling mill companies showed that existing planning systems suffer from numerous shortcomings. Where computerised systems are in use, these tend to simply emulate older manual systems and still rely heavily on modification by experienced planners on the shopfloor. As the size and number of orders increase, the task of process planners, while seeking to optimise the manufacturing objectives and keep within the production constraints, becomes extremely complicated because of the number of options for mixing or splitting the orders into batches. This thesis develops a modular approach to computerisation of the production management and planning functions. The full functional specification of each module is discussed, together with practical problems associated with their phased implementation. By adapting the Distributed Bill of Material concept from Material Requirements Planning (MRP) philosophy, the production routes generated by the planning system are broken down to identify the rolling stages required. Then to optimise the use of material at each rolling stage, the system generates an optimal cutting pattern using a new algorithm that produces practical solutions to the cutting stock problem. It is shown that the proposed system can be accommodated on a micro-computer, which brings it into the reach of typical companies in the copper-alloy rolling industry, where profit margins are traditionally low and the cost of widespread use of mainframe computers would be prohibitive.
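
The abstract does not describe the new cutting-pattern algorithm itself. Purely as a point of reference, a classic first-fit-decreasing heuristic for the one-dimensional cutting stock problem (a standard textbook method, not the thesis's algorithm; all lengths below are hypothetical) can be sketched as:

```python
# Illustrative first-fit-decreasing heuristic for one-dimensional cutting
# stock: assign each ordered piece, longest first, to the first stock bar
# on which it still fits. Not the thesis's algorithm; lengths are made up.

def first_fit_decreasing(orders, stock_length):
    bars = []  # each bar is a list of piece lengths cut from one stock bar
    for piece in sorted(orders, reverse=True):
        for bar in bars:
            if sum(bar) + piece <= stock_length:
                bar.append(piece)
                break
        else:
            bars.append([piece])  # no existing bar fits: open a new one
    return bars

if __name__ == "__main__":
    plan = first_fit_decreasing([2.1, 1.4, 3.0, 1.4, 2.5, 0.9], stock_length=6.0)
    for i, bar in enumerate(plan, 1):
        print(f"bar {i}: cuts {bar}, offcut {6.0 - sum(bar):.2f}")
```

Practical systems add constraints (knife limits, minimum offcut widths) on top of such a base heuristic, which is where a purpose-built algorithm like the one developed here earns its keep.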

Relevance: 10.00%

Abstract:

Computerised production control developments have concentrated on Manufacturing Resources Planning (MRP II) systems. The literature suggests, however, that despite the massive investment in hardware, software and management education, successful implementation of such systems in manufacturing industries has proved difficult. This thesis reviews the development of production planning and control systems and, in particular, investigates the causes of failures in implementing MRP/MRP II systems in industrial environments, arguing that the centralised and top-down planning structure, as well as the routine operational methodology of such systems, is inherently prone to failure. The thesis reviews the control benefits of cellular manufacturing systems but concludes that in more dynamic manufacturing environments techniques such as Kanban are inappropriate. The basic shortcomings of MRP II systems are highlighted and a new enhanced operational methodology based on distributed planning and control principles is introduced. Distributed Manufacturing Resources Planning (DMRP) was developed as a capacity-sensitive production planning and control solution for cellular manufacturing environments. The system utilises cell-based, independently operated MRP II systems, integrated into a plant-wide control system through a Local Area Network. The potential benefits of adopting the system in industrial environments are discussed, and the results of computer simulation experiments comparing the performance of DMRP against conventional MRP II systems are presented. The DMRP methodology is shown to offer significant potential advantages, including ease of implementation, cost effectiveness, capacity sensitivity, shorter manufacturing lead times, lower work-in-progress levels and improved customer service.
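
As a conceptual sketch only (the class and field names are hypothetical, and this is not the thesis's implementation), the core DMRP idea of capacity-sensitive order placement across independently loaded cells might look like:

```python
# Conceptual sketch of capacity-sensitive distributed planning: each cell
# keeps its own load book, and the plant-wide coordinator only books an
# order onto a cell with spare capacity in the requested period.

class Cell:
    def __init__(self, name, capacity_per_period):
        self.name = name
        self.capacity = capacity_per_period
        self.load = {}  # period -> booked hours

    def can_accept(self, period, hours):
        return self.load.get(period, 0.0) + hours <= self.capacity

    def book(self, period, hours):
        self.load[period] = self.load.get(period, 0.0) + hours

def plan_order(cells, period, hours):
    """Coordinator: place the order on the first cell with capacity."""
    for cell in cells:
        if cell.can_accept(period, hours):
            cell.book(period, hours)
            return cell.name
    return None  # no capacity anywhere: re-plan or delay the order

cells = [Cell("turning", 40.0), Cell("milling", 40.0)]
print(plan_order(cells, period=1, hours=35.0))  # -> turning
print(plan_order(cells, period=1, hours=10.0))  # -> milling (turning full)
```

The contrast with conventional MRP II is that the capacity check happens at the cell, not in a central infinite-capacity plan that is repaired later.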

Relevance: 10.00%

Abstract:

This research began with an attempt to solve a practical problem, namely the prediction of the rate at which an operator will learn a task. From a review of the literature, communications with researchers in this area and the study of psychomotor learning in factories, it was concluded that a more fundamental approach was required, one that included the development of a task taxonomy. This latter objective had been researched for over twenty years by E. A. Fleishman, and his approach was adopted. Three studies were carried out to develop and extend Fleishman's approach to the industrial area. However, the results of these studies were not in accord with Fleishman's conclusions and suggested that a critical reassessment was required of the arguments, methods and procedures used by Fleishman and his co-workers. It was concluded that Fleishman's findings were to some extent an artefact of the approximate methods and procedures used in the original factor analyses, and that with more modern computerised factor analytic methods a reliable ability taxonomy could be developed to describe the abilities involved in the learning of psychomotor tasks. The implications for changing-task and changing-subject models were drawn, and it was concluded that a combined changing-task and changing-subject model needs to be developed.
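
As an illustration of the kind of modern computerised factor analysis contrasted here with Fleishman's original approximate procedures (a minimal sketch on random placeholder data, not the study's data):

```python
# Minimal computerised factor analysis sketch: extract latent "abilities"
# from a subjects x tasks score matrix. The data here are random
# placeholders standing in for psychomotor task measures.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 8))  # 100 subjects x 8 task measures

fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(scores)

# Rotated loadings: how strongly each task draws on each latent ability.
print(np.round(fa.components_.T, 2))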

Relevance: 10.00%

Abstract:

A history of government drug regulation and the relationship between the pharmaceutical companies in the U.K. and the licensing authority is outlined. Phases of regulatory stringency are identified, with the formation of the Committees on Safety of Drugs and Medicines viewed as watersheds. A study of the impact of government regulation on industrial R&D activities focuses on the effects on the rate and direction of new product innovation. A literature review examines the decline in new chemical entity innovation; regulations are cited as a major, but not the sole, cause of the decline. Previous research attempting to determine the causes of such a decline on an empirical basis is reviewed, and the methodological problems associated with such research are identified.

The U.K.-owned sector of the British pharmaceutical industry is selected for a study employing a bottom-up approach allowing disaggregation of data. A historical background to the industry is provided, with each company analysed on a case-study basis. Variations between companies in the policies adopted for R&D are emphasised. The process of drug innovation is described in order to determine possible indicators of the rate and direction of inventive and innovative activity. All possible indicators are considered and their suitability assessed. R&D expenditure data for the period 1960-1983 are subsequently presented as an input indicator. Intermediate output indicators are treated in a similar way, and patent data are identified as a readily available and useful source; the advantages and disadvantages of using such data are considered.

Using interview material, patenting policies for most of the U.K. companies are described, providing a background for a patent-based study. Sources of patent data are examined, with an emphasis on computerised systems, and a number of searches using a variety of sources are presented. Patent family size is examined as a possible indicator of an invention's relative importance. The patenting activity of the companies over the period 1960-1983 is given and the variation between companies is noted. The relationship between patent data and the other indicators used is analysed using statistical methods, resulting in an apparent lack of correlation. An alternative approach, taking into account variations in company policy and phases in research activity, indicates a stronger relationship between patenting activity, R&D expenditure and NCE output over the period; the relationship is not apparent at an aggregated company level. Some evidence is presented for a relationship between phases of regulatory stringency and inventive and innovative activity, but the importance of other factors is emphasised.

Relevance: 10.00%

Abstract:

The state of the art in productivity measurement and analysis shows a gap between simple methods having little relevance in practice and sophisticated mathematical theory which is unwieldy for strategic and tactical planning purposes, particularly at company level. This thesis extends the method of productivity measurement and analysis based on the concept of added value, appropriate to those companies in which the materials, bought-in parts and services change substantially and a number of plants and inter-related units are involved in providing components for final assembly. Reviews and comparisons of productivity measurement dealing with alternative indices and their problems have been made, and appropriate solutions put forward for productivity analysis in general and the added value method in particular.

Based on this concept and method, three kinds of computerised models have been developed to cope with the planning of productivity and productivity growth with reference to changes in their component variables, ranging from a single value to a class interval of values of a productivity distribution: two deterministic models, called sensitivity analysis and deterministic appraisal, and a third, stochastic model, called risk simulation. The models are designed to be flexible and can be adjusted according to the available computer capacity, the expected accuracy and the presentation of the output. The stochastic model is based on the assumptions of statistical independence between individual variables and normality of their probability distributions; the component variables have been forecast using polynomials of degree four. This model was tested by comparing its behaviour with that of a mathematical model using real historical data from British Leyland, and the results were satisfactory within acceptable levels of accuracy. Modifications to the model and its statistical treatment have been made as required. The results of applying these measurement and planning models to the British motor vehicle manufacturing companies are presented and discussed.
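
A minimal sketch of the risk-simulation idea, assuming, as the thesis does, independent normally distributed component variables (the figures below are placeholders, not British Leyland data):

```python
# Monte Carlo sketch of "risk simulation" for an added-value productivity
# index: sample each component variable from an independent normal
# distribution and examine the resulting productivity distribution.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

sales      = rng.normal(120.0, 8.0, n)  # sales revenue (placeholder units)
bought_in  = rng.normal(70.0, 6.0, n)   # materials, parts and services
conversion = rng.normal(35.0, 3.0, n)   # employment and other conversion costs

added_value = sales - bought_in
productivity = added_value / conversion  # added value per unit conversion cost

lo, hi = np.percentile(productivity, [5, 95])
print(f"mean {productivity.mean():.3f}, 5th-95th percentile [{lo:.3f}, {hi:.3f}]")
```

The same machinery handles class intervals of input values directly: each interval simply becomes a distribution to sample from.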

Relevance: 10.00%

Abstract:

The work presented in this thesis is concerned with the dynamic behaviour of structural joints which are both loaded and excited normal to the joint interface. Since the forces on joints are transmitted through their interface, the surface texture of joints was carefully examined. A computerised surface-measuring system was developed and computer programs were written. Surface flatness was functionally defined, measured and quantised into a form suitable for the theoretical calculation of joint stiffness. Dynamic stiffness and damping were measured at various preloads for a range of joints with different surface textures. Dry clean and lubricated joints were tested, and the results indicated an increase in damping for the lubricated joints of between 30 and 100 times.

A theoretical model for the computation of the stiffness of dry clean joints was built. The model is based on the theory that the elastic recovery of joints is due to the recovery of the material behind the loaded asperities, and it takes into account, in a quantitative manner, the flatness deviations present on the surfaces of the joint. The theoretical results were found to be in good agreement with those measured experimentally. It was also found that the joint stiffness could be assessed theoretically using a different model based on the recovery of loaded asperities into a spherical form. Stepwise procedures are given for designing a joint having a particular stiffness. A theoretical model for the loss factor of dry clean joints was also built; its results are in reasonable agreement with those measured experimentally. The theoretical models for the stiffness and loss factor were employed to evaluate the second natural frequency of the test rig, and the results are in good agreement with the experimentally measured natural frequencies.
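
For context, joint damping of this kind is usually quoted as a loss factor $\eta$, defined in the standard way (this is the textbook definition, not a result from the thesis):

```latex
\[
\eta = \frac{\Delta W}{2\pi W}
\]
```

where $\Delta W$ is the energy dissipated per vibration cycle and $W$ is the maximum stored elastic energy, so the 30- to 100-fold damping increase reported for lubricated joints corresponds to a proportional rise in $\eta$ at a given stored energy.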

Relevance: 10.00%

Abstract:

Conventional methods of form-roll design and manufacture for cold roll-forming of thin-walled metal sections have been entirely manual, time-consuming and prone to errors, resulting in inefficiency and high production costs. With the use of computers, lead time can be significantly improved, particularly for those aspects involving routine but tedious human decisions and actions. This thesis describes the development of computer-aided tools for producing form-roll designs for NC manufacture in the CAD/CAM environment. The work was undertaken to modernise the existing activity of a company manufacturing thin-walled sections. The investigated areas of the activity, including the design and drafting of the finished section, the flower patterns, the 10-to-1 templates, and the rolls complete with pinch-difference surfaces, side-rolls and extension-contours, have been successfully computerised through software development. Data generated by the developed software can be further processed for roll manufacturing using NC lathes. The software has been specially designed for portability, to facilitate its implementation on different computers. The Opening-Radii method of forming was introduced as a substitute for the conventional method, giving better forming. Most of the essential aspects of roll design have been successfully incorporated in the software. With computerisation, extensive standardisation of existing roll design practices and the use of more reliable and scientifically based methods have been achieved. Satisfactory and beneficial results have also been obtained by the company in using the software through a terminal linked to the University by a GPO line. Both lead time and productivity in roll design and manufacture have been significantly improved. It is therefore concluded that computerising the design of form-rolls through software development is viable. The work also demonstrated the promising nature of the CAD/CAM approach.

Relevance: 10.00%

Abstract:

The aim of the research project was to gain a complete and accurate accounting of the needs and deficiencies of materials selection and design data, with particular attention given to the feasibility of a computerised materials selection system that would include application analysis, property data and screening techniques. The project also investigates and integrates the three major aspects of materials resources, materials selection and materials recycling. Consideration of the materials resource base suggests that, though our discovery potential has increased, geologic availability is the ultimate determinant, and several metals may well become scarce at the same time, thus compounding the problem of substitution.

With around 2 to 20 million items of engineering materials data, the use of a computer is the only logical answer for scientific selection of materials. The system developed at Aston is used for data storage, mathematical computation and output, and enables programs to be run in batch and interactive (on-line) mode. With modification, the program can also handle such variables as the quantity of mineral resources, the energy cost of materials, and the depletion and utilisation rates of strategic materials. The work also carries out an in-depth study of copper recycling in the U.K. and concludes that somewhere in the region of 2 million tonnes of copper is missing from the recycling cycle. It also sets out guidelines on product design and conservation policies from the recyclability point of view.
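
To illustrate what computerised screening over a property database involves (a generic weighted-property calculation, not the Aston system; all property values and weights below are placeholders):

```python
# Generic weighted-property screening: scale each property against the
# best candidate, weight by application priorities, and rank. Values and
# weights are illustrative placeholders only.

candidates = {
    # material: (yield strength MPa, density g/cc, relative cost)
    "Al 6061":    (310, 2.70, 1.0),
    "Ti-6Al-4V":  (950, 4.43, 6.5),
    "Steel 4340": (745, 7.85, 0.8),
}
weights = {"strength": 0.5, "lightness": 0.3, "economy": 0.2}

def score(props):
    strength, density, cost = props
    s = strength / max(p[0] for p in candidates.values())  # higher is better
    l = min(p[1] for p in candidates.values()) / density   # lower is better
    e = min(p[2] for p in candidates.values()) / cost      # lower is better
    return weights["strength"]*s + weights["lightness"]*l + weights["economy"]*e

for name, props in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(props):.3f}")
```

The point of computerisation is that the same ranking pass can run over millions of property records, with resource-related variables (depletion rates, energy cost) added as further weighted terms.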

Relevance: 10.00%

Abstract:

The aim of this study was to determine whether an ophthalmophakometric technique could offer a feasible means of investigating ocular component contributions to residual astigmatism in human eyes. Current opinion was gathered on the prevalence, magnitude and source of residual astigmatism. It emerged that a comprehensive evaluation of the astigmatic contributions of the eye's internal ocular surfaces and their respective axial separations (effectivity) had not been carried out to date. An ophthalmophakometric technique was therefore developed to measure the astigmatism arising from the internal ocular components. Procedures included the measurement of refractive error (infra-red autorefractometry), anterior corneal surface power (computerised video keratography), axial distances (A-scan ultrasonography) and the powers of the posterior corneal surface and of both surfaces of the crystalline lens (multi-meridional still-flash ophthalmophakometry).

Computing schemes were developed to yield the required biometric data. These included (1) calculation of crystalline lens surface powers in the absence of Purkinje images arising from its anterior surface, (2) application of meridional analysis to derive spherocylindrical surface powers from notional powers calculated along four pre-selected meridians, (3) application of astigmatic decomposition and vergence analysis to calculate the contributions to residual astigmatism of ocular components with obliquely related cylinder axes, and (4) calculation of the effect of random experimental errors on the calculated ocular component data.

A complete set of biometric measurements was taken from both eyes of 66 undergraduate students. Effectivity due to corneal thickness made the smallest cylinder power contribution (up to 0.25 DC) to residual astigmatism, followed by the contributions of the anterior chamber depth (up to 0.50 DC) and crystalline lens thickness (up to 1.00 DC); in each case the astigmatic contributions were predominantly direct. More astigmatism arose from the posterior corneal surface (up to 1.00 DC) and both crystalline lens surfaces (up to 2.50 DC). The astigmatic contributions of the posterior corneal and posterior lens surfaces were found to be predominantly inverse, whilst direct astigmatism arose from the anterior lens surface. Very similar results were found for right versus left eyes and for males versus females.

Repeatability was assessed on 20 individuals. The ophthalmophakometric method was found to be prone to considerable accumulated experimental error; however, these errors are random in nature, so group-averaged data were found to be reasonably repeatable. A further confirmatory study carried out on 10 individuals demonstrated that biometric measurements made with and without cycloplegia did not differ significantly.
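
The meridional analysis in step (2) rests on the standard sine-squared relation for the notional power of a spherocylindrical surface along a meridian $\theta$ (quoted here as background; $F_{sph}$ is the spherical power, $F_{cyl}$ the cylinder power and $\alpha$ the cylinder axis):

```latex
\[
F(\theta) = F_{sph} + F_{cyl}\,\sin^{2}(\theta - \alpha)
\]
```

Powers measured along four pre-selected meridians therefore over-determine the three unknowns $F_{sph}$, $F_{cyl}$ and $\alpha$, leaving a spare measurement as a consistency check.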

Relevance: 10.00%

Abstract:

The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes.

A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an £80,000 programme to implement the system proposed by the author.
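
As a minimal sketch of the final data-reduction step, turning a two-dimensional illuminance array into an iso-lux diagram (the beam data below are synthetic placeholders, not Lucas measurements):

```python
# Produce an iso-lux contour diagram from a 2-D illuminance array.
# The beam here is a synthetic Gaussian lobe standing in for real data.

import numpy as np
import matplotlib.pyplot as plt

az = np.linspace(-30, 30, 121)  # horizontal angle, degrees
el = np.linspace(-15, 15, 61)   # vertical angle, degrees
A, E = np.meshgrid(az, el)

# synthetic dipped-beam-like lobe, brightest just below the horizon
lux = 40.0 * np.exp(-((A - 2) ** 2) / 150 - ((E + 3) ** 2) / 20)

cs = plt.contour(A, E, lux, levels=[1, 2, 5, 10, 20, 30])
plt.clabel(cs, fmt="%g lx")
plt.xlabel("horizontal angle (deg)")
plt.ylabel("vertical angle (deg)")
plt.title("Iso-lux diagram (synthetic data)")
plt.show()
```

Whether the array comes from a mechanically scanned photodetector or a photodetector line array, the contouring step is the same; only the acquisition time differs.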

Relevance: 10.00%

Abstract:

This research investigates knowledge acquisition and concept formation in the domain of economics and business studies through a foreign language, English, from the very initial to the very final stage of development, in the context of Higher Education in Turkey. It traces both the processes and the product of acquisition in order to provide a detailed picture of how knowledge acquisition occurs, and it explores ways in which the acquisition process can be facilitated and promoted while prospective students of the Department of Economics and Business Administration follow a language training programme, after which they join an academic community that offers part of its courses through the English language. The study draws upon theories of the mental representation of knowledge, such as schema, frame and script. The concept of a discourse community and its characteristics is investigated, the enculturation of prospective students acquiring knowledge of their domain through L2 is explored, and the crucial role of constructivist theory in knowledge acquisition is highlighted.

The study was conducted through a process of enculturation taking place partly at the language centre of Çukurova University and partly in the target discourse community. The data used for initiating knowledge acquisition were obtained by establishing, with computerised technology, a corpus of the economics and business texts that the learners are expected to read during their academic courses. Think-aloud protocols were used to analyse the processes taking place in knowledge acquisition, while the product of what was acquired was investigated by means of written recall protocols. It was found that knowledge acquisition operates on the basis of analogical, and to a certain extent metaphorical, reasoning. The evidence obtained from the think-aloud protocols showed that neophytes were able to acquire fundamental concepts of their future domain, reaching a level of shared understanding with members of the target faculty community. Diary and questionnaire analyses demonstrated that enculturation facilitated learners' transition from the language centre into the target community. Analyses of the written recall protocols, and of examinations from the post-enculturation stage of the research, showed that the academic performance of enculturated neophytes in their target community was much higher than that of their non-enculturated counterparts.

The processes learners go through, and the strategies they spontaneously use while acquiring knowledge of a specific domain through L2, have so far remained unexplored research areas. The present research contributes to language and knowledge acquisition theories by examining closely and systematically the language and the strategies learners employ in acquiring such knowledge. The findings offer useful implications for English language teaching at language schools: language teachers are provided with guidelines on how to give prospective students of a particular academic community experience of acquiring the fundamental concepts of their discipline before they become members of their target community.