Abstract:
This thesis presents a theoretical investigation of the application of advanced modulation formats in high-speed fibre lightwave systems. The first part of this work focuses on numerical optimisation of dense wavelength division multiplexing (DWDM) system design. We employ advanced spectral domain filtering techniques and carrier pulse reshaping. We then apply these optimisation methods to investigate spectral and temporal domain characteristics of advanced modulation formats in fibre optic telecommunication systems. Next we investigate numerical methods used in detecting and measuring the system performance of advanced modulation formats. We then numerically study the combination of return-to-zero differential phase-shift keying (RZ-DPSK) with advanced photonic devices. Finally we analyse the dispersion management of Nx40 Gbit/s RZ-DPSK transmission applied to a commercial terrestrial lightwave system.
Abstract:
The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high efficiency cyclones for char removal and a disk and doughnut quench column combined with a wet walled electrostatic precipitator, which is directly mounted on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling was undertaken of the reaction system, enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity between the heated surface and reacting biomass particle of 12.1 m/s. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput, but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr.
This is an area that should be considered in the future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry wood feed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and if longer operating runs had been attempted to offset the product losses that occur when collecting all available product from a large scale collection unit. The liquids collection system was highly efficient, and modelling determined a liquid collection efficiency of above 99% on a mass basis. This was validated experimentally: a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit enabled the mass of condensable product exiting the product collection unit to be measured, which confirmed that the collection efficiency was in excess of 99% on a mass basis.
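The residence-time constraint above reduces to a simple volume-over-flow check. A minimal sketch, assuming a hypothetical precipitator volume and gas flow (these figures are illustrative, not from the thesis):

```python
def esp_residence_time(esp_volume_m3, gas_flow_m3_per_s):
    """Vapour residence time in the electrostatic precipitator (plug flow assumed)."""
    return esp_volume_m3 / gas_flow_m3_per_s

def max_gas_flow(esp_volume_m3, min_residence_s=1.0):
    """Largest inert-gas flow that keeps residence time above the minimum."""
    return esp_volume_m3 / min_residence_s

# Hypothetical ESP volume of 0.05 m^3:
flow_limit = max_gas_flow(0.05)
assert esp_residence_time(0.05, flow_limit) >= 1.0
```

At higher feed rates the inert-gas flow rises, so the same check shows why the one-second requirement caps the usable throughput of a fixed-volume precipitator.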
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. This process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state of the art control system design techniques. The process was studied first experimentally under carefully selected operating conditions, which resemble the ranges employed practically under stable and efficient conditions. Data were collected at steady state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely: a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and the backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and the model-predicted concentration profiles at steady state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and were incorporated in the model equations. The model equations comprise a stiff differential-algebraic system. This system was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured.
A very good agreement of the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) as well as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each control scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection. For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops pairing rotor speed with raffinate concentration and solvent flowrate with extract concentration showed weak interaction.
Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interactions, time delays, and input-output variable constraints.
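The RGA-based pairing analysis mentioned above can be sketched directly from a steady-state gain matrix: the RGA is the element-wise product of the gain matrix with the transpose of its inverse, and diagonal elements near 1 support diagonal pairing. The gain matrix below is purely illustrative, not the thesis's identified model:

```python
import numpy as np

def relative_gain_array(K):
    """RGA = K * (K^-1)^T, element-wise (Hadamard) product."""
    return K * np.linalg.inv(K).T

# Illustrative 2x2 steady-state gains: rows = raffinate / extract concentration,
# columns = rotor speed / solvent flowrate (values assumed for the sketch).
K = np.array([[2.0, 0.5],
              [0.4, 1.5]])
rga = relative_gain_array(K)
# Rows and columns of an RGA always sum to 1; diagonal elements close to 1
# indicate weak loop interaction, favouring the diagonal pairing.
print(rga)
```

For this assumed gain matrix the diagonal RGA elements come out close to 1, which is the kind of result that justifies the weakly interacting pairings reported in the abstract.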
Abstract:
The objective of this work has been to investigate the principle of combined bioreaction and separation in a simulated counter-current chromatographic bioreactor-separator (SCCR-S) system. The SCCR-S system consisted of twelve 5.4 cm i.d. × 75 cm long columns packed with calcium-charged cross-linked polystyrene resin. Three bioreactions were studied, namely the saccharification of modified starch to maltose and dextrin using the enzyme maltogenase, the hydrolysis of lactose to galactose and glucose in the presence of the enzyme lactase, and the biosynthesis of dextran from sucrose using the enzyme dextransucrase. Combined bioreaction and separation has been successfully carried out in the SCCR-S system for the saccharification of modified starch to maltose and dextrin. The effects of the operating parameters (switch time, eluent flowrate, feed concentration and enzyme activity) on the performance of the SCCR-S system were investigated. By using an eluent of dilute enzyme solution, starch conversions of up to 60% were achieved using lower amounts of enzyme than the theoretical amount required by a conventional bioreactor to produce the same amount of maltose over the same time period. Comparing the SCCR-S system to a continuous annular chromatograph (CRAC) for the saccharification of modified starch showed that the SCCR-S system required only 34.6-47.3% of the amount of enzyme required by the CRAC. The SCCR-S system was operated in both batch and continuous modes as a bioreactor-separator for the hydrolysis of lactose to galactose and glucose. By operating the system in the continuous mode, the operating parameters were further investigated. During these experiments the eluent was deionised water and the enzyme was introduced into the system through the same port as the feed.
The galactose produced was retarded and moved with the stationary phase, to be purged as the galactose rich product (GalRP), while the glucose moved with the mobile phase and was collected as the glucose rich product (GRP). By operating at lactose feed concentrations of up to 30% w/v, complete conversions were achieved using only 48% of the theoretical amount of enzyme required by a conventional bioreactor to hydrolyse the same amount of lactose over the same time period. The main operating parameters affecting the performance of the SCCR-S system operating in the batch mode were investigated and the results compared to those of the continuous operation of the SCCR-S system. During the biosynthesis of dextran in the SCCR-S system, a method of on-line regeneration of the resin was required to operate the system continuously. Complete conversion was achieved at sucrose feed concentrations of 5% w/v, with fructose rich products (FRP) of up to 100% obtained. The dextran rich products were contaminated by small amounts of glucose and levan formed during the bioreaction. Mathematical modelling and computer simulation of the SCCR-S system operating in the continuous mode for the hydrolysis of lactose have been carried out.
Abstract:
It is generally assumed when using Bayesian inference methods for neural networks that the input data contain no noise. For real-world (errors-in-variables) problems this is clearly an unsafe assumption. This paper presents a Bayesian neural network framework which accounts for input noise, provided that a model of the noise process exists. In the limit where the noise process is small and symmetric it is shown, using the Laplace approximation, that this method adds an extra term to the usual Bayesian error bar which depends on the variance of the input noise process. Further, by treating the true (noiseless) input as a hidden variable, and sampling this jointly with the network's weights using a Markov chain Monte Carlo method, it is demonstrated that it is possible to infer the regression over the noiseless input. This leads to the possibility of training an accurate model of a system using less accurate, or more uncertain, data. This is demonstrated on both a synthetic noisy sine wave problem and a real problem of inferring the forward model for a satellite radar backscatter system used to predict sea surface wind vectors.
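The extra error-bar term can be illustrated numerically. Under the small, symmetric input-noise limit, the predictive variance gains a term proportional to the input-noise variance times the squared local slope of the regression, on top of the usual output-noise and weight-uncertainty terms. The values below are made up for the sketch; `weight_term` simply stands in for the usual weight-uncertainty contribution:

```python
def predictive_variance(sigma_y2, weight_term, sigma_x2, dy_dx):
    """Laplace-approximation error bar with an input-noise correction:
    sigma^2 = sigma_y^2 + (weight-uncertainty term) + sigma_x^2 * (dy/dx)^2."""
    return sigma_y2 + weight_term + sigma_x2 * dy_dx ** 2

# Steeper regions of the regression inflate the error bar more:
flat = predictive_variance(0.01, 0.005, 0.04, dy_dx=0.1)
steep = predictive_variance(0.01, 0.005, 0.04, dy_dx=2.0)
assert steep > flat
```

The sketch makes the qualitative point of the paper's result visible: where the regression is flat, input noise barely matters; where it is steep, the same input noise widens the error bar substantially.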
Abstract:
The topic of this thesis is the development of knowledge based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software which is able to exhibit a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge based software are presented and a review is given of some of the systems that have been developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert systems approach. The thesis then proposes an approach which is based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check whether the use of the chosen technique was semantically sound, i.e. whether the results obtained would be meaningful. Current systems, in contrast, can only perform what can be considered syntactic checks. The prototype system that has been implemented to explore the feasibility of such an approach is presented; it has been designed as an enhanced variant of a conventional style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data, and identifying sets of requirements that should be met for the application of the statistical techniques to be valid. The areas of statistics covered in the prototype are measures of association and tests of location.
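The semantic-soundness check described above can be sketched as metadata-driven validation: each variable carries statistically relevant metadata (here, its measurement scale), and a technique request is refused when its requirements are not met. The scales, rule table and function names are hypothetical illustrations, not the prototype's actual design:

```python
# Measurement scales ordered from weakest to strongest.
SCALES = {"nominal": 0, "ordinal": 1, "interval": 2, "ratio": 3}

# Minimum scale each technique requires of its variables (illustrative rules).
REQUIREMENTS = {
    "pearson_correlation": "interval",
    "spearman_correlation": "ordinal",
    "chi_squared_association": "nominal",
}

def semantically_valid(technique, *variable_scales):
    """True if every variable meets the technique's minimum measurement scale."""
    needed = SCALES[REQUIREMENTS[technique]]
    return all(SCALES[s] >= needed for s in variable_scales)

# A Pearson correlation on an ordinal variable fails the semantic check,
# even though it would pass any purely syntactic check:
assert semantically_valid("pearson_correlation", "ratio", "interval")
assert not semantically_valid("pearson_correlation", "ordinal", "interval")
```

This is the essential difference the thesis draws: a syntactic check only verifies that two numeric columns were supplied, while the semantic check consults what the numbers mean.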
Abstract:
An uptake system was developed using Caco-2 cell monolayers and the dipeptide glycyl-[3H]L-proline as a probe compound. Glycyl-[3H]L-proline uptake was via the di-/tripeptide transport system (DTS) and exhibited concentration-, pH- and temperature-dependency. Dipeptides inhibited uptake of the probe, and the design of the system allowed competitors to be ranked against one another with respect to affinity for the transporter. The structural features required to ensure or increase interaction with the DTS were defined by studying the effect of a series of glycyl-L-proline and angiotensin-converting enzyme (ACE)-inhibitor (SQ-29852) analogues on the uptake of the probe. The SQ-29852 structure was divided into six domains (A-F) and competitors were grouped into series depending on structural variations within specific regions. Domain A was found to prefer a hydrophobic function, such as a phenyl group, and was intolerant of positive charges and H+-acceptors and donors. SQ-29852 analogues were more tolerant of substitutions in the C domain, compared to glycyl-L-proline analogues, suggesting that interactions along the length of the SQ-29852 molecule may override the effects of substitutions in the C domain. SQ-29852 analogues showed a preference for a positive function, such as an amine group, in this region, but dipeptide structures favoured an uncharged substitution. Lipophilic substituents in domain D increased the affinity of SQ-29852 analogues for the DTS. A similar effect was observed for ACE-NEP inhibitor analogues. Domain E, corresponding to the carboxyl group, was found to be tolerant of esterification for SQ-29852 analogues but not for dipeptides. Structural features which may increase interaction for one series of compounds may not have the same effect for another series, indicating that the presence of multiple recognition sites on a molecule may override the deleterious effect of any one change.
Modifying current, poorly absorbed peptidomimetic structures to fit the proposed hypothetical model may improve oral bioavailability by increasing affinity for the DTS. The stereochemical preference of the transporter was explored using four series of compounds (SQ-29852, lysylproline, alanylproline and alanylalanine enantiomers). The L,L stereochemistry was the preferred conformation for all four series, agreeing with previous studies. However, D,D enantiomers were shown in some cases to be substrates for the DTS, although exhibiting a lower affinity than their L,L counterparts. All the ACE-inhibitors and β-lactam antibiotics investigated produced a degree of inhibition of the probe, and thus show some affinity for the DTS. This contrasts with previous reports that found several ACE inhibitors to be absorbed via a passive process, suggesting that compounds are capable of binding to the transporter site and inhibiting the probe without being translocated into the cell. This was also shown to be the case for an oligodeoxynucleotide conjugated to a lipophilic group (vitamin E), and highlights the possibility that other orally administered drug candidates may exert non-specific effects on the DTS and possibly have a nutritional impact. Molecular modelling of selected ACE-NEP inhibitors revealed that the three carbonyl functions can be oriented in a similar direction, and this conformation was found to exist in a local energy-minimised state, indicating that the carbonyls may be involved in hydrogen-bond formation with the binding site of the DTS.
Abstract:
Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. Techniques for identifying the parameters of linear dynamic models using vibration test data have attracted considerable interest recently. However, no method has received general acceptance, due to a number of difficulties. These difficulties are mainly due to (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data, and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model as well as a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and the incomplete set of eigen-data is measured. The parameters are then identified by iterative updating of the initial estimates, by sensitivity analysis, using eigenvalues or both eigenvalues and eigenvectors of the structure before and after perturbation. It is shown that, with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well known Bayesian approach is used to minimize the least squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness is also determined using the frequency response data of the unmodified structure by a structural modification technique. Thus, mass or stiffness do not have to be added physically.
The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.
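The iterative, sensitivity-based updating of parameters from eigenvalues can be illustrated in miniature: perturb a parameterised stiffness model, compare predicted with "measured" eigenvalues, and correct the parameter estimate by least squares on a finite-difference sensitivity. This is a toy two-degree-of-freedom example with an assumed stiffness pattern, not the thesis's formulation:

```python
import numpy as np

def eigvals(k):
    """Eigenvalues of a 2-DOF spring-mass model with unknown stiffness k."""
    K = np.array([[2.0 * k, -k], [-k, k]])
    M = np.eye(2)                       # unit masses (diagonal mass matrix)
    return np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K).real)

k_true, k_est = 5.0, 3.0                # 'measured' structure vs initial model
measured = eigvals(k_true)
for _ in range(20):                     # iterative sensitivity update
    predicted = eigvals(k_est)
    dk = 1e-6
    sensitivity = (eigvals(k_est + dk) - predicted) / dk
    # least-squares correction from the eigenvalue residual
    k_est += np.linalg.lstsq(sensitivity[:, None],
                             measured - predicted, rcond=None)[0][0]

assert abs(k_est - k_true) < 1e-6
```

Because this toy model is linear in k the iteration converges almost immediately; the thesis's contribution lies in making such updates work for incomplete, noisy eigen-data on realistic models.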
Abstract:
This thesis deals with the problem of information systems design for corporate management. It shows that the results of applying current approaches to Management Information Systems and Corporate Modelling fully justify a fresh look at the problem. The thesis develops an approach to design based on cybernetic principles and theories. It looks at management as an informational process and discusses the relevance of regulation theory to its practice. The work proceeds around the concept of change and its effects on the organization's stability and survival. The idea of looking at organizations as viable systems is discussed and a design to enhance survival capacity is developed. It takes Ashby's theory of adaptation and developments on ultra-stability as a theoretical framework and, considering the conditions for learning and foresight, deduces that a design should include three basic components: a dynamic model of the organization-environment relationships; a method to spot significant changes in the values of the essential variables and in a certain set of parameters; and a controller able to conceive and change the other two elements and to make choices among alternative policies. Further consideration of the conditions for rapid adaptation in organisms composed of many parts, and of the law of Requisite Variety, determines that successful adaptive behaviour requires a certain functional organization. Beer's model of viable organizations is put in relation to Ashby's theory of adaptation and regulation. The use of the ultra-stable system as an abstract unit of analysis permits the development of a rigorous taxonomy of change: it starts by distinguishing between change within behaviour and change of behaviour, completing the classification with organizational change. It relates these changes to the logical categories of learning, connecting the topic of information system design with that of organizational learning.
Abstract:
This thesis presents the results from an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, beta, delta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal to noise ratio that is obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions able to afford the high costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
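A standard first step in dynamical-systems analysis of a single-channel recording is time-delay embedding, which reconstructs a state-space trajectory from one observable. A minimal, generic sketch (a noisy sinusoid stands in for MEG data; the embedding dimension and delay are arbitrary choices, not values from the thesis):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens-style delay embedding of a 1-D signal:
    each row is (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Single-channel 'recording': a noisy sinusoid in place of real MEG data.
t = np.linspace(0, 8 * np.pi, 1000)
signal = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(1000)
states = delay_embed(signal, dim=3, tau=25)
print(states.shape)    # (950, 3)
```

The rows of `states` approximate points on the underlying attractor, which is the object the nonlinear methods described above then characterise.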
River basin surveillance using remotely sensed data: a water resources information management system
Abstract:
This thesis describes the development of an operational river basin water resources information management system. The river or drainage basin is the fundamental unit of the system, both in the modelling and prediction of hydrological processes and in the monitoring of the effect of catchment management policies. A primary concern of the study is the collection of sufficient, and sufficiently accurate, information to model hydrological processes. Remote sensing, in combination with conventional point source measurement, can be a valuable source of information, but is often overlooked by hydrologists due to the cost of acquisition and processing. This thesis describes a number of cost effective methods of acquiring remotely sensed imagery, from airborne video survey to real time ingestion of meteorological satellite data. Inexpensive micro-computer systems and peripherals are used throughout to process and manipulate the data. Spatial information systems provide a means of integrating these data with topographic and thematic cartographic data, and historical records. For the system to have any real potential the data must be stored in a readily accessible format and be easily manipulated within the database. The design of efficient man-machine interfaces and the use of software engineering methodologies are therefore included in this thesis as a major part of the design of the system. The use of low cost technologies, from micro-computers to video cameras, enables the introduction of water resources information management systems into developing countries, where the potential benefits are greatest.
Abstract:
This thesis is a theoretical study of the accuracy and usability of models that attempt to represent the environmental control system of buildings in order to improve environmental design. These models have evolved from crude representations of a building and its environment through to an accurate representation of the dynamic characteristics of the environmental stimuli on buildings. Each generation of models has had its own particular influence on built form. This thesis analyses the theory, structure and data of such models in terms of their accuracy of simulation and therefore their validity in influencing built form. The models are also analysed in terms of their compatibility with the design process and hence their ability to aid designers. The conclusions are that such models are unlikely to improve environmental performance since: (a) the models can only be applied to a limited number of building types; (b) they can only be applied to a restricted number of the characteristics of a design; (c) they can only be employed after many major environmental decisions have been made; (d) the data used in models is inadequate and unrepresentative; and (e) models do not account for occupant interaction in environmental control. It is argued that further improvements in the accuracy of simulation of environmental control will not significantly improve environmental design. This is based on the premise that strategic environmental decisions are made at the conceptual stages of design, whereas models influence the detailed stages of design. It is hypothesised that if models are to improve environmental design, it must be through the analysis of building typologies, which provides a method of feedback between models and the conceptual stages of design.
Field studies are presented to describe a method by which typologies can be analysed and a theoretical framework is described which provides a basis for further research into the implications of the morphology of buildings on environmental design.
Abstract:
This thesis reports the results of DEM (Discrete Element Method) simulations of rotating drums operated in a number of different flow regimes. DEM simulations of drum granulation have also been conducted. The aim was to demonstrate that a realistic simulation is possible, and to further the understanding of the particle motion and granulation processes in a rotating drum. The simulation model has shown good qualitative and quantitative agreement with other published experimental results. A two-dimensional bed of 5000 disc particles, with properties similar to glass, has been simulated in the rolling mode (Froude number 0.0076) with a fractional drum fill of approximately 30%. Particle velocity fields in the cascading layer, bed cross-section, and at the drum wall have shown good agreement with experimental PEPT data. Particle avalanches in the cascading layer have been shown to be consistent with single layers of particles cascading down the free surface towards the drum wall. Particle slip at the drum wall has been shown to depend on angular position, and ranged from 20% at the toe and shoulder to less than 1% at the mid-point. Three-dimensional DEM simulations of a moderately cascading bed of 50,000 spherical elastic particles (Froude number 0.83) with a fractional fill of approximately 30% have also been performed. The drum axis was inclined by 5° to the horizontal, with periodic boundaries at the ends of the drum. The mean period of bed circulation was found to be 0.28 s. A liquid binder was added to the system using a spray model based on the concept of a wet surface energy. Granule formation and breakage processes have been demonstrated in the system.
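The flow-regime classification above rests on the rotational Froude number, Fr = ω²R/g. A quick sketch recovering drum speeds consistent with the two quoted regimes; the drum radius is an assumed value for illustration, not taken from the thesis:

```python
import math

def froude_number(omega_rad_s, radius_m, g=9.81):
    """Rotational Froude number Fr = omega^2 * R / g."""
    return omega_rad_s ** 2 * radius_m / g

def omega_for_froude(fr, radius_m, g=9.81):
    """Drum angular speed (rad/s) giving a target Froude number."""
    return math.sqrt(fr * g / radius_m)

# Assumed drum radius of 0.25 m:
rolling = omega_for_froude(0.0076, 0.25)    # rolling regime, Fr = 0.0076
cascading = omega_for_froude(0.83, 0.25)    # cascading regime, Fr = 0.83
assert cascading > rolling
```

The two orders of magnitude between the quoted Froude numbers translate into roughly a tenfold difference in rotation speed for the same drum, which is what separates gentle rolling from vigorous cascading.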
Abstract:
Manufacturing planning and control systems are fundamental to the successful operation of a manufacturing organisation. In order to improve their business performance, significant investment is made by companies in planning and control systems; however, not all companies realise the benefits sought. Many companies continue to suffer from high levels of inventory, shortages, obsolete parts, poor resource utilisation and poor delivery performance. This thesis argues that the fit between the planning and control system and the manufacturing organisation is a crucial element of success. The design of appropriate control systems is, therefore, important. The different approaches to the design of manufacturing planning and control systems are investigated. It is concluded that there is no provision within these design methodologies to properly assess the impact of a proposed design on the manufacturing facility. Consequently, an understanding of how a new (or modified) planning and control system will perform in the context of the complete manufacturing system is unlikely to be gained until after the system has been implemented and is running. There are many modelling techniques available; however, discrete-event simulation is unique in its ability to model the complex dynamics inherent in manufacturing systems, of which the planning and control system is an integral component. The existing application of simulation to manufacturing control system issues is limited: although operational issues are addressed, application to the more fundamental design of control systems is rarely, if at all, considered. The lack of a suitable simulation-based modelling tool does not help matters. The requirements of a simulation tool capable of modelling a host of different planning and control systems are presented. It is argued that only through the application of object-oriented principles can these extensive requirements be achieved.
This thesis reports on the development of an extensible class library called WBS/Control, which is based on object-oriented principles and discrete-event simulation. The functionality, both current and future, offered by WBS/Control means that different planning and control systems can be modelled: not only the more standard implementations but also hybrid systems and new designs. The flexibility implicit in the development of WBS/Control supports its application to design and operational issues. WBS/Control wholly integrates with an existing manufacturing simulator to provide a more complete modelling environment.
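The object-oriented idea of modelling exchangeable planning and control policies inside a discrete-event simulation can be sketched with a minimal event queue and a policy base class. All names here are hypothetical illustrations and bear no relation to the actual WBS/Control class library:

```python
import heapq

class ControlPolicy:
    """Base class: subclasses decide how many orders to release at each event."""
    def release(self, time, backlog):
        raise NotImplementedError

class FixedBatchPolicy(ControlPolicy):
    def release(self, time, backlog):
        return min(backlog, 2)          # release at most two orders per event

def simulate(policy, arrivals):
    """Tiny discrete-event loop: events are (time, orders_arriving) tuples."""
    queue = list(arrivals)
    heapq.heapify(queue)
    backlog, released = 0, 0
    while queue:
        time, orders = heapq.heappop(queue)
        backlog += orders
        r = policy.release(time, backlog)
        backlog -= r
        released += r
    return released, backlog

released, backlog = simulate(FixedBatchPolicy(), [(0, 3), (1, 1), (2, 0)])
```

Swapping in a different `ControlPolicy` subclass changes the control behaviour without touching the simulation loop, which is the extensibility argument the thesis makes for an object-oriented design.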
Abstract:
The work described in this thesis focuses on the use of a design-of-experiments approach in a multi-well mini-bioreactor to enable the rapid establishment of high yielding production phase conditions in yeast, which is an increasingly popular host system in both academic and industrial laboratories. Using green fluorescent protein secreted from the yeast Pichia pastoris, a scalable predictive model of protein yield per cell was derived from 13 sets of conditions, each with three factors (temperature, pH and dissolved oxygen) at three levels, and was directly transferable to a 7 L bioreactor. This was in clear contrast to the situation in shake flasks, where the process parameters cannot be tightly controlled. By further optimising both the accumulation of cell density in batch and the fed-batch induction regime, additional yield improvement was found to be additive to the per-cell yield of the model. A separate study also demonstrated that improving biomass improved product yield in a second yeast species, Saccharomyces cerevisiae. Investigations of cell wall hydrophobicity in high cell density P. pastoris cultures indicated that cell wall hydrophobin (protein) composition changes with growth phase, cells becoming more hydrophobic in log growth than in the lag or stationary phases. This is possibly due to an increased occurrence of proteins associated with cell division. Finally, the modelling approach was validated in mammalian cells, showing its flexibility and robustness. In summary, the strategy presented in this thesis has the benefit of reducing process development time in recombinant protein production, directly from bench to bioreactor.