47 results for scalable
in Aston University Research Archive
Abstract:
Background: The optimisation and scale-up of process conditions leading to high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences. Typical experiments rely on varying selected parameters through repeated rounds of trial-and-error optimisation. To rationalise this, several groups have recently adopted the 'design of experiments' (DoE) approach frequently used in industry. Studies have focused on parameters such as medium composition, nutrient feed rates and induction of expression in shake flasks or bioreactors, as well as oxygen transfer rates in micro-well plates. In this study we wanted to generate a predictive model that described small-scale screens and to test its scalability to bioreactors. Results: Here we demonstrate how the use of a DoE approach in a multi-well mini-bioreactor permitted the rapid establishment of high-yielding production phase conditions that could be transferred to a 7 L bioreactor. Using green fluorescent protein secreted from Pichia pastoris, we derived a predictive model of protein yield as a function of the three most commonly varied process parameters: temperature, pH and the percentage of dissolved oxygen in the culture medium. Importantly, when yield was normalised to culture volume and density, the model was scalable from mL to L working volumes. By increasing pre-induction biomass accumulation, model-predicted yields were further improved. Yield improvement was most significant, however, on varying the fed-batch induction regime to minimise methanol accumulation, so that the productivity of the culture increased throughout the whole induction period. These findings suggest the importance of matching the rate of protein production with the host metabolism. Conclusion: We demonstrate how a rational, stepwise approach to recombinant protein production screens can reduce process development time.
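To make the DoE idea concrete, here is a minimal sketch of fitting a quadratic response-surface model of yield against coded temperature, pH and dissolved-oxygen levels; the design points and yield values below are invented for illustration, not the study's data.

```python
# Hypothetical sketch: fitting a quadratic response-surface model of
# normalised protein yield against temperature, pH and % dissolved oxygen,
# in the spirit of a DoE screen. All data values are invented.
import numpy as np

# Coded factor levels (-1, 0, +1) for temperature, pH and dissolved oxygen,
# one row per mini-bioreactor run, plus (invented) measured yields.
X = np.array([
    [-1, -1,  0], [ 1, -1,  0], [-1,  1,  0], [ 1,  1,  0],
    [-1,  0, -1], [ 1,  0, -1], [-1,  0,  1], [ 1,  0,  1],
    [ 0, -1, -1], [ 0,  1, -1], [ 0, -1,  1], [ 0,  1,  1],
    [ 0,  0,  0],
], dtype=float)
y = np.array([0.62, 0.71, 0.55, 0.80, 0.60, 0.75,
              0.58, 0.77, 0.66, 0.69, 0.64, 0.72, 0.85])

def design_matrix(X):
    """Expand coded factors into a full quadratic model:
    intercept, linear, two-way interaction and squared terms."""
    t, p, o = X.T
    return np.column_stack([np.ones(len(X)), t, p, o,
                            t * p, t * o, p * o, t**2, p**2, o**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted coefficients:", np.round(beta, 3))

# Predict yield at a new coded operating point (slightly warm, neutral pH).
print("predicted yield:", design_matrix(np.array([[0.5, 0.0, 0.2]])) @ beta)
```

Once fitted, such a model can be interrogated for the factor combination that maximises predicted yield, which is what makes the screen transferable to larger vessels when yield is suitably normalised.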
Abstract:
The computer systems of today are characterised by data and program control that are distributed functionally and geographically across a network. A major issue of concern in this environment is the operating system activity of resource management for different processors in the network. To ensure equity in load distribution and improved system performance, load balancing is often undertaken. The research conducted in this field so far has been primarily concerned with a small set of algorithms operating on tightly-coupled distributed systems. More recent studies have investigated the performance of such algorithms in loosely-coupled architectures, but using a small set of processors. This thesis describes a simulation model developed to study the behaviour and general performance characteristics of a range of dynamic load balancing algorithms. Further, the scalability of these algorithms is discussed and a range of regionalised load balancing algorithms developed. In particular, we examine the impact of network diameter and delay on the performance of such algorithms across a range of system workloads. The results produced suggest that the performance of simple dynamic policies is scalable, but that such policies lack the load stability of more complex global average algorithms.
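As a minimal sketch of the regionalised idea (assuming a ring topology and unit-task migrations; this is not the thesis's simulator), each node compares its load only with neighbours within a fixed region radius:

```python
# Hypothetical sketch of one regionalised dynamic load-balancing step:
# nodes consider only neighbours within a network "region" rather than
# polling the whole system, which is what makes the policy scalable.
import random

def region(node, n_nodes, radius):
    """Neighbours whose ring distance from `node` is within the region radius."""
    return [m for m in range(n_nodes)
            if m != node and min(abs(m - node), n_nodes - abs(m - node)) <= radius]

def balance_step(loads, radius=2, threshold=2):
    """One pass: each node migrates a unit of work to the least-loaded
    node in its region if the imbalance exceeds `threshold`."""
    for node in range(len(loads)):
        target = min(region(node, len(loads), radius), key=lambda m: loads[m])
        if loads[node] - loads[target] > threshold:
            loads[node] -= 1      # migrate one task; a fuller model would
            loads[target] += 1    # also charge a network transfer delay

loads = [random.randint(0, 20) for _ in range(16)]
for _ in range(50):
    balance_step(loads)
print(loads)  # loads converge towards a regional balance
```

A larger region radius trades communication cost (and sensitivity to network diameter and delay) against how globally balanced the final load distribution can be, which is the trade-off the thesis investigates.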
Abstract:
The work described in this thesis focuses on the use of a design-of-experiments approach in a multi-well mini-bioreactor to enable the rapid establishment of high-yielding production phase conditions in yeast, which is an increasingly popular host system in both academic and industrial laboratories. Using green fluorescent protein secreted from the yeast Pichia pastoris, a scalable predictive model of protein yield per cell was derived from 13 sets of conditions, each varying three factors (temperature, pH and dissolved oxygen) at three levels, and was directly transferable to a 7 L bioreactor. This was in clear contrast to the situation in shake flasks, where the process parameters cannot be tightly controlled. By further optimising the accumulation of cell density in batch and improving the fed-batch induction regime, additional yield improvement was found to be additive to the per-cell yield of the model. A separate study also demonstrated that improving biomass improved product yield in a second yeast species, Saccharomyces cerevisiae. Investigations of cell wall hydrophobicity in high cell density P. pastoris cultures indicated that cell wall hydrophobin (protein) composition changes with growth phase, cells becoming more hydrophobic in log growth than in lag or stationary phases, possibly due to an increased occurrence of proteins associated with cell division. Finally, the modelling approach was validated in mammalian cells, showing its flexibility and robustness. In summary, the strategy presented in this thesis has the benefit of reducing process development time in recombinant protein production, directly from bench to bioreactor.
Abstract:
In this paper a Markov chain based analytical model is proposed to evaluate the slotted CSMA/CA algorithm specified in the MAC layer of the IEEE 802.15.4 standard. The analytical model consists of two two-dimensional Markov chains, used to model the state transition of an 802.15.4 device during the period of a transmission and between two consecutive frame transmissions, respectively. By introducing the two Markov chains, a small number of Markov states is required and the scalability of the analytical model is improved. The analytical model is used to investigate the impact of the CSMA/CA parameters, the number of contending devices, and the data frame size on the network performance in terms of throughput and energy efficiency. It is shown by simulations that the proposed analytical model can accurately predict the performance of the slotted CSMA/CA algorithm for uplink, downlink and bidirectional traffic, in both acknowledgement and non-acknowledgement modes.
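For readers unfamiliar with such models, the sketch below shows the core computation on which they rest: the stationary distribution of a Markov chain, from which throughput-style metrics follow. The chain here is a tiny invented three-state caricature, not the paper's two two-dimensional chains.

```python
# Illustrative only: stationary distribution of a small discrete-time
# Markov chain, the basic operation underlying analytical models of
# slotted CSMA/CA. The transition matrix below is invented.
import numpy as np

# States: 0 = idle/backoff, 1 = clear channel assessment, 2 = transmitting.
P = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.2, 0.5],
              [0.7, 0.0, 0.3]])

# The stationary distribution pi satisfies pi P = pi, sum(pi) = 1;
# compute it as the left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
print("stationary distribution:", np.round(pi, 4))
print("fraction of slots spent transmitting:", round(float(pi[2]), 4))
```

In a full 802.15.4 model, the transmitting-state probability is combined with collision and frame-length terms to yield throughput and energy-efficiency predictions.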
Abstract:
In this paper we propose a 2R regeneration scheme based on a nonlinear optical loop mirror (NOLM) and optical filtering. We numerically investigate wavelength-division multiplexing (WDM) operation at a channel bit rate of 40 Gbit/s. In contrast to our previous work, we focus here on the regenerative characteristics and signal quality after a single transmission section, whose length is varied from 200 to 1000 km. © 2003 IEEE.
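For context, the power transfer of an ideal lossless NOLM takes the standard form below; the paper's actual device parameters are not restated here.

```latex
% Standard lossless-NOLM power transfer: \alpha is the coupler splitting
% ratio, \gamma the fibre nonlinear coefficient and L the loop length.
% The intensity-dependent transmission suppresses low-power noise ("0"s)
% while passing high-power marks ("1"s), which is the basis of 2R action.
P_{\mathrm{out}} = P_{\mathrm{in}}
  \left\{ 1 - 2\alpha(1-\alpha)
  \left[ 1 + \cos\!\big( (1-2\alpha)\,\gamma L P_{\mathrm{in}} \big) \right] \right\}
```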
Abstract:
We propose a 2R regeneration scheme based on a nonlinear optical loop mirror and optical filtering. The feasibility of wavelength-division multiplexing operation at 40 Gbit/s is numerically demonstrated. We examine the characteristics of one-step regeneration and discuss networking applications.
Abstract:
To guarantee QoS for multicast transmission, admission control for multicast sessions is needed. The probe-based multicast admission control (PBMAC) scheme is a simple and scalable approach. However, PBMAC suffers from the subsequent request problem, which can significantly reduce the maximum number of multicast sessions that a network can admit. In this letter, we describe the subsequent request problem and propose an enhanced PBMAC scheme to solve it. The enhanced scheme makes use of complementary probing and remarking, which require only minor modifications to the original scheme. By using a fluid-based analytical model, we are able to prove that the enhanced scheme can always admit a higher number of multicast sessions. Furthermore, we present validation of the analytical model using packet-based simulation. Copyright © 2005 The Institute of Electronics, Information and Communication Engineers.
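A minimal sketch of the basic probe-based admission decision follows; the loss model and thresholds are invented, and the enhanced scheme's complementary probing and remarking are not reproduced.

```python
# Hedged sketch of probe-based admission control for a multicast join:
# the receiver measures the loss of a short probe stream sent at the
# session's rate and admits the join only if loss stays below a threshold.
import random

LOSS_THRESHOLD = 0.02   # admit only if probe loss ratio is below 2% (assumed)

def send_probe(n_packets, congestion):
    """Simulate probing: each probe packet is dropped with probability
    equal to the current (hypothetical) congestion level."""
    received = sum(1 for _ in range(n_packets) if random.random() > congestion)
    return 1.0 - received / n_packets

def admit_session(congestion, n_packets=500):
    return send_probe(n_packets, congestion) < LOSS_THRESHOLD

print(admit_session(congestion=0.005))  # lightly loaded: likely admitted
print(admit_session(congestion=0.10))   # congested: likely rejected
```

The subsequent request problem arises because probes for later joins to an already-established session interact with in-progress traffic; the letter's complementary probing is designed to stop such probes from needlessly depressing the admittable session count.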
Abstract:
The use of hMSCs for allogeneic therapies requiring lot sizes of billions of cells will necessitate large-scale culture techniques such as the expansion of cells on microcarriers in bioreactors. Whilst much research investigating hMSC culture on microcarriers has focused on growth, much less addresses their harvesting for passaging or as a step towards cryopreservation and storage. A successful new harvesting method has recently been outlined for cells grown on SoloHill microcarriers in a 5 L bioreactor [1]. Here, this new method is set out in detail, harvesting being defined as a two-step process involving cell 'detachment' from the microcarriers' surface followed by the 'separation' of the two entities. The new detachment method is based on theoretical concepts originally developed for secondary nucleation due to agitation. Based on this theory, it is suggested that a short period (here 7 min) of intense agitation in the presence of a suitable enzyme should detach the cells from the relatively large microcarriers. In addition, once detached, the cells should not be damaged because they are smaller than the Kolmogorov microscale. Detachment was then successfully achieved for hMSCs from two different donors using microcarrier/cell suspensions up to 100 mL in a spinner flask. In both cases, harvesting was completed by separating cells from microcarriers using a Steriflip® vacuum filter. The overall harvesting efficiency was >95% and, after harvesting, the cells maintained all the attributes expected of hMSCs. The underlying theoretical concepts suggest that the method is scalable, and this aspect is discussed too. © 2014 The Authors.
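The damage argument turns on the Kolmogorov microscale, the standard estimate of the smallest turbulent eddy size; cells smaller than this scale experience an essentially laminar local flow rather than eddy impacts.

```latex
% Kolmogorov microscale: the smallest turbulent eddy size, below which
% viscous dissipation dominates. \nu is the kinematic viscosity of the
% medium and \varepsilon_T the local specific energy dissipation rate.
\lambda_K = \left( \frac{\nu^{3}}{\varepsilon_T} \right)^{1/4}
```

Because intense agitation raises the dissipation rate and so shrinks the eddies, the microcarriers (large relative to the eddies) feel detaching stresses while the freed cells (small relative to the eddies) are spared, which is why the same short, vigorous step both detaches and preserves.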
Abstract:
Computational and communication complexities call for distributed, robust, and adaptive control. This paper proposes a promising way of bottom-up design of distributed control in which simple controllers are responsible for individual nodes. The overall behavior of the network can be achieved by interconnecting such controlled loops, for example in cascade control, and by enabling the individual nodes to share information with their neighbors, without aiming at an unattainable global solution. The problem is addressed by employing a fully probabilistic design, which can cope with inherent uncertainties, can be implemented adaptively, and provides a systematic and rich way of sharing information. This paper elaborates the overall solution, applies it to the linear-Gaussian case, and provides simulation results.
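The fully probabilistic design criterion invoked here is conventionally written as the minimisation of a Kullback-Leibler divergence; the notation below is assumed for illustration.

```latex
% Fully probabilistic design: choose the randomised controller that
% minimises the Kullback-Leibler divergence of the closed-loop
% distribution f of the data trajectory D from an ideal distribution f^I
% expressing the control aims.
\min_{\text{controller}} \; D\!\left( f \,\middle\|\, f^{I} \right)
  = \min_{\text{controller}} \int f(D)\, \ln \frac{f(D)}{f^{I}(D)} \,\mathrm{d}D
```

Because both aims and constraints are expressed as probability distributions, neighbouring nodes can share information simply by exchanging (moments of) distributions, which is what makes the bottom-up interconnection of simple local loops tractable.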
Abstract:
The development of novel, affordable and efficacious therapeutics will be necessary to ensure the continued progression in the standard of global healthcare. With the potential to address previously unmet patient needs as well as tackling the social and economic effects of chronic and age-related conditions, cell therapies will lead the new generation of healthcare products set to improve health and wealth across the globe. However, if the many small to medium enterprises (SMEs) engaged in much of the commercialization effort are to successfully traverse the 'Valley of Death' as they progress through clinical trials, a number of challenges must be overcome. The challenges are no longer purely biological; a series of engineering and manufacturing issues must also be considered and addressed.
Abstract:
This report presents and evaluates a novel idea for scalable lossy colour image coding with Matching Pursuit (MP) performed in a transform domain. The benefits of performing MP in the transform domain are analysed in detail. The main contribution of this work is extending MP with wavelets to colour coding and proposing a coding method. We exploit correlations between image subbands after wavelet transformation in RGB colour space. Then, a new and simple quantisation and coding scheme for the colour MP decomposition, based on Run Length Encoding (RLE) and inspired by the idea of coding indexes in relational databases, is applied. As a final coding step, arithmetic coding is used, assuming uniform distributions of MP atom parameters. The target application is compression at low and medium bit-rates. Coding performance is compared to JPEG 2000, showing the potential to outperform the latter if data models more sophisticated than the uniform ones were used for the arithmetic coder. Results are presented for grayscale and colour coding of 12 standard test images.
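As an illustration of the run-length idea (a generic sketch; the report's exact treatment of runs and atom parameters is not reproduced), consider coding the sparse significance map of MP atoms in a subband, where long zero runs compress well:

```python
# Minimal sketch of run-length encoding of the kind used to code sparse
# MP atom index maps. What exactly is run-length coded, and how runs are
# quantised before arithmetic coding, follows the paper and is not shown.
from itertools import groupby

def rle_encode(symbols):
    """Collapse a sequence into (symbol, run_length) pairs."""
    return [(s, len(list(run))) for s, run in groupby(symbols)]

def rle_decode(pairs):
    return [s for s, n in pairs for _ in range(n)]

# Hypothetical significance map of MP atoms within a wavelet subband.
bitmap = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
coded = rle_encode(bitmap)
print(coded)                        # [(0, 4), (1, 1), (0, 6), (1, 2), (0, 3)]
assert rle_decode(coded) == bitmap  # lossless round trip
```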
Abstract:
The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven-dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit-Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived, and parameter subsets with no equi-output groups are determined by numerical experiments to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models. For acceleration of exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first-order and second-order error derivatives with respect to a re-calibration error function is derived, implemented and tested. A distributed object environment for point-and-click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross-validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure, manageable, fault-tolerant, open, distributed, agile, Total Quality Managed, ISO 9000+ conformant, Just in Time manufacturing systems.
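The "local linear map with Jacobian correction" idea can be shown in miniature: below, an interpolated joint-space guess for a hypothetical planar 3-dof arm is refined by pseudoinverse-Jacobian steps. This illustrates only the generic correction; the thesis's augmented Jacobian and solution-group machinery are not reproduced.

```python
# Hedged sketch: Jacobian-based correction of an interpolated inverse
# kinematic guess for a planar 3-dof arm with invented link lengths.
import numpy as np

L1, L2, L3 = 1.0, 0.8, 0.5   # hypothetical link lengths

def fk(q):
    """Forward kinematics: end-effector (x, y) of the planar arm."""
    a1, a12, a123 = q[0], q[0] + q[1], q[0] + q[1] + q[2]
    return np.array([L1*np.cos(a1) + L2*np.cos(a12) + L3*np.cos(a123),
                     L1*np.sin(a1) + L2*np.sin(a12) + L3*np.sin(a123)])

def jacobian(q, eps=1e-6):
    """Central-difference numerical Jacobian of fk w.r.t. joint angles."""
    J = np.zeros((2, 3))
    for i in range(3):
        dq = np.zeros(3); dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

def correct(q_guess, x_target, iters=10):
    """Refine an interpolated joint-space guess with pseudoinverse steps."""
    q = q_guess.copy()
    for _ in range(iters):
        q += np.linalg.pinv(jacobian(q)) @ (x_target - fk(q))
    return q

q = correct(np.array([0.3, 0.4, 0.2]), np.array([1.5, 1.0]))
print(np.round(fk(q), 4))  # ~ [1.5, 1.0]
```

Because the arm is redundant (3 joints, 2 task coordinates), the pseudoinverse picks one point on a one-dimensional solution fibre; acting on that point with a solution-group operator would generate the alternative configurations the abstract calls cyclical solutions.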
Abstract:
Through the application of novel signal processing techniques we are able to measure physical measurands with both high accuracy and low noise susceptibility. The first interrogation scheme is based upon a CCD spectrometer. We compare different algorithms for resolving the Bragg wavelength from a low-resolution discrete representation of the reflected spectrum, and present optimal processing methods for providing a high-integrity measurement from the reflection image. Our second sensing scheme uses a novel network of sensors to measure the distributed strain response of a mechanical system. Using neural network processing methods we demonstrate the measurement capabilities of a scalable low-cost fibre Bragg grating sensor network. This network has been shown to be comparable in performance with existing fibre Bragg grating sensing techniques, at a greatly reduced implementation cost.
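One of the simplest algorithms in this family is centroid detection of the Bragg peak from the CCD image; the sketch below uses synthetic data and an assumed pixel-to-wavelength calibration, not the paper's interrogation system.

```python
# Illustrative sketch: recovering the Bragg wavelength from a
# low-resolution CCD reflection spectrum by centroid (centre-of-mass)
# detection. All data and calibration constants are synthetic.
import numpy as np

pixels = np.arange(64)
wavelength = 1545.0 + 0.05 * pixels   # nm; hypothetical linear calibration
true_bragg = 1546.37                  # nm

# Synthetic low-resolution Gaussian reflection peak with detector noise.
rng = np.random.default_rng(0)
spectrum = np.exp(-0.5 * ((wavelength - true_bragg) / 0.15) ** 2)
spectrum += rng.normal(0, 0.02, pixels.size)

# Centroid over pixels above a noise threshold: sub-pixel resolution
# despite the coarse 0.05 nm/pixel sampling.
mask = spectrum > 0.1 * spectrum.max()
bragg = np.sum(wavelength[mask] * spectrum[mask]) / np.sum(spectrum[mask])
print(f"estimated Bragg wavelength: {bragg:.3f} nm (true {true_bragg} nm)")
```

The interest in comparing such algorithms is exactly this sub-pixel behaviour: the thresholding and weighting choices determine how gracefully accuracy degrades as detector noise rises.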
Abstract:
The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high-efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial-scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high-efficiency cyclones for char removal, and a disk and doughnut quench column combined with a wet-walled electrostatic precipitator, which is directly mounted on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling of the reaction system was undertaken, enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity between the heated surface and reacting biomass particle of 12.1 m/s. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput, but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr; this should be considered in the future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry wood fed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and if longer operating runs had been attempted to offset the product losses arising from the difficulty of collecting all available product from a large-scale collection unit. The liquids collection system was highly efficient: modelling determined a liquid collection efficiency of above 99% on a mass basis. This was validated by mounting a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit, which enabled mass measurement of any condensable product escaping the unit and confirmed that the collection efficiency was indeed in excess of 99% on a mass basis.
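The link from ablation rate to reactor throughput is simple arithmetic, sketched below; only the ablation rate and temperature come from the text, while the contact area and wood density are assumed values chosen for illustration.

```python
# Back-of-envelope sketch: mass throughput from an ablation rate.
# mass rate = ablation rate x total wood-to-surface contact area x density.
ablation_rate = 0.63e-3   # m/s, measured for pine at 525 C (from the text)
contact_area  = 25e-4     # m^2, ASSUMED total particle contact area
wood_density  = 500.0     # kg/m^3, typical dry pine (ASSUMED)

mass_rate = ablation_rate * contact_area * wood_density   # kg/s
print(f"throughput: {mass_rate * 3600:.1f} kg/hr")        # ~ 2.8 kg/hr
```

With these illustrative numbers the estimate lands near the reported feeder-limited 2.3 kg/hr, and it makes clear why throughput scales with the blade-swept contact area rather than with reactor volume, which is the basis of the claimed inherent scalability.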
Abstract:
We present and evaluate a novel idea for scalable lossy colour image coding with Matching Pursuit (MP) performed in a transform domain. The idea is to exploit correlations in RGB colour space between image subbands after wavelet transformation rather than in the spatial domain. We propose a simple quantisation and coding scheme of colour MP decomposition based on Run Length Encoding (RLE) which can achieve comparable performance to JPEG 2000 even though the latter utilises careful data modelling at the coding stage. Thus, the obtained image representation has the potential to outperform JPEG 2000 with a more sophisticated coding algorithm.