33 results for scalability
in Aston University Research Archive
Abstract:
Most previous studies of university spinouts (USOs) have focused on what determines their formation from the perspectives of the entrepreneurs or of their parent universities. However, few studies have investigated how these entrepreneurial businesses actually grow and how their business models evolve in the process. This paper examines the evolution of USOs' business models over their different development phases. Using empirical evidence gathered from three comprehensive case studies, we explore how USOs' business models evolve over time, and the implications for the financial sustainability and operational scalability of these ventures. This paper extends existing research on the development of USOs, and highlights three themes for future research.
Abstract:
Background aims: The selection of medium and associated reagents for human mesenchymal stromal cell (hMSC) culture forms an integral part of manufacturing process development and must be suitable for multiple process scales and expansion technologies. Methods: In this work, we have expanded BM-hMSCs in fetal bovine serum (FBS)- and human platelet lysate (HPL)-containing media in both a monolayer and a suspension-based microcarrier process. Results: The introduction of HPL into the monolayer process increased the BM-hMSC growth rate at the first experimental passage by 0.049/day and 0.127/day for the two BM-hMSC donors compared with the FBS-based monolayer process. This increase in growth rate in HPL-containing medium was associated with an increase in the inter-donor consistency, with an inter-donor range of 0.406 cumulative population doublings after 18 days compared with 2.013 in FBS-containing medium. Identity and quality characteristics of the BM-hMSCs are also comparable between conditions in terms of colony-forming potential, osteogenic potential and expression of key genes during monolayer and post-harvest from microcarrier expansion. BM-hMSCs cultured on microcarriers in HPL-containing medium demonstrated a reduction in the initial lag phase for both BM-hMSC donors and an increased BM-hMSC yield after 6 days of culture to 1.20 ± 0.17 × 10^5 and 1.02 ± 0.005 × 10^5 cells/mL compared with 0.79 ± 0.05 × 10^5 and 0.36 ± 0.04 × 10^5 cells/mL in FBS-containing medium. Conclusions: This study has demonstrated that HPL, compared with FBS-containing medium, delivers increased growth and comparability across two BM-hMSC donors between monolayer and microcarrier culture, which will have key implications for process transfer during scale-up.
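The per-day growth rates and cumulative population doublings quoted above follow from standard exponential-growth formulae. A minimal sketch, using illustrative cell counts rather than the study's data:

```python
import math

def growth_rate(n0, nt, days):
    """Specific growth rate (per day) from start and end viable cell counts."""
    return math.log(nt / n0) / days

def cumulative_population_doublings(n0, nt):
    """Cumulative population doublings (cPD) over one passage."""
    return math.log2(nt / n0)

# Illustrative numbers only (not from the study): a culture growing
# from 2.0e5 to 1.6e6 cells over 6 days.
mu = growth_rate(2.0e5, 1.6e6, 6)
cpd = cumulative_population_doublings(2.0e5, 1.6e6)
print(f"growth rate = {mu:.3f}/day, doublings = {cpd:.1f}")
```

Summing the per-passage doublings over 18 days gives the cumulative figure used to compare inter-donor consistency.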
Abstract:
In this paper we describe the design and fabrication of a mechanical autonomous impact oscillator with a MEMS resonator as the frequency control element. The design has been developed with scalability to large 2-D arrays of coupled oscillators in mind. The dynamic behaviour of the impact oscillator was numerically studied and it was found that the geometric nonlinearity has an effect on the static pull-in voltage and equilibrium position. The external driving power can alter the frequency of the impact oscillator. The autonomous nature of the oscillator simplifies the complexity of the drive circuitry and is essential for large 2-D arrays.
Abstract:
Recent developments in service-oriented and distributed computing have created exciting opportunities for the integration of models in service chains to create the Model Web. This offers the potential for orchestrating web data and processing services, in complex chains; a flexible approach which exploits the increased access to products and tools, and the scalability offered by the Web. However, the uncertainty inherent in data and models must be quantified and communicated in an interoperable way, in order for its effects to be effectively assessed as errors propagate through complex automated model chains. We describe a proposed set of tools for handling, characterizing and communicating uncertainty in this context, and show how they can be used to 'uncertainty-enable' Web Services in a model chain. An example implementation is presented, which combines environmental and publicly-contributed data to produce estimates of sea-level air pressure, with estimates of uncertainty which incorporate the effects of model approximation as well as the uncertainty inherent in the observational and derived data.
Abstract:
Background: The optimisation and scale-up of process conditions leading to high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences. Typical experiments rely on varying selected parameters through repeated rounds of trial-and-error optimisation. To rationalise this, several groups have recently adopted the 'design of experiments' (DoE) approach frequently used in industry. Studies have focused on parameters such as medium composition, nutrient feed rates and induction of expression in shake flasks or bioreactors, as well as oxygen transfer rates in micro-well plates. In this study we wanted to generate a predictive model that described small-scale screens and to test its scalability to bioreactors. Results: Here we demonstrate how the use of a DoE approach in a multi-well mini-bioreactor permitted the rapid establishment of high yielding production phase conditions that could be transferred to a 7 L bioreactor. Using green fluorescent protein secreted from Pichia pastoris, we derived a predictive model of protein yield as a function of the three most commonly-varied process parameters: temperature, pH and the percentage of dissolved oxygen in the culture medium. Importantly, when yield was normalised to culture volume and density, the model was scalable from mL to L working volumes. By increasing pre-induction biomass accumulation, model-predicted yields were further improved. Yield improvement was most significant, however, on varying the fed-batch induction regime to minimise methanol accumulation so that the productivity of the culture increased throughout the whole induction period. These findings suggest the importance of matching the rate of protein production with the host metabolism. Conclusion: We demonstrate how a rational, stepwise approach to recombinant protein production screens can reduce process development time.
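As a sketch of the DoE idea described here, the snippet below fits a main-effects-plus-interactions model of yield against temperature, pH and dissolved oxygen from a hypothetical 2^3 factorial screen with centre points. All numbers are invented for illustration; none come from the paper.

```python
import numpy as np

# Hypothetical 2^3 factorial screen with two centre points; factors are
# coded levels (-1/0/+1) of temperature, pH and dissolved oxygen, and
# the yield values are illustrative only.
X = np.array([
    [-1, -1, -1], [ 1, -1, -1], [-1,  1, -1], [ 1,  1, -1],
    [-1, -1,  1], [ 1, -1,  1], [-1,  1,  1], [ 1,  1,  1],
    [ 0,  0,  0], [ 0,  0,  0],
])
y = np.array([3.1, 4.0, 3.6, 4.4, 2.8, 3.5, 3.2, 4.1, 3.8, 3.7])

def design_matrix(X):
    """Model matrix: intercept, main effects and two-factor interactions."""
    t, p, d = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), t, p, d, t*p, t*d, p*d])

# Least-squares fit of the response-surface coefficients.
coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

def predict(point):
    """Predicted yield at a new operating point in coded units."""
    return (design_matrix(np.atleast_2d(np.asarray(point, float))) @ coef)[0]

print(predict([0.5, 0.2, -0.3]))
```

A fitted model of this kind is what allows high-yield conditions found in the mini-bioreactor screen to be predicted before being verified at the 7 L scale.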
Abstract:
Purpose: The purpose of this paper is to investigate the use of 802.11e MAC to resolve the transmission control protocol (TCP) unfairness. Design/methodology/approach: The paper shows how a TCP sender may adapt its transmission rate using the number of hops and the standard deviation of recently measured round-trip times to address the TCP unfairness. Findings: Simulation results show that the proposed techniques provide even throughput by providing TCP fairness as the number of hops increases over a wireless mesh network (WMN). Research limitations/implications: Future work will examine the performance of TCP over routing protocols which use different routing metrics. Another area of future work is scalability over WMNs. Since scalability is a problem with multi-hop communication, carrier sense multiple access (CSMA) will be compared with time division multiple access (TDMA), and a hybrid of TDMA and code division multiple access (CDMA) will be designed that works with TCP and other traffic. Finally, to further improve network performance and increase the network capacity of TCP for WMNs, the usage of multiple channels instead of only a single fixed channel will be exploited. Practical implications: By allowing the tuning of 802.11e MAC parameters that have previously been constant in 802.11 MAC, the paper proposes the usage of 802.11e MAC on a per-class basis by collecting the TCP ACKs into a single class, together with a novel congestion control method for TCP over a WMN. The key feature of the proposed TCP algorithm is the detection of congestion by measuring, via the standard deviation, the fluctuation of the RTT of the TCP ACK samples; combined with the 802.11e AIFS and CWmin, this allows the TCP ACKs to be prioritised so that they match the volume of the TCP data packets. While 802.11e MAC provides flexibility and flow/congestion control mechanisms, the challenge is to take advantage of these features.
Originality/value: Because the 802.11 MAC lacks the flexibility and flow/congestion control mechanisms needed by TCP, competing flows suffer from TCP unfairness. © Emerald Group Publishing Limited.
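The congestion-detection idea described above, flagging congestion when the standard deviation of recently measured TCP ACK round-trip times grows, can be sketched as follows. The window size and threshold are illustrative assumptions, not values from the paper.

```python
import statistics
from collections import deque

class RttCongestionDetector:
    """Flag congestion when the standard deviation of the most recent
    RTT samples exceeds a threshold. Window and threshold are
    illustrative assumptions, not parameters from the paper."""

    def __init__(self, window=8, threshold_ms=20.0):
        self.samples = deque(maxlen=window)   # sliding window of RTTs
        self.threshold_ms = threshold_ms

    def add_sample(self, rtt_ms):
        self.samples.append(rtt_ms)

    def congested(self):
        if len(self.samples) < 2:
            return False                      # not enough data yet
        return statistics.stdev(self.samples) > self.threshold_ms

det = RttCongestionDetector()
for rtt in [50, 52, 51, 49, 50]:   # stable RTTs: low fluctuation
    det.add_sample(rtt)
print(det.congested())             # False for this stable trace
for rtt in [90, 140, 60, 180]:     # large fluctuations suggest congestion
    det.add_sample(rtt)
print(det.congested())             # True for this fluctuating trace
```

A sender using this signal would reduce its transmission rate when `congested()` returns True, rather than waiting for packet loss.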
Abstract:
Two key issues defined the focus of this research in manufacturing plasmid DNA for use in human gene therapy. First, the processing of E.coli bacterial cells to effect the separation of therapeutic plasmid DNA from cellular debris and adventitious material. Second, the affinity purification of the plasmid DNA in a simple one-stage process. The need arises when considering the concerns recently voiced by the FDA about the scalability and reproducibility of current manufacturing processes in meeting the quality criteria of purity, potency, efficacy, and safety for a recombinant drug substance for use in humans. To develop a preliminary purification procedure, an EFD cross-flow micro-filtration module was assessed for its ability to effect the 20-fold concentration, 6-fold diafiltration, and final clarification of the plasmid DNA from the cell lysate derived from a 1 liter E.coli bacterial cell culture. Historically, the employment of cross-flow filtration modules within procedures for harvesting cells from bacterial cultures has failed to reach the required standards dictated by existing continuous centrifuge technologies, frequently resulting in the rapid blinding of the membrane with bacterial cells that substantially reduces the permeate flux. The EFD module, containing six helically wound tubular membranes that promote centrifugal instabilities known as Dean vortices, was challenged with distilled water at Dean numbers between 187Dn and 818Dn and transmembrane pressures (TMP) of 0 to 5 psi. The data demonstrated that the fluid dynamics significantly influenced the permeation rate, displaying a maximum at 227Dn (312 Imh) and a minimum at 818Dn (130 Imh) for a transmembrane pressure of 1 psi. Numerical studies indicated that the initial increase and subsequent decrease resulted from a competition between the centrifugal and viscous forces that create the Dean vortices.
At Dean numbers between 187Dn and 227Dn, the forces combine constructively to increase the apparent strength and influence of the Dean vortices. However, as the Dean number increases above 227Dn, the centrifugal force dominates the viscous forces, compressing the Dean vortices into the membrane walls and reducing their influence on the radial transmembrane pressure, i.e. the permeate flux reduced. When investigating the action of the Dean vortices in controlling the fouling rate of E.coli bacterial cells, it was demonstrated that the optimum cross-flow rate at which to effect the concentration of a bacterial cell culture was 579Dn and 3 psi TMP, processing in excess of 400 Imh for 20 minutes (i.e., concentrating a 1L culture to 50 ml in 10 minutes at an average of 450 Imh). The data demonstrated that there was a conflict between the Dean number at which the shear rate could control the cell fouling, and the Dean number at which the optimum flux enhancement was found. Hence, the internal geometry of the EFD module was shown to be sub-optimal for this application. At 579Dn and 3 psi TMP, the 6-fold diafiltration was shown to occupy 3.6 minutes of process time, processing at an average flux of 400 Imh. Again, at 579Dn and 3 psi TMP, the clarification of the plasmid from the resulting freeze-thaw cell lysate was achieved at 120 Imh, passing 83% (2.5 mg) of the plasmid DNA (6.3 ng μl-1), 10.8 mg of genomic DNA (∼23,000 bp, 36 ng μl-1), and 7.2 mg of cellular proteins (5-100 kDa, 21.4 ng μl-1) into the post-EFD process stream. Hence the EFD module was shown to be effective, achieving the desired objectives in approximately 25 minutes. On the basis of its ability to intercalate into low molecular weight dsDNA present in dilute cell lysates, and to be electrophoresed through agarose, the fluorophore PicoGreen was selected for the development of a suitable dsDNA assay.
It was assessed for its accuracy and reliability in determining the concentration and identity of DNA present in samples that were electrophoresed through agarose gels. The signal emitted by intercalated PicoGreen was shown to be constant and linear, and the mobility of the PicoGreen-DNA complex was not affected by the intercalation. Concerning the secondary purification procedure, various anion-exchange membranes were assessed for their ability to capture plasmid DNA from the post-EFD process stream. For a commercially available Sartorius Sartobind Q15 membrane, the reduction in the equilibrium binding capacity for ctDNA in buffer of increasing ionic strength demonstrated that DNA was being adsorbed by electrostatic interactions only. However, the problems associated with fluid distribution across the membrane demonstrated that the membrane housing was the predominant cause of the erratic breakthrough curves. Consequently, this would need to be rectified before such a membrane could be integrated into the current system, or indeed be scaled beyond laboratory scale. However, when challenged with the process material, the data showed that considerable quantities of protein (1150 μg) were adsorbed preferentially to the plasmid DNA (44 μg). This was also shown for Pall Gelman UltraBind US450 membranes that had been functionalised with poly-L-lysine and polyethyleneimine ligands of varying molecular weight. Hence the anion-exchange membranes were shown to be ineffective in capturing plasmid DNA from the process stream. Finally, work was performed to integrate a sequence-specific DNA-binding protein into a single-stage DNA chromatography step, isolating plasmid DNA from E.coli cells whilst minimising the contamination from genomic DNA and cellular protein.
Preliminary work demonstrated that the fusion protein was capable of isolating pUC19 DNA into which the recognition sequence for the fusion protein had been inserted (pTS DNA) when in the presence of the conditioned process material. Although the pTS recognition sequence differs from native pUC19 sequences by only 2 bp, the fusion protein was shown to act as a highly selective affinity ligand for pTS DNA alone. Subsequently, the process was scaled 25-fold and positioned directly after the EFD system. In conclusion, the integration of the EFD micro-filtration system and the zinc-finger affinity purification technique resulted in approximately 1 mg of plasmid DNA being purified from 1L of E.coli culture in a simple two-stage process, with the complete removal of genomic DNA and 96.7% of cellular protein in less than 1 hour of process time.
Abstract:
In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues such as robustness, scalability, speed and parameter estimation. Various ad-hoc solutions have been proposed and used extensively but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data, without losing significant information. This allows the complexity of the algorithm to grow as O(nm^2), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real world applications the correlation between observations essentially vanishes beyond a certain separation distance. Thus it makes sense to use a covariance model that encompasses this belief since this leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit, they maintain the smoothness locally but at the same time lead to a more robust, and compact, global model. We show the performance of this technique coupled with the sparse extension to the kriging algorithm on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation. © Springer-Verlag 2007.
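A minimal illustration of why space-limited covariance functions yield sparse matrices: the spherical model below is exactly zero beyond its range, so every pair of observations separated by more than the range contributes a structural zero to the covariance matrix. Locations and parameters are invented, not SIC2004 data.

```python
import numpy as np

def spherical_cov(h, sill=1.0, rng=2.0):
    """Spherical covariance: a classic space-limited model that is
    exactly zero beyond the range, giving structural zeros for
    distant pairs of observations."""
    h = np.asarray(h, dtype=float)
    c = sill * (1 - 1.5 * (h / rng) + 0.5 * (h / rng) ** 3)
    return np.where(h < rng, c, 0.0)

# Illustrative 1-D observation locations: two clusters and an outlier.
x = np.array([0.0, 0.5, 1.0, 4.0, 4.5, 9.0])
H = np.abs(x[:, None] - x[None, :])   # pairwise separation distances
K = spherical_cov(H)                  # covariance matrix with exact zeros

print(f"{np.mean(K == 0.0):.2f} of entries are exactly zero")
```

Because the zeros are exact rather than merely small, the kriging system can be solved with optimised sparse matrix techniques, which is the computational benefit the abstract refers to.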
Abstract:
The computer systems of today are characterised by data and program control that are distributed functionally and geographically across a network. A major issue of concern in this environment is the operating system activity of resource management for the different processors in the network. To ensure equity in load distribution and improved system performance, load balancing is often undertaken. The research conducted in this field so far has been primarily concerned with a small set of algorithms operating on tightly-coupled distributed systems. More recent studies have investigated the performance of such algorithms in loosely-coupled architectures, but using a small set of processors. This thesis describes a simulation model developed to study the behaviour and general performance characteristics of a range of dynamic load balancing algorithms. Further, the scalability of these algorithms is discussed and a range of regionalised load balancing algorithms developed. In particular, we examine the impact of network diameter and delay on the performance of such algorithms across a range of system workloads. The results produced suggest that simple dynamic policies are scalable but lack the load stability of more complex global average algorithms.
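The contrast between no balancing and a simple regionalised dynamic policy can be sketched with a toy simulation. The policies and workload below are illustrative stand-ins, not the algorithms studied in the thesis.

```python
import random

def simulate(num_nodes, num_jobs, policy, seed=0):
    """Toy simulation: each arriving job is placed by the given policy;
    return the final load imbalance (max minus min queue length).
    Purely illustrative, not the thesis's simulation model."""
    random.seed(seed)
    loads = [0] * num_nodes
    for _ in range(num_jobs):
        home = random.randrange(num_nodes)   # node where the job arrives
        loads[policy(loads, home)] += 1
    return max(loads) - min(loads)

def no_balancing(loads, home):
    return home                              # job stays on its arrival node

def least_loaded_neighbour(loads, home):
    # Regionalised policy: probe only the immediate ring neighbours,
    # avoiding the global state a network-wide average would require.
    region = [home, (home - 1) % len(loads), (home + 1) % len(loads)]
    return min(region, key=lambda n: loads[n])

print(simulate(16, 1000, no_balancing))
print(simulate(16, 1000, least_loaded_neighbour))
```

Restricting probes to a region keeps the communication cost independent of network diameter, which is why such policies scale, at the cost of the weaker global stability the thesis observes.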
Abstract:
Service-based systems that are dynamically composed at run time to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimisation of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analysed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability- and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.
Abstract:
In this paper a Markov chain based analytical model is proposed to evaluate the slotted CSMA/CA algorithm specified in the MAC layer of the IEEE 802.15.4 standard. The analytical model consists of two two-dimensional Markov chains, used to model the state transition of an 802.15.4 device during the periods of a transmission and between two consecutive frame transmissions, respectively. By introducing the two Markov chains, a small number of Markov states is required and the scalability of the analytical model is improved. The analytical model is used to investigate the impact of the CSMA/CA parameters, the number of contending devices, and the data frame size on the network performance in terms of throughput and energy efficiency. It is shown by simulations that the proposed analytical model can accurately predict the performance of the slotted CSMA/CA algorithm for uplink, downlink and bi-directional traffic, with both acknowledgement and non-acknowledgement modes.
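A generic building block for Markov-chain performance models of this kind is the stationary distribution, the long-run fraction of time the device spends in each state. The sketch below solves pi P = pi for a toy three-state chain; the transition matrix is illustrative and is not the 802.15.4 model from the paper.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of a discrete-time Markov chain,
    solving pi P = pi subject to sum(pi) = 1 via least squares."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # balance + normalisation
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state chain (idle -> backoff -> transmit -> idle); the
# probabilities are made up for illustration.
P = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.7, 0.0, 0.3]])
print(stationary_distribution(P))
```

Throughput and energy-efficiency metrics in such models are then derived as functions of these state occupancy probabilities.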
Abstract:
Multiwavelength all-optical regeneration has the potential to substantially increase both the capacity and scalability of future optical networks. In this paper, we review recent promising developments in this area. First, we recall the basic principles of multichannel regeneration of high bit rate signals in optical communication systems before discussing the current technological approaches. We then describe in detail two fiber-based multichannel 2R regeneration techniques for return-to-zero on-off keying based on 1) dispersion managed systems and 2) direction and polarization multiplexing. We present results illustrating the levels of performance so far achieved and discuss various practical issues and prospects for further performance enhancement.
Abstract:
The performance of wireless networks is limited by multiple access interference (MAI) in the traditional communication approach where the interfered signals of the concurrent transmissions are treated as noise. In this paper, we treat the interfered signals from a new perspective on the basis of additive electromagnetic (EM) waves and propose a network coding based interference cancelation (NCIC) scheme. In the proposed scheme, adjacent nodes can transmit simultaneously with careful scheduling; therefore, network performance will not be limited by the MAI. Additionally, we design a space segmentation method for general wireless ad hoc networks, which organizes the network into clusters with regular shapes (e.g., square and hexagon) to reduce the number of relay nodes. The segmentation method works with the scheduling scheme and can help achieve better scalability and reduced complexity. We derive accurate analytic models for the probability of connectivity between two adjacent cluster heads, which is important for successful information relay. We prove that with the proposed NCIC scheme, the transmission efficiency can be improved by at least 50% for general wireless networks as compared to traditional interference avoidance schemes. Numerical results also show that the space segmentation is feasible and effective. Finally, we propose and discuss a method to implement the NCIC scheme in practical orthogonal frequency division multiplexing (OFDM) communication networks. Copyright © 2009 John Wiley & Sons, Ltd.
Abstract:
Existing wireless systems are normally regulated by a fixed spectrum assignment strategy. This policy leads to an undesirable situation in which some systems use their allocated spectrum only to a limited extent while others face serious spectrum insufficiency. Dynamic Spectrum Access (DSA) is emerging as a promising technology to address this issue, such that unused licensed spectrum can be opportunistically accessed by unlicensed users. To enable DSA, the unlicensed user must have the capability of detecting unoccupied spectrum, controlling its spectrum access in an adaptive manner, and coexisting with other unlicensed users automatically. In this article, we propose a radio system Transmission Opportunity-based spectrum access control protocol with the aim of improving spectrum access fairness and ensuring the safe coexistence of multiple heterogeneous unlicensed radio systems. In the scheme, multiple radio systems coexist and dynamically use the available free spectrum without interfering with licensed users. Simulation is carried out to evaluate the performance of the proposed scheme with respect to spectrum utilisation, fairness and scalability. Compared with existing studies, our strategy achieves higher scalability and controllability without degrading spectrum utilisation or fairness.
Abstract:
Field experiments of 42.7/128.1 Gb/s wavelength-division multiplexed, optical time-division multiplexed (WDM-OTDM) transmultiplexing and all-optical dual-wavelength regeneration at the OTDM rate are presented in this paper. By using the asynchronous retiming scheme, we achieve error-free bufferless data grooming with time-slot interchange capability for OTDM meshed networking. We demonstrate excellent performance from the system, discuss scalability, applicability, and the potential reach of the asynchronous retiming scheme for transparent OTDM-domain interconnection.