59 results for STATIONARY PHASES
in Queensland University of Technology - ePrints Archive
Abstract:
The creation of a commercially viable, large-scale purification process for plasmid DNA (pDNA) production requires a whole-systems continuous or semi-continuous purification strategy employing optimised stationary adsorption phase(s), without the use of expensive and toxic chemicals, avian/bovine-derived enzymes or multiple built-in unit processes that compromise overall plasmid recovery, processing time and economics. Continuous stationary phases are known to offer fast separation because their large pore diameters make the large pDNA molecule easily accessible, with limited mass-transfer resistance even at high flow rates. A monolithic stationary sorbent was synthesised via free-radical liquid-porogenic polymerisation of ethylene glycol dimethacrylate (EDMA) and glycidyl methacrylate (GMA), with surface and pore characteristics tailored specifically for plasmid binding, retention and elution. The polymer was functionalised with an amine active group for anion-exchange purification of pDNA from cleared lysate obtained from E. coli DH5α-pUC19 pellets in an RNase/protease-free process. Characterisation of the resin showed a unique porous material with 70% of the pore sizes above 300 nm. The final product, isolated by anion-exchange purification in only 5 min, was pure, homogeneous supercoiled pDNA with no gDNA, RNA or protein contamination, as confirmed by DNA electrophoresis, restriction analysis and SDS-PAGE. The resin showed a maximum binding capacity of 15.2 mg/mL, and this capacity persisted after several applications of the resin. This technique is cGMP-compatible and commercially viable for rapid isolation of pDNA.
Abstract:
High-throughput plasmid DNA (pDNA) manufacture is obstructed predominantly by the performance of conventional stationary phases. For this reason, the search for new materials for fast chromatographic separation of pDNA is ongoing. A poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate) (GMA-EGDMA) monolithic material was synthesised via a thermally initiated free-radical reaction and functionalised with different amino groups, from urea, 2-chloro-N,N-diethylethylamine hydrochloride (DEAE-Cl) and ammonia, in order to investigate their plasmid adsorption capacities. Physical characterisation showed a macroporous polymer with a unimodal pore size distribution centred at 600 nm. Chromatographic characterisation of the functionalised polymers, using pUC19 plasmid isolated from E. coli DH5α-pUC19, showed a maximum plasmid adsorption capacity of 18.73 mg pDNA/mL with a dissociation constant (KD) of 0.11 mg/mL for the GMA-EGDMA/DEAE-Cl polymer. Ligand leaching and degradation studies demonstrated the stability of GMA-EGDMA/DEAE-Cl after the functionalised polymers were contacted with 1.0 M NaOH, a model reagent for most 'cleaning in place' (CIP) systems. Ultimately, however, it is the economics of an adsorbent material that make it attractive for commercial purification. Economic evaluation of the functionalised polymers in terms of polymer cost (PC) per mg of pDNA retained endorsed the suitability of the GMA-EGDMA/DEAE-Cl polymer.
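The reported capacity (18.73 mg pDNA/mL) paired with a dissociation constant KD suggests a Langmuir-type adsorption isotherm. Below is a minimal sketch under that assumption, with the abstract's fitted values used only as illustrative defaults:

```python
def langmuir_q(c, q_max=18.73, k_d=0.11):
    """Langmuir isotherm: bound pDNA (mg per mL of resin) at equilibrium
    liquid-phase concentration c (mg/mL).  q = q_max * c / (k_d + c)."""
    return q_max * c / (k_d + c)
```

A quick consistency check on fitted parameters: at c = KD the bound amount is exactly half of q_max.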
Abstract:
Increasing numbers of preclinical and clinical studies are utilizing pDNA (plasmid DNA) as the vector, and there has been a growing trend towards larger and larger doses of pDNA in human trials. The growing demand on pDNA manufacture creates pressure to make more in less time. A key intervention has been the use of monoliths as stationary phases in liquid chromatography. Monolithic stationary phases offer fast separation of pDNA owing to their large pore size, making pDNA in the size range from 100 nm to over 300 nm easily accessible. However, the convective transport mechanism of monoliths does not guarantee plasmid purity: the recovery of pure pDNA hinges on a proper balance between the properties of the adsorbent phase, the mobile phase and the feedstock. The effects of pH and ionic strength of the binding buffer, feedstock temperature, active-group density and the pore size of the stationary phase were considered as avenues to improve the recovery and purity of pDNA, using a methacrylate-based monolithic adsorbent and Escherichia coli DH5α-pUC19 clarified lysate as feedstock. pDNA recovery was found to be critically dependent on the pH and ionic strength of the mobile phase; a maximum recovery of approximately 92% was obtained under optimum conditions. Increasing the feedstock temperature to 80°C increased the purity of pDNA, owing to the extra thermal stability of pDNA over contaminants such as proteins. Toxicological studies of the plasmid samples using an endotoxin standard (E. coli O55:B5 lipopolysaccharide) show that the endotoxin level decreases with increasing salt concentration. Evidently, large quantities of pure pDNA can be obtained with minimal extra effort simply by optimizing process parameters and conditions for pDNA purification.
Abstract:
In the field of rolling element bearing diagnostics, envelope analysis has in recent years gained a leading role among digital signal processing techniques. The original constraint of constant operating speed has been relaxed by combining this technique with computed order tracking, which resamples signals at constant angular increments. In this way, the field of application has been extended to cases in which small speed fluctuations occur, while maintaining high effectiveness and efficiency. To make this algorithm suitable for all industrial applications, the constraint on speed has to be removed completely. In fact, in many applications the coincidence of high bearing loads, and therefore high diagnostic capability, with acceleration-deceleration phases is a further incentive in this direction. This chapter presents a procedure for applying envelope analysis to speed transients. The effect of load variation on the proposed technique will also be qualitatively addressed.
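The envelope step at the core of this technique can be sketched with plain numpy as follows. This is a minimal illustration of the Hilbert-envelope computation only; it assumes the signal has already been band-pass filtered around a bearing resonance and, for speed transients, resampled to constant angular increments by computed order tracking. Function names and parameters are illustrative, not the chapter's implementation:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform:
    zero the negative frequencies and double the positive ones."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1 : n // 2] = 2.0
    else:
        h[1 : (n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def envelope_spectrum(x, fs):
    """Magnitude spectrum of the mean-removed Hilbert envelope of x."""
    env = np.abs(analytic_signal(x))
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs, spec
```

For an amplitude-modulated resonance, the envelope spectrum peaks at the modulation (fault) frequency rather than at the carrier, which is what makes the envelope, rather than the raw spectrum, diagnostic for bearings.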
Abstract:
Practitioners and academics have developed numerous maturity models for many domains in order to measure competency. These initiatives have often been influenced by the Capability Maturity Model. However, no cumulative effort has been made to generalize the phases of developing a maturity model in any domain. This paper proposes such a methodology and outlines the main phases of generic model development. The proposed methodology is illustrated with examples from two advanced maturity models in the domains of Business Process Management and Knowledge Management.
Abstract:
Experiments were undertaken to study the drying kinetics of moist cylindrical food particulates during fluidised bed drying. Cylindrical particles were prepared from green beans with three different length:diameter ratios: 3:1, 2:1 and 1:1. A batch fluidised bed dryer connected to a heat pump system was used for the experiments; the heat pump and fluid bed combination was used to increase overall energy efficiency and achieve higher drying rates. Drying kinetics were evaluated with non-dimensional moisture at three drying temperatures of 30, 40 and 50 °C. Numerous mathematical models can be used to describe drying kinetics, ranging from analytical models with simplified assumptions to empirical models built by regression from experimental data. Empirical models are commonly used for various food materials because of their simplicity; however, problems with accuracy limit their application. Some limitations of empirical models can be reduced by using semi-empirical models based on the heat and mass transfer of the drying operation; one such method is the quasi-stationary approach. In this study, a modified quasi-stationary approach was used to model the drying kinetics of the cylindrical food particles at the three drying temperatures.
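The non-dimensional moisture used above, and the simple exponential thin-layer form that semi-empirical methods such as the quasi-stationary approach refine, can be sketched as follows. The modified quasi-stationary model itself is not given in the abstract; the Lewis (Newton) form is shown only as the common empirical baseline, with illustrative names:

```python
import math

def moisture_ratio(x, x0, xe):
    """Non-dimensional moisture: MR = (X - Xe) / (X0 - Xe), so MR runs
    from 1 at the initial moisture X0 down to 0 at equilibrium Xe."""
    return (x - xe) / (x0 - xe)

def lewis_mr(t, k):
    """Lewis (Newton) thin-layer model, MR = exp(-k * t); k is a
    temperature-dependent drying constant fitted from experiments."""
    return math.exp(-k * t)
```

Fitting k at each drying temperature (here 30, 40 and 50 °C) is what lets such models summarise a whole drying curve with one parameter.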
Abstract:
Changes in fluidisation behaviour were characterised for parallelepiped particles with three aspect ratios (1:1, 2:1 and 3:1) and for spherical particles. All drying experiments were conducted at 50 °C and 15% RH using a heat pump dehumidifier system. Fluidisation experiments were undertaken at bed heights of 100, 80, 60 and 40 mm and at 10 moisture content levels. Owing to irregularities in shape, the minimum fluidisation velocity of the parallelepiped particulates (potato) could not be fitted to any empirical model, so a generalised equation was used to predict it. The modified quasi-stationary method (MQSM) is proposed to describe the drying kinetics of parallelepiped particulates at 30, 40 and 50 °C, which dry mostly in the falling-rate period in a batch-type fluid bed dryer.
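The abstract does not name the generalised equation used. One widely used generalised correlation for minimum fluidisation velocity is that of Wen and Yu, sketched here with illustrative default gas properties (roughly ambient air); treat it as an example of the kind of correlation meant, not as the study's own choice:

```python
def u_mf_wen_yu(d_p, rho_p, rho_g=1.2, mu_g=1.8e-5, g=9.81):
    """Minimum fluidisation velocity (m/s) from the Wen & Yu correlation
    Re_mf = sqrt(33.7**2 + 0.0408 * Ar) - 33.7, all inputs in SI units:
    particle diameter d_p (m), particle/gas densities (kg/m^3), gas
    viscosity mu_g (Pa*s)."""
    ar = d_p ** 3 * rho_g * (rho_p - rho_g) * g / mu_g ** 2  # Archimedes number
    re_mf = (33.7 ** 2 + 0.0408 * ar) ** 0.5 - 33.7
    return re_mf * mu_g / (rho_g * d_p)
```

Because the Archimedes number scales with the cube of particle diameter, minimum fluidisation velocity rises quickly with particle size, which is why bed height and particle geometry both matter in these experiments.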
Abstract:
Experiments were undertaken to study the drying kinetics of moist food particulates of different shapes during heat pump assisted fluidised bed drying. Three geometrical shapes, parallelepiped, cylindrical and spherical, were prepared from potatoes (aspect ratios 1:1, 2:1, 3:1), cut beans (length:diameter 1:1, 2:1, 3:1) and peas, respectively. A batch fluidised bed dryer connected to a heat pump system was used for the experiments; the heat pump and fluid bed combination was used to increase overall energy efficiency and achieve higher drying rates. Drying kinetics were evaluated with non-dimensional moisture at three drying temperatures of 30, 40 and 50 °C. Because of the complex hydrodynamics of fluidised beds, drying kinetics are dryer- or material-specific. Numerous mathematical models can be used to describe drying kinetics, ranging from analytical models with simplified assumptions to empirical models built by regression from experimental data. Empirical models are commonly used for various food materials because of their simplicity; however, problems with accuracy limit their application. Some limitations of empirical models can be reduced by using semi-empirical models based on the heat and mass transfer of the drying operation; one such method is the quasi-stationary approach. In this study, a modified quasi-stationary approach was used to model the drying kinetics of the food particles at the three drying temperatures.
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task owing to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we show a new property of adaptive lattice filters: the polynomial-order-reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signal, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (because it uses the minimum mean-square error criterion).
To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that, using the proposed algorithms, faster convergence is achieved in parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than with many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and the true values. Owing to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
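The least-mean p-norm idea behind the proposed lattice algorithms can be illustrated with a simplified transversal (non-lattice) update, shown here only to convey the fractional-lower-order-moment gradient: minimising E|e|^p instead of E|e|^2 replaces the LMS error term with |e|^(p-1) sign(e), which damps the influence of impulsive samples. The thesis's actual lattice algorithms are more involved, and all names and parameter values below are illustrative:

```python
import numpy as np

def lmp_update(w, x, d, mu=0.005, p=1.2):
    """One least-mean p-norm (LMP) step for a transversal predictor:
    the gradient of |e|**p yields the |e|**(p-1) * sign(e) factor.
    p = 2 recovers the ordinary LMS update."""
    e = d - w @ x
    w = w + mu * p * np.abs(e) ** (p - 1) * np.sign(e) * x
    return w, e
```

For p < 2 the step size no longer grows linearly with the error, which is what keeps occasional very large heavy-tailed samples from derailing the parameter estimate.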
Abstract:
This paper analyzes the effects of different practice task constraints on heart rate (HR) variability during 4v4 small-sided football games. Participants were sixteen football players divided into two age groups (U13, mean age: 12.4±0.5 yrs; U15: 14.6±0.5 yrs). The task consisted of a 4v4 sub-phase without goalkeepers, on a 25×15 m field, of 15 minutes duration, with an active recovery period of 6 minutes between conditions. We recorded players’ heart rates using heart rate monitors (Polar Team System, Polar Electro, Kempele, Finland) as the scoring mode was manipulated (line goal: scoring by dribbling past an extended line; double goal: scoring in either of two lateral goals; and central goal: scoring only in one goal). Subsequently, %HR reserve was calculated with the Karvonen formula. We performed a time-series analysis of HR for each individual in each condition. Mean data for intra-participant variability showed that the autocorrelation function was associated with more short-range dependence in the “line goal” condition than in the other conditions, demonstrating that the “line goal” constraint induced more randomness in the HR response. Regarding inter-individual variability, line goal constraints showed lower %CV and %RMSD (U13: 9% and 19%; U15: 10% and 19%) than double goal (U13: 12% and 21%; U15: 12% and 21%) and central goal (U13: 14% and 24%; U15: 13% and 24%) task constraints, respectively. The results suggest that line goal constraints imposed more randomness on the cardiovascular stimulation of each individual, and lower inter-individual variability, than double goal and central goal constraints.
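The Karvonen calculation of %HR reserve used in the study can be sketched as follows (function and variable names are illustrative):

```python
def pct_hr_reserve(hr, hr_rest, hr_max):
    """Karvonen formula: %HRR = 100 * (HR - HRrest) / (HRmax - HRrest),
    i.e. the measured HR expressed as a fraction of the athlete's
    individual reserve between resting and maximal heart rate."""
    return 100.0 * (hr - hr_rest) / (hr_max - hr_rest)
```

Normalising by each player's own reserve is what makes exercise intensity comparable across the U13 and U15 groups despite different resting and maximal heart rates.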