938 results for pore systems
Abstract:
Macroporous methacrylate monolithic materials with controlled pore structures can be prepared in an unstirred mould through careful and precise control of the polymerisation kinetics and parameters. Contemporary synthesis conditions for methacrylate monolithic polymers are based on existing polymerisation schemes without an in-depth understanding of the dynamics of pore structure and formation. This leads to poor performance in polymer usage, thereby affecting final product recovery and purity, retention time, productivity and process economics. The unique porosity of methacrylate monolithic polymers, which drives their usage in many industrial applications, can be controlled readily during preparation. Controlling the kinetics of the overall process through changes in reaction time, temperature and overall composition, such as cross-linker and initiator contents, allows fine-tuning of the macroporous structure and provides an understanding of the mechanism of pore formation within the unstirred mould. The significant effect of temperature on the reaction kinetics serves as an effectual means to control and optimise the pore structure, and allows the preparation of polymers with different pore size distributions from the same composition of the polymerisation mixture. Increasing the concentration of the cross-linking monomer affects the composition of the final monoliths and also decreases the average pore size, as a result of premature formation of highly cross-linked globules with a reduced propensity to coalesce. The choice and concentration of the porogen solvent is also important: different porogens and porogen mixtures yield different pore structures. For example, larger pores are obtained in a poor solvent because phase separation occurs earlier.
Abstract:
High-throughput plasmid DNA (pDNA) manufacture is obstructed predominantly by the performance of conventional stationary phases. For this reason, the search for new materials for fast chromatographic separation of pDNA is ongoing. A poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate) (GMA-EGDMA) monolithic material was synthesised via a thermally initiated free-radical reaction and functionalised with different amino groups from urea, 2-chloro-N,N-diethylethylamine hydrochloride (DEAE-Cl) and ammonia in order to investigate their plasmid adsorption capacities. Physical characterisation of the monolithic polymer showed a macroporous polymer with a unimodal pore size distribution centred at 600 nm. Chromatographic characterisation of the functionalised polymers using pUC19 plasmid isolated from E. coli DH5α-pUC19 showed a maximum plasmid adsorption capacity of 18.73 mg pDNA/mL, with a dissociation constant (KD) of 0.11 mg/mL, for the GMA-EGDMA/DEAE-Cl polymer. Studies on ligand leaching and degradation demonstrated the stability of GMA-EGDMA/DEAE-Cl after the functionalised polymers were contacted with 1.0 M NaOH, a model reagent for most 'cleaning-in-place' (CIP) systems. However, it is the economic advantage of an adsorbent material that makes it attractive for commercial purification purposes. Economic evaluation of the performance of the functionalised polymers, on the basis of polymer cost (PC) per mg pDNA retained, endorsed the suitability of the GMA-EGDMA/DEAE-Cl polymer.
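The quoted maximum capacity and dissociation constant suggest a Langmuir-type binding isotherm, q = q_max·C/(K_D + C). The abstract does not name the model explicitly, so the sketch below is only an illustrative assumption using the quoted q_max and K_D values:

```python
def langmuir_q(c, q_max=18.73, k_d=0.11):
    """Adsorbed pDNA (mg/mL monolith) at equilibrium concentration c (mg/mL).

    Assumes a Langmuir isotherm with the parameters reported in the abstract;
    the authors' actual fitting model is not stated.
    """
    return q_max * c / (k_d + c)

# At c = K_D the adsorbent is, by definition, half saturated.
half_saturation = langmuir_q(0.11)  # -> 9.365 mg pDNA/mL
```

A usage note: plotting `langmuir_q` over a concentration range is a quick way to check whether reported batch-adsorption data is consistent with such parameters.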
Abstract:
The creation of a commercially viable, large-scale purification process for plasmid DNA (pDNA) production requires a whole-systems continuous or semi-continuous purification strategy employing optimised stationary adsorption phase(s) without the use of expensive and toxic chemicals, avian/bovine-derived enzymes and several built-in unit processes, all of which affect overall plasmid recovery, processing time and economics. Continuous stationary phases are known to offer fast separation due to their large pore diameter, which makes the large pDNA molecule easily accessible with limited mass transfer resistance even at high flow rates. A monolithic stationary sorbent was synthesised via free radical liquid porogenic polymerisation of ethylene glycol dimethacrylate (EDMA) and glycidyl methacrylate (GMA), with surface and pore characteristics tailored specifically for plasmid binding, retention and elution. The polymer was functionalised with an amine active group for anion-exchange purification of pDNA from cleared lysate obtained from E. coli DH5α-pUC19 pellets in an RNase/protease-free process. Characterisation of the resin showed a unique porous material with 70% of the pore sizes above 300 nm. The final product isolated from anion-exchange purification in only 5 min was pure and homogeneous supercoiled pDNA with no gDNA, RNA or protein contamination, as confirmed by DNA electrophoresis, restriction analysis and SDS-PAGE. The resin showed a maximum binding capacity of 15.2 mg/mL, and this capacity persisted after several applications of the resin. This technique is cGMP compatible and commercially viable for rapid isolation of pDNA.
Abstract:
Intelligent Transport Systems (ITS) have the potential to substantially reduce the number of crashes caused by human error at railway level crossings. Such systems, however, will only exert an influence on driving behaviour if they are accepted by the driver. This study aimed to assess driver acceptance of different ITS interventions designed to enhance driver behaviour at railway crossings. Fifty-eight participants, divided into three groups, took part in a driving simulator study in which three ITS devices were tested: an in-vehicle visual ITS, an in-vehicle audio ITS, and an on-road valet system. Driver acceptance of each ITS intervention was assessed in a questionnaire guided by the Technology Acceptance Model and the Theory of Planned Behaviour. Overall, results indicated that the strongest intentions to use the ITS devices belonged to participants exposed to the road-based valet system at passive crossings. The utility of both models in explaining drivers' intention to use the systems is discussed, with results showing greater support for the Theory of Planned Behaviour. Directions for future studies, along with strategies that target attitudes and subjective norms to increase drivers' behavioural intentions, are also discussed.
Abstract:
Magnetic resonance is a well-established tool for structural characterisation of porous media. Features of pore-space morphology can be inferred from NMR diffusion-diffraction plots or the time-dependence of the apparent diffusion coefficient. Diffusion NMR signal attenuation can be computed from the restricted diffusion propagator, which describes the distribution of diffusing particles for a given starting position and diffusion time. We present two techniques for efficient evaluation of restricted diffusion propagators for use in NMR porous-media characterisation. The first is the Lattice Path Count (LPC). Its physical essence is that the restricted diffusion propagator connecting points A and B in time t is proportional to the number of distinct length-t paths from A to B. By using a discrete lattice, the number of such paths can be counted exactly. The second technique is the Markov transition matrix (MTM). The matrix represents the probabilities of jumps between every pair of lattice nodes within a single timestep. The propagator for an arbitrary diffusion time can be calculated as the appropriate matrix power. For periodic geometries, the transition matrix needs to be defined only for a single unit cell. This makes MTM ideally suited for periodic systems. Both LPC and MTM are closely related to existing computational techniques: LPC, to combinatorial techniques; and MTM, to the Fokker-Planck master equation. The relationship between LPC, MTM and other computational techniques is briefly discussed in the paper. Both LPC and MTM perform favourably compared to Monte Carlo sampling, yielding highly accurate and almost noiseless restricted diffusion propagators. Initial tests indicate that their computational performance is comparable to that of finite element methods. Both LPC and MTM can be applied to complicated pore-space geometries with no analytic solution. 
We discuss the new methods in the context of diffusion propagator calculation in porous materials and model biological tissues.
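The path-count and matrix-power ideas above can be sketched in a few lines. The toy below uses a hypothetical 1-D lattice of 8 sites with restricting walls, purely for illustration; it is not the authors' implementation, and real pore geometries would use 2-D/3-D lattices:

```python
import numpy as np

# Toy restricted geometry: a 1-D lattice of N sites bounded by walls,
# with unit-length jumps between nearest neighbours only.
N = 8
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0  # allowed single-step jumps

# LPC idea: entry (A^t)[a, b] counts the distinct length-t lattice paths
# from site a to site b, which is proportional to the propagator.
t = 6
paths = np.linalg.matrix_power(A, t)

# MTM idea: row-normalise the jump counts into transition probabilities;
# the propagator for diffusion time t is then the t-th matrix power.
P = A / A.sum(axis=1, keepdims=True)
propagator = np.linalg.matrix_power(P, t)

# Each row of the propagator is a probability distribution over end sites.
assert np.allclose(propagator.sum(axis=1), 1.0)
```

For a periodic geometry, `P` would be built for a single unit cell with wrap-around jump probabilities, which is what makes the MTM approach attractive there.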
Abstract:
Adjustable-speed induction generators, especially Doubly-Fed Induction Generators (DFIGs), are becoming increasingly popular due to their various advantages over fixed-speed generator systems. A DFIG in a wind turbine can generate maximum power at varying rotational speeds and can control active and reactive power through integrated electronic power converters such as the back-to-back converter, while its low rotor power rating allows low-cost converter components; for these reasons, DFIGs have become very popular in large wind power conversion systems. This chapter presents an extensive literature survey over the past 25 years on the different aspects of the DFIG. Application of an H∞ controller for enhanced DFIG-WT performance, in terms of robust stability and reference tracking to reduce mechanical stress and vibrations, is also demonstrated in the chapter.
Abstract:
This chapter focuses on the implementation of a Takagi-Sugeno (TS) fuzzy controller for the Doubly-Fed Induction Generator (DFIG) based wind generator. The conventional PI control loops for maintaining the desired active power and DC capacitor voltage are compared with the TS fuzzy controllers. The DFIG system is represented by a third-order model in which the electromagnetic transients of the stator are neglected. The effect of the TS-fuzzy DFIG damping controller on rotor speed oscillations, DC capacitor voltage variations and converter ratings is also investigated. Results from time-domain simulations are presented to elucidate the effectiveness of the TS-fuzzy controller over the conventional PI controller in the DFIG system. The proposed TS-fuzzy controller can improve the fault ride-through capability of the DFIG compared with the conventional PI controller.
Abstract:
Despite tough economic times, the uptake of photovoltaic (PV) technology has seen tremendous growth over the past decade. More than 21 GW of rooftop PV systems were installed globally in the year 2012 alone. This growth is fueled by various incentives offered by policy makers around the world with the goal of enhancing renewable energy integration and reducing dependence on fossil fuels. For instance, the goal of achieving 20% energy consumption from renewable resources by 2020 has been unanimously accepted by numerous countries in Europe, North America, and Australia. Uptake of PV by residential customers and small businesses has been augmented by generous government rebates on installations and on the amount of energy injected into the grid. Furthermore, the global market outlook report published by EPIA predicts that rooftop PV installations will continue to grow for the foreseeable future.
Abstract:
User profiling is the process of constructing user models which represent the personal characteristics and preferences of customers. User profiles play a central role in many recommender systems. Recommender systems recommend items to users based on user profiles, where the items can be any objects the users are interested in, such as documents, web pages, books, movies, etc. In recent years, multidimensional data have been attracting increasing attention, from both academia and industry, as a basis for better recommender systems. Additional metadata provides algorithms with more detail for better understanding the interactions between users and items. However, most of the existing user/item profiling techniques for multidimensional data analyze the data by splitting the multidimensional relations, which loses the information carried by the multidimensionality. In this paper, we propose a user profiling approach using a tensor reduction algorithm, which we show is based on a Tucker2 model. The proposed profiling approach incorporates latent interactions between all dimensions into user profiles, which significantly benefits the quality of neighborhood formation. We further propose to integrate the profiling approach into neighborhood-based collaborative filtering recommender algorithms. Experimental results show significant improvements in terms of recommendation accuracy.
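A Tucker2-style reduction of this kind can be sketched with plain SVDs on the mode unfoldings. The dimensions, random data, and the choice of which two modes to compress are illustrative assumptions, not the paper's algorithm: here the item and tag modes are reduced while the user mode is kept intact, so each user retains a compact latent profile.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 40, 30))  # hypothetical users x items x tags tensor

def mode_unfold(T, mode):
    """Matricise tensor T along the given mode (mode fibres become rows)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Leading left-singular vectors of the item and tag unfoldings give the
# Tucker2 factor matrices (user mode is left uncompressed).
r_item, r_tag = 5, 4
B = np.linalg.svd(mode_unfold(X, 1), full_matrices=False)[0][:, :r_item]
C = np.linalg.svd(mode_unfold(X, 2), full_matrices=False)[0][:, :r_tag]

# Project the item and tag modes onto their leading subspaces; each user's
# slice of the core G is a latent-interaction profile for that user.
G = np.einsum('uit,ij,tk->ujk', X, B, C)
profiles = G.reshape(X.shape[0], -1)  # one flattened profile vector per user
```

Neighbourhood formation would then compute similarities (e.g. cosine) between rows of `profiles` instead of between raw rating vectors.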
Abstract:
Business Process Management describes a holistic management approach for the systematic design, modeling, execution, validation, monitoring and improvement of organizational business processes. Traditionally, most attention within this community has been given to control-flow aspects, i.e., the ordering and sequencing of business activities, often in isolation from the context in which these activities occur. In this paper, we propose an approach that allows executable process models to be integrated with Geographic Information Systems. This approach enables process models to take geospatial and other geographic aspects into account in an explicit manner, both during the modeling phase and the execution phase. We contribute a structured modeling methodology, based on the well-known Business Process Model and Notation standard, which is formalized by means of a mapping to executable Colored Petri nets. We illustrate the feasibility of our approach by means of a sustainability-focused case example of a process with important ecological concerns.
Abstract:
Focus groups are a popular qualitative research method for information systems researchers. However, compared with the abundance of research articles and handbooks on planning and conducting focus groups, there is surprisingly little research on how to analyse focus group data. Moreover, the few articles that specifically address focus group analysis are all in fields other than information systems and offer little specific guidance for information systems researchers. Further, even the studies that exist in other fields do not provide a systematic and integrated procedure for analysing both focus group 'content' and 'interaction' data. As the focus group is a valuable method for answering the research questions of many IS studies (in business, government and societal contexts), we believe that more attention should be paid to this method in IS research. This paper offers a systematic and integrated procedure for qualitative focus group data analysis in information systems research.
Abstract:
This project examined the potential for historical mapping of land resources to be upgraded to meet current requirements for natural resource management. The spatial disaggregation methods used to improve the scale of the mapping were novel and provide a way to rapidly upgrade existing information. The thesis investigated the potential of digital soil mapping techniques, together with multi-scale identification of areas within historical land systems mapping, to provide enhanced information in support of modern natural resource management needs. This was undertaken in the Burnett Catchment of South-East Queensland.
Abstract:
This thesis introduces a method of applying Bayesian Networks to combine information from a range of data sources for effective decision support systems. It develops a set of techniques for the development, validation, visualisation and application of Complex Systems models, with a working demonstration in an Australian airport environment. The resulting modelling approach produces highly flexible, informative and applicable interpretations of a system's behaviour under uncertain conditions, together with measures of confidence in those interpretations. These end-to-end techniques are applied to the development of model-based dashboards that support operators and decision makers in the multi-stakeholder airport environment.
Abstract:
By referring to Niklas Luhmann's theory of self-referential systems, Aldo Mascareño (2008, submitted for publication) gives an account of system-environment interrelatedness, explaining how the social and the individual constitute each other through the process of communication and the co-creation of meanings. Two possible extensions to his account are discussed. First, auto-communication within the system, which happens without any external reference, needs to be taken into account when describing the existence and constant re-creation of psychic systems. Second, in order for the system and environment, or for two systems, to communicate, an imagined and temporary intersubjectivity between the two needs to be assumed.