11 results for Traditional enrichment method
in Aston University Research Archive
Abstract:
Today, speciality organoclays are being developed for an increasingly large number of specific applications. Many of these, including use in cosmetics, polishes, greases and paints, require that the material be free from abrasive impurities so that the product retains a smooth 'feel'. The traditional 'wet' method of preparing organoclays inherently removes abrasives naturally present in the parent mineral clay, but it is time-consuming and expensive. The primary objective of this thesis was to explore the alternative 'dry' method (which is both quicker and cheaper but provides no refining of the parent clay) as a facile route to a wide range of commercially usable organophilic clays, and to examine the nature of the organoclays produced. Natural Wyoming bentonite contains two quite different types of silicate surface (that of the clay mineral montmorillonite and that of a quartz impurity) that may interact with the cationic surfactant added during 'dry' process production of organoclays. However, it is oil shale, not the quartz, that is chiefly responsible for the abrasive nature of the material, although air refinement combined with controlled milling of the bentonite as a pretreatment may offer a route to its removal. Ion exchange of Wyoming bentonite with a long-chain quaternary ammonium salt using the 'dry' process affords a partially exchanged (69-78%) organoclay, with a monolayer of ammonium ions in the interlayer. Excess ion pairs are sorbed on the silicate surfaces of both the clay mineral and the quartz impurity phases. Such surface sorption is enhanced by the presence of very finely divided, superparamagnetic Fe2O3 or Fe(O)(OH) contaminating the surfaces of the major mineral components.
The sorbed material is labile to washing, and induces a measurable shielding of the 29Si nuclei in both clay and quartz phases in the MAS NMR experiment, due to an anisotropic magnetic susceptibility effect. XRD data for humidified samples reveal the interlamellar regions to be strongly hydrophobic, with the by-product sodium chloride being expelled to the external surfaces. Many organic cations will exchange onto a clay. The tetracationic cyclophane and multipurpose receptor cyclobis(paraquat-p-phenylene) undergoes ion exchange onto Wyoming bentonite to form a pillared clay with a very regular gallery height. The major plane of the cyclophane is normal to the silicate surfaces, thus leaving the cavity available for complexation. A series of group VI substituted o-dimethoxybenzenes was introduced and shown to participate in host/guest interactions with the cyclophane. Evidence is given which suggests that binding the host structure to a clay substrate confers advantages on the charge-transfer complex, not only of transportability and usability but also of stability, which may prove useful in a variety of commercial applications. The fundamental relationship between particle size, cation exchange capacity (CEC) and chemical composition of clays was also examined. For Wyoming bentonite the extent of isomorphous substitution increases with decreasing particle size, causing the CEC to increase similarly, although the ratio of isomorphous substitution sites to edge sites remains invariant throughout the particle size range studied.
Abstract:
Most parametric software cost estimation models in use today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson System Development (JSD). Because current cost estimating methods do not take account of these developments, they cannot provide adequate estimates of effort and hence cost for such projects. To address these shortcomings, two new estimation methods have been developed for JSD projects. The first, JSD-FPA, is a top-down estimating method based on the existing MkII function point method. The second, JSD-COCOMO, is a sizing technique which sizes a project, in terms of lines of code, from the process structure diagrams and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. It uses counts of various attributes of a JSD specification to develop a metric which indicates the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and using this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects. The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
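The productivity-based step that such top-down methods share can be sketched as follows; the function and all figures below are illustrative placeholders, not the JSD-FPA calibration itself:

```python
# Sketch of productivity-based effort estimation: calibrate average
# productivity from past projects, then apply it to the size metric of a
# new project. All numbers are made up for illustration.

def effort_estimate(size_points, past_sizes, past_efforts):
    """Estimated effort = size / (historical size delivered per unit effort)."""
    productivity = sum(past_sizes) / sum(past_efforts)  # size units per person-month
    return size_points / productivity

# Three hypothetical past projects: size metric values and person-months spent
past_sizes = [120.0, 250.0, 180.0]
past_efforts = [10.0, 22.0, 15.0]
print(round(effort_estimate(200.0, past_sizes, past_efforts), 1))
```

In practice the validation step described above amounts to running this calculation with one project held out and comparing the prediction against the recorded effort.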
Abstract:
The first clinically proven nicotine replacement product to obtain regulatory approval was Nicorette® gum. It provides a convenient way of delivering nicotine directly to the buccal cavity, thus circumventing 'first-pass' elimination following gastrointestinal absorption. Since its launch, Nicorette® gum has been investigated in numerous clinical studies, which are often difficult to compare due to large variations in study design and degree of sophistication. In order to standardise testing, in 2000 the European Pharmacopoeia introduced an apparatus to investigate the in vitro release of drug substances from medicated chewing gum. Using this chewing machine, the main aims of this project were to determine factors that could affect release from Nicorette® gum, to develop an in vitro-in vivo correlation (IVIVC) and to investigate the effect of formulation variables on the release of nicotine from gums. A standard in vitro test method was developed: the gum was placed in the chewing chamber with 40 mL of artificial saliva at 37 °C and chewed at 60 chews per minute. The chew rate, the type of dissolution medium used, and its pH, volume, temperature and ionic strength were altered to investigate their effects on release in vitro. Increasing the temperature of the dissolution medium and the rate at which the gums were chewed resulted in a greater release of nicotine, whilst increasing the ionic strength of the dissolution medium to 80 mM resulted in a lower release. The addition of 0.1% sodium lauryl sulphate to the artificial saliva was found to double the release of nicotine compared with artificial saliva or water alone. Altering the dissolution volume and the starting pH, however, did not affect release.
The increase in pH may be insufficient to provide optimal conditions for nicotine absorption, since the rate at which nicotine is transported through the buccal membrane was found to be higher at pH values greater than 8.6, where nicotine is predominately unionised. Using a time-mapping function, it was also possible to establish a level A in vitro-in vivo correlation: 4 mg Nicorette® gum was chewed at various chew rates in vitro and correlated with an in vivo chew-out study. All chew rates used in vitro could be successfully used for IVIVC purposes; statistically, however, chew rates of 10 and 20 chews per minute performed better than all other chew rates. Finally, a series of nicotine gums was made to investigate the effect of formulation variables on the release of nicotine from the gum. Using a directly compressible gum base, the gums crumbled when chewed in vitro in comparison with Nicorette®, resulting in a faster release of nicotine. To investigate the effects of altering the gum base, the concentration of sodium salts and sugar syrup, the form of the active drug, the addition sequence and the incorporation of surfactant into the gum, the traditional manufacturing method was used to make a series of gum formulations. Results showed that the time of addition of the active drug, the incorporation of surfactants and the use of a different gum base all increased the release of nicotine from the gum. In contrast, reducing the concentration of sodium carbonate resulted in a lower release. Using a stronger nicotine ion-exchange resin delayed the release of nicotine from the gum, whilst altering the concentration of sugar syrup had little effect on the release but altered the texture of the gum.
Abstract:
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT • The cytotoxic effects of 6-mercaptopurine (6-MP) were found to be due to drug-derived intracellular metabolites (mainly 6-thioguanine nucleotides and, to some extent, 6-methylmercaptopurine nucleotides) rather than the drug itself. • Current empirical dosing methods for oral 6-MP result in highly variable drug and metabolite concentrations and hence variability in treatment outcome. WHAT THIS STUDY ADDS • The first population pharmacokinetic model has been developed for 6-MP active metabolites in paediatric patients with acute lymphoblastic leukaemia, and the potential demographic and genetically controlled factors that could lead to interpatient pharmacokinetic variability in this population have been assessed. • The model shows a large reduction in interindividual variability of pharmacokinetic parameters when body surface area and thiopurine methyltransferase polymorphism are incorporated into the model as covariates. • The developed model offers a more rational dosing approach for 6-MP than the traditional empirical method (based on body surface area) by combining it with pharmacogenetically guided dosing based on thiopurine methyltransferase genotype. AIMS - To investigate the population pharmacokinetics of 6-mercaptopurine (6-MP) active metabolites in paediatric patients with acute lymphoblastic leukaemia (ALL) and examine the effects of various genetic polymorphisms on the disposition of these metabolites. METHODS - Data were collected prospectively from 19 paediatric patients with ALL (n = 75 samples, 150 concentrations) who received 6-MP maintenance chemotherapy (titrated to a target dose of 75 mg m−2 day−1). All patients were genotyped for polymorphisms in three enzymes involved in 6-MP metabolism. Population pharmacokinetic analysis was performed with the nonlinear mixed-effects modelling program NONMEM to determine the population mean parameter estimate of clearance for the active metabolites.
RESULTS - The developed model revealed considerable interindividual variability (IIV) in the clearance of 6-MP active metabolites [6-thioguanine nucleotides (6-TGNs) and 6-methylmercaptopurine nucleotides (6-mMPNs)]. Body surface area explained a significant part of the IIV in 6-TGN clearance when incorporated in the model (IIV reduced from 69.9% to 29.3%). The most influential covariate examined, however, was thiopurine methyltransferase (TPMT) genotype, which resulted in the greatest reduction in the model's objective function (P < 0.005) when incorporated as a covariate affecting the fractional metabolic transformation of 6-MP into 6-TGNs. The other genetic covariates tested were not statistically significant and therefore were not included in the final model. CONCLUSIONS - The developed pharmacokinetic model (if successful at external validation) would offer a more rational dosing approach for 6-MP than the traditional empirical method, since it combines the current practice of using body surface area in 6-MP dosing with pharmacogenetically guided dosing based on TPMT genotype.
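As a rough illustration of how such covariates typically enter a population model (this is a generic power-model sketch with placeholder constants, not the fitted NONMEM model from the study):

```python
# Generic covariate parameterisation sketch: clearance scales with body
# surface area via a power model, and the fraction of 6-MP converted to
# 6-TGNs rises with each TPMT variant allele (less drug is diverted to
# methylation). Every constant here is a hypothetical placeholder.

def tgn_clearance(bsa_m2, cl_typical=5.0, bsa_ref=1.0, theta_bsa=1.0):
    """Typical 6-TGN clearance for a patient of given body surface area."""
    return cl_typical * (bsa_m2 / bsa_ref) ** theta_bsa

def fraction_to_tgn(tpmt_variant_alleles, f_wild=0.2, fold_per_allele=1.5):
    """Fractional metabolic transformation of 6-MP into 6-TGNs by genotype."""
    return min(1.0, f_wild * fold_per_allele ** tpmt_variant_alleles)

print(round(tgn_clearance(1.2), 2))  # larger child -> higher typical clearance
print(round(fraction_to_tgn(1), 2))  # TPMT heterozygote -> larger 6-TGN fraction
```

A dose individualised this way would scale with BSA (as in current practice) while the TPMT term adjusts the expected 6-TGN exposure per milligram given.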
Abstract:
BACKGROUND: Contrast detection is an important aspect of the assessment of visual function; however, clinical tests evaluate only a limited range of spatial frequencies and contrasts. This study validates the accuracy and inter-test repeatability of the Aston contrast sensitivity test, a swept-frequency mobile app for near and distance testing, which overcomes this limitation of traditional charts. METHOD: Twenty subjects wearing their full refractive correction underwent contrast sensitivity testing on the new near application (near app), distance app, CSV-1000 and Pelli-Robson charts with full correction and with vision degraded by 0.8 and 0.2 Bangerter degradation foils. In addition, repeated measures using the 0.8 occluding foil were taken. RESULTS: The mobile apps (near more than distance, p = 0.005) recorded a higher contrast sensitivity than the printed tests (p < 0.001); however, all charts showed a reduction in measured contrast sensitivity with degradation (p < 0.001) and a similar decrease with increasing spatial frequency (interaction p > 0.05). Although the coefficient of repeatability was lowest for the Pelli-Robson charts (0.14 log units), the mobile app charts measured more spatial frequencies, took less time and were more repeatable (near: 0.26 to 0.37 log units; distance: 0.34 to 0.39 log units) than the CSV-1000 (0.30 to 0.93 log units). The duration to complete the CSV-1000 was 124 ± 37 seconds, Pelli-Robson 78 ± 27 seconds, near app 53 ± 15 seconds and distance app 107 ± 36 seconds. CONCLUSIONS: While there were differences between charts in the contrast levels measured, the new Aston near and distance apps are a valid, repeatable and time-efficient method of assessing contrast sensitivity at multiple spatial frequencies.
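For context, the coefficient of repeatability quoted for each chart is conventionally 1.96 times the standard deviation of the paired test-retest differences (Bland-Altman); a minimal sketch with made-up log contrast sensitivity scores:

```python
from statistics import stdev

def coefficient_of_repeatability(test, retest):
    """1.96 x SD of paired test-retest differences (Bland-Altman CoR)."""
    diffs = [a - b for a, b in zip(test, retest)]
    return 1.96 * stdev(diffs)

# Hypothetical log contrast sensitivity scores from two sessions
session1 = [1.65, 1.50, 1.80, 1.95, 1.70]
session2 = [1.60, 1.55, 1.75, 2.00, 1.72]
print(round(coefficient_of_repeatability(session1, session2), 2))
```

A smaller CoR means a retest is expected to fall within a narrower band of the first measurement, which is how the chart comparisons above should be read.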
Abstract:
Saturation mutagenesis is a powerful tool in modern protein engineering, permitting key residues within a protein to be targeted in order to potentially enhance specific functionalities. However, the creation of large libraries using conventional saturation mutagenesis with degenerate codons (NNN or NNK/S) has inherent redundancy and consequent disparities in codon representation. Both chemical (trinucleotide phosphoramidites) and biological (sequential, enzymatic single-codon addition) methods of non-degenerate saturation mutagenesis have therefore been developed to combat these issues and so improve library quality. Large libraries with multiple saturated positions can be limited by the method used to screen them. Cell-dependent methods such as phage display, though traditionally the screening methods of choice, are limited by the need for transformation. A number of cell-free screening methods, such as CIS display, which link the screened phenotype with the encoded genotype, have the capability of screening libraries with up to 10^14 members. This thesis describes the further development of ProxiMAX technology to reduce library codon bias and its integration with CIS display to screen the resulting library. Synthetic MAX oligonucleotides are ligated to an acceptor base sequence, amplified and digested, thereby adding a randomised codon to the acceptor; this forms an iterative cycle in which the digested product of one cycle serves as the base sequence for the next. Initial use of ProxiMAX highlighted areas of the process where changes could be implemented to improve the codon representation in the final library. The refined process was used to construct a monomeric anti-NGF peptide library, based on two proprietary dimeric peptides (Isogenica) that bind NGF. The resulting library showed greatly improved codon representation, equating to a theoretical diversity of ~69%.
The library was subsequently screened using CIS display, and the peptides discovered were assessed for inhibition of the NGF-TrkA interaction by ELISA. Despite binding to TrkA, these peptides showed lower levels of inhibition of the NGF-TrkA interaction than the parental dimeric peptides, highlighting the importance of dimerisation for inhibition of NGF-TrkA binding.
Abstract:
Renewable energy project development is highly complex and success is by no means guaranteed. Decisions are often made with approximate or uncertain information, yet the methods currently employed by decision-makers do not necessarily accommodate this. Levelised energy cost (LEC) is one commonly applied measure used within the energy industry to assess the viability of potential projects and to inform policy. This research proposes a method for accommodating such uncertainty by enhancing the traditional discounted LEC measure with fuzzy set theory. Furthermore, the research develops the fuzzy LEC (F-LEC) methodology to incorporate the cost of financing a project from debt and equity sources. Applied to an example bioenergy project, the research demonstrates the benefit of incorporating fuzziness for decision-making on project viability, optimal capital structure and key-variable sensitivity analysis. The proposed method contributes by incorporating uncertain and approximate information into the widely used LEC measure and by being applicable to a wide range of energy project viability decisions. © 2013 Elsevier Ltd. All rights reserved.
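The core idea of fuzzifying LEC can be sketched as follows, with annual costs and energy yields given as triangular fuzzy numbers (pessimistic, most likely, optimistic) combined by standard fuzzy interval arithmetic; the project figures and 5% discount rate are illustrative only, not taken from the paper:

```python
# Hedged sketch of a fuzzy levelised energy cost: discount each bound of the
# fuzzy cost and energy streams, then divide. Figures are made up.

def present_value(stream, rate):
    """Discount a stream of yearly values back to year zero."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream, start=1))

def fuzzy_lec(cost_lo, cost_md, cost_hi, en_lo, en_md, en_hi, rate):
    """LEC as a triangular fuzzy number (low, mode, high).
    For positive triangular numbers, A/B = (a1/b3, a2/b2, a3/b1)."""
    pv = lambda s: present_value(s, rate)
    return (pv(cost_lo) / pv(en_hi),
            pv(cost_md) / pv(en_md),
            pv(cost_hi) / pv(en_lo))

# Two-year toy project: yearly costs in currency units, energy in MWh
lec = fuzzy_lec([90, 90], [100, 100], [115, 115],
                [900, 900], [1000, 1000], [1050, 1050], 0.05)
print(tuple(round(x, 4) for x in lec))
```

The resulting (low, mode, high) triple brackets the crisp LEC, so a decision-maker sees the plausible cost range rather than a single point estimate.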
Abstract:
Synchronous reluctance motors (SynRMs) are gaining popularity in industrial drives due to their permanent-magnet-free design, competitive performance and robustness. This paper studies the power losses in a 90-kW converter-fed SynRM drive by a calorimetric method, in comparison with the traditional input-output method. After the converter and the motor had been measured simultaneously in separate chambers, the converter was installed inside the large chamber next to the motor and the total drive system losses were obtained using one chamber. The uncertainty of both measurement methods is analysed and discussed.
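The two measurement principles compared above reduce to simple balances; a hedged illustration with made-up numbers (not the paper's measurements):

```python
# Input-output method: losses are the difference between electrical input and
# mechanical output power. Calorimetric method: losses equal the heat carried
# away by the chamber coolant, m_dot * cp * delta_T. All values are made up.

def losses_input_output(p_in_w, p_out_w):
    """Drive losses as input power minus output power, in watts."""
    return p_in_w - p_out_w

def losses_calorimetric(mass_flow_kg_s, cp_j_per_kg_k, delta_t_k):
    """Drive losses as heat removed by the coolant, in watts."""
    return mass_flow_kg_s * cp_j_per_kg_k * delta_t_k

# 90 kW drive example: 93 kW electrical in, 90 kW mechanical out
print(losses_input_output(93_000, 90_000))
# Coolant: 0.12 kg/s of water (cp ~ 4186 J/(kg K)) warming by 6 K
print(round(losses_calorimetric(0.12, 4186, 6.0)))
```

At high drive efficiency the input-output figure is a small difference between two large measured powers, so small sensor errors dominate it; measuring the dissipated heat directly is why the calorimetric approach can reach lower uncertainty.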
Abstract:
Introduction - The design of the UK MPharm curriculum is driven by the Royal Pharmaceutical Society of Great Britain (RPSGB) accreditation process and the EU directive (85/432/EEC).[1] Although the RPSGB is informed about teaching activity in UK Schools of Pharmacy (SOPs), there is no database which aggregates information to provide the whole picture of pharmacy education within the UK. The aim of the teaching, learning and assessment study [2] was to document and map current programmes in the 16 established SOPs. Recent developments in programme delivery have resulted in a focus on deep learning (for example, through problem-based learning approaches) and on being more student-centred and less didactic through lectures. The specific objectives of this part of the study were (a) to quantify the content and modes of delivery of material as described in course documentation and (b) having categorised the range of teaching methods, to ask students to rate how important they perceived each one to be for their own learning (using a three-point Likert scale: very important, fairly important or not important). Material and methods - The study design compared three datasets: (1) a quantitative course document review, (2) qualitative staff interviews and (3) a quantitative student self-completion survey. All 16 SOPs provided a set of their undergraduate course documentation for the year 2003/4. The documentation variables were entered into Excel tables. A self-completion questionnaire was administered to all year-four undergraduates (n=1847) in 15 SOPs within Great Britain, using a pragmatic mixture of methods. The survey data were analysed (n=741) using SPSS, excluding non-UK students who may have undertaken part of their studies within a non-UK university. Results and discussion - Interviews showed that individual teachers and course module leaders determine the choice of teaching methods used.
Content review of the documentary evidence showed that 51% of the taught element of the course was delivered using lectures, 31% using practicals (including computer-aided learning) and 18% using small-group or interactive teaching. There was high uniformity across the schools for the first three years; variation in the final year was due to the project. The average number of hours per year across 15 schools (data for one school were not available) was: year 1: 408 hours; year 2: 401 hours; year 3: 387 hours; year 4: 401 hours. The survey showed that students perceived lectures to be the most important method of teaching after dispensing or clinical practicals. Taking the very important rating only: 94% (n=694) rated dispensing or clinical practicals very important; 75% (n=558) lectures; 52% (n=386) workshops; 50% (n=369) tutorials; 43% (n=318) directed study. Scientific laboratory practicals were rated very important by only 31% (n=227). The study shows that the teaching of pharmacy to undergraduates in the UK is still essentially didactic, with a high proportion of formal lectures and high levels of staff-student contact. Schools still consider lectures to be the most cost-effective means of delivering the core syllabus to large cohorts of students. However, this limits the scope for optionality within teaching, for small-group work and for developing multi-professional learning or practice placements. Although novel teaching and learning techniques such as e-learning have expanded considerably over the past decade, schools of pharmacy have concentrated on lectures as the best way of coping with the huge expansion in student numbers. References [1] Council Directive. Concerning the coordination of provisions laid down by law, regulation or administrative action in respect of certain activities in the field of pharmacy. Official Journal of the European Communities 1985;85/432/EEC. [2] Wilson K, Jesson J, Langley C, Clarke L, Hatfield K.
MPharm Programmes: Where are we now? Report commissioned by the Pharmacy Practice Research Trust, 2005.
Abstract:
This research provides a novel approach for determining the water content and higher heating value of pyrolysis oil. Pyrolysis oil from Napier grass was used in this study. Water content was determined with pH adjustment using a Karl Fischer titration unit. An equation for the actual water in the oil was developed and used, and the results were compared with the traditional Karl Fischer method. The oil was found to contain between 42 and 64% moisture under the same pyrolysis conditions, depending on the properties of the Napier grass prior to pyrolysis. The higher heating value of the pyrolysis oil was determined using an oil-diesel mixture; 20 to 25 wt% of oil in the mixture gave optimal and stable results. A new model was developed for evaluating the higher heating value of dry pyrolysis oil. The dry oil has higher heating values in the range of 19 to 26 MJ/kg. The developed protocols and equations may serve as a reliable alternative means of establishing the actual water content and higher heating value of pyrolysis oil.
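The blend approach can be sketched as follows, assuming heating values mix linearly by mass fraction (the thesis's actual model and numbers may differ; the diesel value and blend measurement below are illustrative):

```python
# Hedged sketch: back out the oil's higher heating value (HHV) from a bomb-
# calorimeter measurement of an oil-diesel blend, assuming linear mixing.

def hhv_oil(hhv_mix, hhv_diesel, oil_mass_fraction):
    """Solve hhv_mix = x*hhv_oil + (1-x)*hhv_diesel for hhv_oil (MJ/kg)."""
    x = oil_mass_fraction
    return (hhv_mix - (1 - x) * hhv_diesel) / x

# 20 wt% pyrolysis oil in diesel (HHV of diesel taken as ~45.5 MJ/kg),
# with the blend measured at 41 MJ/kg
print(round(hhv_oil(41.0, 45.5, 0.20), 1))
```

Blending is useful here because wet pyrolysis oil can be difficult to ignite reliably in a bomb calorimeter on its own; the 20 to 25 wt% window reported above is where the blend measurement is stable.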
Drying kinetic analysis of municipal solid waste using modified page model and pattern search method
Abstract:
This work studied the drying kinetics of the organic fraction of municipal solid waste (MSW) samples with different initial moisture contents and presented a new method for determining drying kinetic parameters. A series of drying experiments at different temperatures was performed using a thermogravimetric technique. Based on the modified Page drying model and the general pattern search method, a new drying kinetic method was developed that fits multiple isothermal drying curves simultaneously. The new method fitted the experimental data more accurately than the traditional method. Drying kinetic behaviour under extrapolated conditions was also predicted and validated. The new method indicated that the drying activation energies for the samples with initial moisture contents of 31.1% and 17.2% on a wet basis were 25.97 and 24.73 kJ mol−1, respectively. These results are useful for drying process simulation and industrial dryer design. The new method can also be applied to determine the drying parameters of other materials with high reliability.
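The fitting idea can be sketched as follows: a modified Page model MR = exp(-(k*t)^n) with an Arrhenius rate constant k = k0*exp(-Ea/(R*T)), fitted to several isothermal curves at once by a minimal compass-style pattern search. The data, starting guess and model constants below are synthetic, not the paper's:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def moisture_ratio(t_min, temp_k, k0, ea, n):
    """Modified Page model MR = exp(-(k*t)^n), Arrhenius k = k0*exp(-Ea/(R*T))."""
    k = k0 * math.exp(-ea / (R * temp_k))
    return math.exp(-((k * t_min) ** n))

def sse(params, curves):
    """Sum of squared errors over all isothermal curves simultaneously."""
    k0, ea, n = params
    return sum((mr - moisture_ratio(t, temp, k0, ea, n)) ** 2
               for temp, points in curves for t, mr in points)

def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=2000):
    """Minimise f by polling relative moves of +/-step on each (positive)
    coordinate, shrinking the step whenever no move improves."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] *= (1 + d)  # relative move keeps parameter scale sane
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
            if step < tol:
                break
    return x, fx

# Synthetic isothermal curves generated from known parameters
true_params = (200.0, 25_000.0, 1.1)  # k0 (1/min), Ea (J/mol), n
curves = [(temp, [(t, moisture_ratio(t, temp, *true_params))
                  for t in (5.0, 10.0, 20.0, 40.0)])
          for temp in (313.0, 333.0, 353.0)]
fitted, err = pattern_search(lambda p: sse(p, curves), [150.0, 20_000.0, 1.0])
# err should end up far below the misfit of the starting guess
```

Fitting all temperatures in one objective is what distinguishes this approach from fitting each curve separately: one (k0, Ea, n) triple must explain every curve, which is what makes extrapolation to unmeasured temperatures defensible.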