937 results for Design and Formative Studies of AIED Systems
Abstract:
Tuberculosis is one of the most devastating diseases in the world, primarily due to several decades of neglect and the emergence of multidrug-resistant (MDR) strains of M. tuberculosis, together with the increased incidence of disseminated infections produced by other mycobacteria in AIDS patients. This has prompted the search for new antimycobacterial drugs. A series of pyridine-2-, pyridine-3-, pyridine-4-, pyrazine- and quinoline-2-carboxamidrazone derivatives and new classes of carboxamidrazone were prepared in an automated fashion and by traditional synthesis. Over nine hundred synthesized compounds were screened for their antimycobacterial activity against M. fortuitum (NCTC 10394) as a surrogate for M. tuberculosis. The new classes of amidrazones were also screened against M. tuberculosis H37Rv and for antimicrobial activity against various bacteria. Fifteen tested compounds were found to provide 90-100% inhibition of growth of M. tuberculosis H37Rv in the primary screen at 6.25 μg mL-1. The most active compound in the carboxamidrazone amide series had an MIC value of 0.1-2 μg mL-1 against M. fortuitum. The enzyme dihydrofolate reductase (DHFR) has been a drug-design target for decades. Blocking the enzymatic activity of DHFR is a key element in the treatment of many diseases, including cancer and bacterial and protozoal infections. The X-ray structures of DHFR from M. tuberculosis and human DHFR were found to have differences in the substrate binding site. The presence of a glycerol molecule in the X-ray structure of M. tuberculosis DHFR provided an opportunity to design new antifolates. The new antifolates described herein were designed to retain the pharmacophore of pyrimethamine (2,4-diamino-5-(4-chlorophenyl)-6-ethylpyrimidine), but to encompass a range of polar groups that might interact with the M. tuberculosis DHFR glycerol binding pockets.
Finally, the research described in this thesis contributes to the preparation of molecularly imprinted polymers that recognise 2,4-diaminopyrimidine by binding the target. The formation of hydrogen bonds between the model functional monomer 5-(4-tert-butyl-benzylidene)-pyrimidine-2,4,6-trione and 2,4-diaminopyrimidine in the pre-polymerisation stage was verified by 1H-NMR studies. Having proven that 2,4-diaminopyrimidine interacts strongly with the model 5-(4-tert-butylbenzylidene)-pyrimidine-2,4,6-trione, 2,4-diaminopyrimidine-imprinted polymers were prepared using a novel cyclobarbital-derived functional monomer, acrylic acid 4-(2,4,6-trioxo-tetrahydro-pyrimidin-5-ylidenemethyl)phenyl ester, capable of multiple hydrogen bond formation with 2,4-diaminopyrimidine. The recognition properties of the respective polymers toward the template and other test compounds were evaluated by fluorescence. The results demonstrate that the polymers showed dose-dependent enhancement of fluorescence emissions. In addition, the results indicate that the synthesized MIPs have higher 2,4-diaminopyrimidine binding ability than the corresponding non-imprinted polymers.
Abstract:
The work presented in this thesis is divided into two distinct sections. In the first, the functional neuroimaging technique of Magnetoencephalography (MEG) is described and a new technique is introduced for accurate combination of MEG and MRI co-ordinate systems. In the second part of this thesis, MEG and the analysis technique of SAM are used to investigate responses of the visual system in the context of functional specialisation within the visual cortex. In chapter one, the sources of MEG signals are described, followed by a brief description of the instrumentation necessary for accurate MEG recordings. The chapter concludes by introducing the forward and inverse problems of MEG, techniques to solve the inverse problem, and a comparison of MEG with other neuroimaging techniques. Chapter two provides an important contribution to the field of research with MEG. Firstly, it is described how MEG and MRI co-ordinate systems are combined for localisation and visualisation of activated brain regions. A previously used co-registration method is then described, and a new technique is introduced. In a series of experiments, it is demonstrated that using fixed fiducial points provides a considerable improvement in the accuracy and reliability of co-registration. Chapter three introduces the visual system, starting from the retina and ending with the higher visual areas. The functions of the magnocellular and the parvocellular pathways are described and it is shown how the parallel visual pathways remain segregated throughout the visual system. The structural and functional organisation of the visual cortex is then described. Chapter four presents strong evidence in favour of the link between conscious experience and synchronised brain activity. The spatiotemporal responses of the visual cortex are measured in response to specific gratings.
It is shown that stimuli that induce visual discomfort and visual illusions share their physical properties with those that induce highly synchronised gamma-frequency oscillations in the primary visual cortex. Finally, chapter five is concerned with the localisation of colour in the visual cortex. In this first ever use of Synthetic Aperture Magnetometry to investigate colour processing in the visual cortex, it is shown that in response to isoluminant chromatic gratings, the highest magnitude of cortical activity arises from area V2.
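The fiducial-based co-registration of chapter two amounts to finding the rigid-body transform that maps MEG head-frame fiducial coordinates onto their MRI counterparts. A minimal sketch of the standard least-squares (Kabsch/Procrustes) solution follows; the function name and fiducial coordinates are invented for illustration, and the thesis's actual procedure may differ:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) with
    R @ src_i + t ~= dst_i: the standard Kabsch/Procrustes solution.
    Illustrative of fiducial-based MEG/MRI co-registration only."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)   # centroids of each point set
    h = (src - cs).T @ (dst - cd)                 # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cd - r @ cs
    return r, t
```

Given three or more non-collinear fiducials (e.g. nasion and preauricular points), the recovered R and t map any MEG-space location into the MRI frame for visualisation.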
Abstract:
This study is concerned with several proposals concerning multiprocessor systems and with the various possible methods of evaluating such proposals. After a discussion of the advantages and disadvantages of several performance evaluation tools, the author decides that simulation is the only tool powerful enough to develop a model which would be of practical use in the design, comparison and extension of systems. The main aims of the simulation package developed as part of this study are cost effectiveness, ease of use and generality. The methodology on which the simulation package is based is described in detail. The fundamental principles are that model design should reflect actual systems design, that measuring procedures should be carried out alongside design, that models should be well documented and easily adaptable, and that models should be dynamic. The simulation package itself is modular, and in this way reflects current design trends. This approach also aids documentation and ensures that the model is easily adaptable. It contains a skeleton structure and a library of segments which can be added to, or directly swapped with, segments of the skeleton structure to form a model which fits a user's requirements. The study also contains the results of some experimental work carried out using the model, the first part of which tests the model's capabilities by simulating a large operating system, the ICL George 3 system; the second part deals with general questions and some of the many proposals concerning multiprocessor systems.
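As an illustration of the skeleton-plus-segments idea, the sketch below shows a minimal event-driven simulation core with swappable handler "segments". All class, segment and model names are invented and do not reflect the package described in the study:

```python
import heapq

class Simulator:
    """Minimal event-driven skeleton: a core scheduler plus swappable
    'segments' (event handlers), so a model is assembled from modules
    rather than written monolithically.  Illustrative names only."""
    def __init__(self):
        self.clock = 0.0
        self._queue = []      # heap of (time, seq, segment_name, payload)
        self._seq = 0         # tie-breaker keeps same-time events deterministic
        self.segments = {}    # segment name -> handler(sim, payload)

    def schedule(self, delay, segment, payload=None):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, segment, payload))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, name, payload = heapq.heappop(self._queue)
            self.segments[name](self, payload)

# assemble a toy model: a CPU segment servicing jobs that arrive every 2 time units
sim = Simulator()
log = []
def cpu(sim, job):
    log.append((sim.clock, job))
    sim.schedule(2.0, "cpu", job + 1)   # schedule the next arrival
sim.segments["cpu"] = cpu
sim.schedule(0.0, "cpu", 0)
sim.run(until=6.0)
# log now records jobs 0..3 processed at times 0, 2, 4 and 6
```

Swapping the `cpu` handler for another segment (a disk, a scheduler queue) changes the model without touching the engine, mirroring the modular structure the study advocates.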
Abstract:
Communication and portability are the two main problems facing the user. An operating system, called PORTOS, was developed to solve these problems for users on dedicated microcomputer systems. Firstly, an interface language was defined according to the anticipated requirements and behaviour of its potential users. Secondly, the PORTOS operating system was developed as a processor for this language. The system is currently running on two minicomputers of highly different architectures. PORTOS achieves its portability through its high-level design and its implementation in CORAL66. The interface language consists of a set of user commands and system responses. Although only a subset has been implemented, owing to time and manpower constraints, promising results were achieved regarding the usability of the language and its portability.
Abstract:
Manufacturing firms are driven by competitive pressures to continually improve the effectiveness and efficiency of their organisations. For this reason, manufacturing engineers often implement changes to existing processes, or design new production facilities, with the expectation of making further gains in manufacturing system performance. This thesis relates to how the likely outcome of this type of decision should be predicted prior to its implementation. The thesis argues that since manufacturing systems must also interact with many other parts of an organisation, the expected performance improvements can often be significantly hampered by constraints that arise elsewhere in the business. As a result, decision-makers should attempt to predict just how well a proposed design will perform when these other factors, or 'support departments', are taken into consideration. However, the thesis also demonstrates that, in practice, where quantitative analysis is used to evaluate design decisions, the analysis model invariably ignores the potential impact of support functions on a system's overall performance. A more comprehensive modelling approach is therefore required. A study of how various business functions interact establishes that to properly represent the kind of delays that give rise to support department constraints, a model should actually portray the dynamic and stochastic behaviour of entities in both the manufacturing and non-manufacturing aspects of a business. This implies that computer simulation be used to model design decisions but current simulation software does not provide a sufficient range of functionality to enable the behaviour of all of these entities to be represented in this way. The main objective of the research has therefore been the development of a new simulator that will overcome limitations of existing software and so enable decision-makers to conduct a more holistic evaluation of design decisions. 
It is argued that the application of object-oriented techniques offers a potentially better way of fulfilling both the functional and ease-of-use issues relating to development of the new simulator. An object-oriented analysis and design of the system, called WBS/Office, is therefore presented that extends to modelling a firm's administrative and other support activities in the context of the manufacturing system design process. A particularly novel feature of the design is the ability for decision-makers to model how a firm's specific information and document processing requirements might hamper shop-floor performance. The simulator is primarily intended for modelling make-to-order batch manufacturing systems, and the thesis presents example models, created using a working version of WBS/Office, that demonstrate the feasibility of using the system to analyse manufacturing system designs in this way.
Abstract:
This thesis describes work exploring the application of expert system techniques to the domain of designing durable concrete. The nature of concrete durability design is described and some problems from the domain are discussed. Some related work on expert systems in concrete durability is described. Various implementation languages, PROLOG and OPS5, are considered and rejected in favour of a shell, CRYSTAL3 (later CRYSTAL4). Criteria for useful expert system shells in the domain are discussed, and CRYSTAL4 is evaluated in the light of these criteria. Modules in various sub-domains (mix design, sulphate attack, steel corrosion and alkali-aggregate reaction) are developed and organised under a BLACKBOARD system (called DEX). Extensions to the CRYSTAL4 modules are considered for different knowledge representations. These include LOTUS123 spreadsheets implementing models that incorporate some of the mathematical knowledge in the domain. Design databases are used to represent tabular design knowledge. Hypertext representations of the original building standards texts are proposed as a tool for providing a well-structured and extensive justification/help facility. A standardised approach to module development is proposed, using hypertext development as a structured basis for expert systems development. Some areas of deficient domain knowledge are highlighted, particularly in the use of data from mathematical models and in gaps and inconsistencies in the original knowledge source Digests.
Abstract:
It is known that parallel pathways exist within the visual system. These have been described as magnocellular and parvocellular as a result of the layered organisation of the lateral geniculate nucleus and extend from the retina to the cortex. Dopamine (DA) and acetylcholine (ACH) are neurotransmitters that are present in the visual pathway. DA is present in the retina and is associated with the interplexiform cells and horizontal cells. ACH is also present in the retina and is associated with displaced amacrine cells; it is also present in the superior colliculus. DA is found to be significantly depleted in the brain of Parkinson's disease (PD) patients and ACH in Alzheimer's disease (AD) patients. For this reason these diseases were used to assess the function of DA and ACH in the electrophysiology of the visual pathway. Experiments were conducted on young normals to design stimuli that would preferentially activate the magnocellular or parvocellular pathway. These stimuli were then used to evoke visual evoked potentials (VEP) in patients with PD and AD, in order to assess the function of DA and ACH in the visual pathway. Electroretinograms (ERGs) were also measured in PD patients to assess the role of DA in the retina. In addition, peripheral ACH function was assessed by measuring VEPs, ERGs and contrast sensitivity (CS) in young normals following the topical instillation of hyoscine hydrobromide (an anticholinergic drug). The results indicate that the magnocellular pathway can be divided into two: a cholinergic tectal-association area pathway carrying luminance information, and a non-cholinergic geniculo-cortical pathway carrying spatial information. It was also found that depletion of DA had very little effect on the VEPs or ERGs, confirming a general regulatory function for this neurotransmitter.
Abstract:
Drying is an important unit operation in the process industries. Results suggest that the energy used for drying increased from 12% of total energy use in 1978 to 18% in 1990. A literature survey of previous studies of overall drying energy consumption demonstrated that there is little continuity of methods, and energy trends could not be established. In the ceramics, timber and paper industrial sectors, specific energy consumption and energy trends have been investigated by auditing drying equipment. Ceramic products examined included tableware, tiles, sanitaryware, electrical ceramics, plasterboard, refractories, bricks and abrasives. Data from industry have shown that drying energy has not varied significantly in the ceramics sector over the last decade, representing about 31% of the total energy consumed. Information from the timber industry has established that radical changes have occurred over the last 20 years, both in terms of equipment and energy utilisation. The energy efficiency of hardwood drying has improved by 15% since the 1970s, although no significant savings have been realised for softwood. A survey estimating the energy efficiency and operating characteristics of 192 paper dryer sections has been conducted. Drying energy was found to have increased to nearly 60% of the total energy used in the early 1980s, but has fallen over the last decade, representing 23% of the total in 1993. These results demonstrate that effective energy-saving measures, such as improved pressing and heat recovery, have been successfully implemented since the 1970s. Artificial neural networks have been successfully applied to model process characteristics of microwave and convective drying of paper-coated gypsum cove. Parameters modelled include product moisture loss, core gypsum temperature and quality factors relating to paper burning and bubbling defects.
Evaluation of thermal and dielectric properties has highlighted gypsum's heat-sensitive characteristics in convective and electromagnetic regimes. Modelling of experimental data has shown that the networks were capable of simulating drying process characteristics to a high degree of accuracy. Product weight and temperature were predicted to within 0.5% and 5°C of the target data respectively. Furthermore, it was demonstrated that the underlying properties of the data could still be predicted in the presence of a high level of input noise.
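The kind of feed-forward network used for such drying models can be sketched in miniature. The code below fits a one-hidden-layer network, by plain per-sample gradient descent, to a synthetic first-order drying curve; the architecture, data and rate constant are invented for illustration and are not taken from the study:

```python
import math, random

def train_drying_mlp(hidden=8, epochs=3000, lr=0.05, seed=1):
    """Fit a tiny one-hidden-layer tanh network to a synthetic drying
    curve, moisture(t) = exp(-1.5 t).  Stands in for the thesis's ANN
    models of moisture loss; returns (mse_before, mse_after) training."""
    rng = random.Random(seed)
    xs = [i / 20 for i in range(21)]            # normalised drying time
    ys = [math.exp(-1.5 * x) for x in xs]       # target moisture ratio
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return h, b2 + sum(w2[j] * h[j] for j in range(hidden))

    def mse():
        return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    loss_before = mse()
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h, out = forward(x)
            err = out - y                       # dL/d(out) for squared error
            b2 -= lr * err
            for j in range(hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_h * x
                b1[j] -= lr * grad_h
    return loss_before, mse()
```

The study's networks additionally take process inputs such as microwave power; extending the sketch to several inputs only changes the first layer's weight vectors.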
Abstract:
Results of a pioneering study are presented in which for the first time, crystallization, phase separation and Marangoni instabilities occurring during the spin-coating of polymer blends are directly visualized, in real-space and real-time. The results provide exciting new insights into the process of self-assembly, taking place during spin-coating, paving the way for the rational design of processing conditions, to allow desired morphologies to be obtained. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abstract:
Future optical networks will require the implementation of very high capacity (and therefore spectrally efficient) technologies. Multi-carrier systems, such as Orthogonal Frequency Division Multiplexing (OFDM) and Coherent WDM (CoWDM), are promising candidates. In this paper, we present analytical, numerical, and experimental investigations of the impact of the relative phases between the optical subcarriers of CoWDM systems, as well as the effect that the number of independently modulated subcarriers can have on performance. We numerically demonstrate five-subcarrier and three-subcarrier 10-GBd CoWDM systems with directly detected amplitude shift keying (ASK) and differentially/coherently detected phase shift keying ((D)PSK). The simulation results are compared with experimental measurements of a 32-Gbit/s DPSK CoWDM system in two configurations. The first configuration was a practical 3-modulator array in which all three subcarriers were independently modulated; the second was a traditional 2-modulator odd/even configuration, in which only odd and even subcarriers were modulated independently of each other. Simulation and experimental results both indicate that the independent-modulation implementation has a greater dependency on the relative phases between subcarriers, with a stronger penalty for the centre subcarrier than in the odd/even modulation scheme.
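The sensitivity to relative subcarrier phases can be illustrated numerically. The sketch below computes the peak-to-average power ratio (PAPR) of a few phase-locked carriers spaced at the symbol rate (unmodulated CW tones, so a deliberate simplification of a real CoWDM signal) and shows how shifting the centre subcarrier's phase changes the combined field:

```python
import cmath, math

def papr_db(phases, n_samples=512):
    """Peak-to-average power ratio (dB) over one symbol period of
    len(phases) phase-locked subcarriers spaced at the symbol rate
    (the CoWDM condition).  Unmodulated carriers only: a toy model."""
    n = len(phases)
    peak, total = 0.0, 0.0
    for k in range(n_samples):
        t = k / n_samples                     # time within one symbol period
        field = sum(cmath.exp(1j * (2 * math.pi * (i - (n - 1) / 2) * t + phases[i]))
                    for i in range(n))        # frequencies ..., -1, 0, +1, ...
        p = abs(field) ** 2
        peak = max(peak, p)
        total += p
    return 10 * math.log10(peak * n_samples / total)

aligned = papr_db([0.0, 0.0, 0.0])           # all three subcarriers in phase
offset = papr_db([0.0, math.pi / 2, 0.0])    # centre subcarrier phase-shifted
# shifting the centre subcarrier's phase lowers the PAPR of the combined field
```

For three equal carriers in phase the peak power is 9 against an average of 3, i.e. a PAPR of 10·log10(3) ≈ 4.8 dB; a π/2 shift on the centre carrier reduces this, which is one way the relative-phase dependence reported above manifests itself.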
Abstract:
Objectives: Are behavioural interventions effective in reducing the rate of sexually transmitted infections (STIs) among genitourinary medicine (GUM) clinic patients? Design: Systematic review and meta-analysis of published articles. Data sources: Medline, CINAHL, Embase, PsycINFO, Applied Social Sciences Index and Abstracts, Cochrane Library Controlled Clinical Trials Register, National Research Register (1966 to January 2004). Review methods: Randomised controlled trials of behavioural interventions in sexual health clinic patients were included if they reported change in STI rates or self-reported sexual behaviour. Trial quality was assessed using the Jadad score, and results were pooled using random-effects meta-analyses where outcomes were consistent across studies. Results: 14 trials were included, 12 of them based in the United States. Experimental interventions were heterogeneous and most control interventions were more structured than typical UK care. Eight trials reported data on laboratory-confirmed infections, of which four observed a greater reduction in their intervention groups (in two cases this result was statistically significant, p<0.05). Seven trials reported consistent condom use, of which six observed a greater increase among their intervention subjects. Results for other measures of sexual behaviour were inconsistent. Success in reducing STIs was related to trial quality, use of social cognition models, and formative research in the target population. However, effectiveness was not related to intervention format or length. Conclusions: While results were heterogeneous, several trials observed reductions in STI rates. The most effective interventions were developed through extensive formative research. These findings should encourage further research in the United Kingdom, where new approaches to preventing STIs are urgently required.
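Random-effects pooling of the kind used in such reviews is typically the DerSimonian-Laird estimator. A compact sketch follows, with made-up effect sizes rather than data from the review:

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis.

    effects:   per-study effect estimates (e.g. log odds ratios)
    variances: their within-study variances
    Returns (pooled_effect, pooled_se, tau2), where tau2 is the
    estimated between-study variance."""
    k = len(effects)
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                # truncated at zero
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2
```

When between-study heterogeneity is large, tau2 grows and the pooled estimate moves toward an unweighted average of the studies, which is why heterogeneous trials, as reported above, widen the pooled confidence interval.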
Abstract:
Lipidome profiling of fluids and tissues is a growing field as the role of lipids as signaling molecules is increasingly understood; it relies on an effective and representative extraction of the lipids present. A number of solvent systems suitable for lipid extraction are commonly in use, though no comprehensive investigation of their effectiveness across multiple lipid classes has been carried out. To address this, human LDL from normolipidemic volunteers was used to evaluate five different solvent extraction protocols [Folch; Bligh and Dyer; acidified Bligh and Dyer; methanol (MeOH)-tert-butyl methyl ether (TBME); and hexane-isopropanol], and the extracted lipids were analyzed by LC-MS in a high-resolution instrument equipped with polarity switching. Overall, more than 350 different lipid species from 19 lipid subclasses were identified. Solvent composition had a small effect on the extraction of the predominant lipid classes (triacylglycerides, cholesterol esters, and phosphatidylcholines). In contrast, extraction of less abundant lipids (phosphatidylinositols, lyso-lipids, ceramides, and cholesterol sulfates) was greatly influenced by the solvent system used. Overall, the Folch method was most effective for the extraction of a broad range of lipid classes in LDL, although the hexane-isopropanol method was best for apolar lipids and the MeOH-TBME method was suitable for lactosyl ceramides. Copyright © 2013 by the American Society for Biochemistry and Molecular Biology, Inc.
Abstract:
Distributed network utility maximization (NUM) is receiving increasing interest for cross-layer optimization problems in multihop wireless networks. Traditional distributed NUM algorithms rely heavily on feedback information passed between different network elements, such as traffic sources and routers. Because of the distinct features of multihop wireless networks, such as time-varying channels and dynamic network topology, this feedback information is usually inaccurate, which represents a major obstacle to the application of distributed NUM to wireless networks. The questions to be answered include whether a distributed NUM algorithm can converge with inaccurate feedback and how to design effective distributed NUM algorithms for wireless networks. In this paper, we first use the infinitesimal perturbation analysis technique to provide an unbiased gradient estimate of the aggregate rate of traffic sources at the routers, based on locally available information. On this basis, we propose a stochastic approximation algorithm to solve the distributed NUM problem with inaccurate feedback. We then prove that, under certain conditions, the proposed algorithm converges to the optimal solution of distributed NUM with perfect feedback. The proposed algorithm is applied to the joint rate and media access control problem for wireless networks. Numerical results demonstrate the convergence of the proposed algorithm. © 2013 John Wiley & Sons, Ltd.
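A toy version of stochastic-approximation NUM conveys the idea. Below, identical sources with logarithmic utilities share one link whose aggregate-rate feedback is corrupted by noise; a Robbins-Monro diminishing step size lets the dual price converge despite the inaccuracy. The utilities, noise model and step-size schedule are invented and are not the paper's algorithm:

```python
import random

def num_stochastic_approx(n_sources=3, capacity=6.0, iters=5000, seed=0):
    """Dual-decomposition NUM: maximise sum_i log(x_i) subject to
    sum_i x_i <= capacity, when the link observes only a *noisy*
    measurement of the aggregate rate (inaccurate feedback).
    Robbins-Monro diminishing steps average the noise out."""
    rng = random.Random(seed)
    lam = 1.0                                    # link price (dual variable)
    for t in range(1, iters + 1):
        # each source's best response to the price: argmax log(x) - lam*x = 1/lam
        x = min(capacity, 1.0 / lam)
        noisy_load = n_sources * x + rng.gauss(0.0, 0.5)   # corrupted feedback
        lam = max(0.05, lam + (1.0 / (t + 5)) * (noisy_load - capacity))
    return [min(capacity, 1.0 / lam)] * n_sources

rates = num_stochastic_approx()
# with equal log utilities the optimum splits capacity equally: x_i -> capacity / n
```

With a constant step size the price would keep jittering around the optimum at a level set by the feedback noise; the 1/(t+5) schedule is what makes the iterates settle.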
Abstract:
Small indigenous manufacturers of electronic equipment are coming under increasingly severe pressure to adopt a strong defensive position against large multinational and Far Eastern companies. A common response to this threat has been for these firms to adopt a 'market driven' business strategy based on quality and customer service, rather than a 'technology led' strategy which uses technical specification and price to compete. To implement this type of strategy successfully, production systems need to be redesigned to suit the new demands of marketing. Increased range and fast response require economy of scope rather than economy of scale, while the organisation's culture must promote quality and process consciousness. This paper describes the 'Modular Assembly Cascade' concept, which addresses these needs by applying the principles of flexible manufacturing systems (FMS) and just-in-time (JIT) to electronics assembly. A methodology for executing the concept, called DRAMA (Design Routine for Adopting Modular Assembly), is also outlined.
Abstract:
A new bridge technique for the measurement of the dielectric absorption of liquids and solutions at microwave frequencies is described and its accuracy assessed. The dielectric data of the systems studied are discussed in terms of the relaxation processes contributing to the dielectric absorption and the apparent dipole moments. Pyridine, thiophen and furan in solution have a distribution of relaxation times, which may be attributed to the small size of the solute molecules relative to the solvent. Larger rigid molecules in solution were characterized by a single relaxation time, as would be anticipated from theory. The dielectric data of toluene, ethyl-, isopropyl- and t-butylbenzene as pure liquids and in solution were described by two relaxation times, one identified with molecular re-orientation and a shorter relaxation time. The subsequent work was an investigation of the possible explanations of this short relaxation process. Comparable short relaxation times were obtained from the analysis of the dielectric data of solutions of p-chloro- and p-bromotoluene below 40°C, o- and m-xylene at 25°C, and 1-methyl- and 2-methylnaphthalene at 50°C. Rigid molecules of similar shapes and sizes were characterized by a single relaxation time identified with molecular re-orientation. Contributions from a long relaxation process attributed to dipolar origins were reported for solutions of nitrobenzene, benzonitrile and p-nitrotoluene. A short relaxation process of possible dipolar origin contributed to the dielectric absorption of 4-methyl- and 4-t-butylpyridine in cyclohexane at 25°C. It was concluded that the most plausible explanation of the short relaxation process of the alkyl-aryl hydrocarbons studied is intramolecular relaxation about the alkyl-aryl bond.
Finally, the mean relaxation times of some phenyl-substituted compounds were investigated to evaluate any shortening due to contributions from relaxation about the phenyl-central atom bond. The relaxation times of triphenylsilane and phenyltrimethylsilane were found to be significantly short.