904 results for Optimisation of methods
Abstract:
In this project, antigen-containing microspheres were produced using a range of biodegradable polymers by single and double emulsion solvent evaporation and spray drying techniques. The proteins used in this study were mainly BSA, tetanus toxoid, the Y. pestis subunit vaccine antigens F1 and V, and the cytokine interferon-gamma. The polymer chosen for use in the vaccine preparation will directly determine the characteristics of the formulation. Full in vitro analysis of the preparations was carried out, including surface hydrophobicity and drug release profiles. The influence of the surfactants employed on microsphere surface hydrophobicity was demonstrated. Preparations produced with polyhydroxybutyrate and poly(DTH carbonate) polymers were also shown to be more hydrophobic than PLA microspheres, which may enhance particle uptake by antigen-presenting cells and Peyer's patches. Systemic immunisation with microspheres possessing a range of properties showed differences in the time course and extent of the immune response generated, which would allow optimisation of the dosing schedule to provide a maximal response from a single-dose preparation. Both systemic and mucosal responses were induced following oral delivery of microencapsulated tetanus toxoid, indicating that encapsulation of the antigen into a microsphere preparation provides protection in the gut and allows targeting of the mucosa-associated lymphoid tissue. Co-encapsulation of adjuvants for further enhancement of the immune response was also carried out, and the effect on loading and release pattern assessed. Co-encapsulated F1 and interferon-gamma were administered i.p. and the immune responses compared with those to singly encapsulated and free subunit antigen.
Abstract:
This research focused on the formation of particulate delivery systems for the sub-unit fusion protein, Ag85B-ESAT-6, a promising tuberculosis (TB) vaccine candidate. Initial work concentrated on formulating and characterising, both physico-chemically and immunologically, cationic liposomes based on the potent adjuvant dimethyl dioctadecyl ammonium (DDA). These studies demonstrated that addition of the immunomodulatory trehalose dibehenate (TDB) enhanced the physical stability of the system whilst also adding further adjuvanticity. Indeed, this formulation was effective in stimulating both a cell-mediated and humoral immune response. In order to investigate an alternative to the DDA-TDB system, microspheres based on poly(DL-lactide-co-glycolide) (PLGA) incorporating the adjuvants DDA and TDB, either alone or in combination, were first optimised in terms of physico-chemical characteristics, followed by immunological analysis. The formulation incorporating PLGA and DDA emerged as the lead candidate, with promising protection data against TB. Subsequent optimisation of the lead microsphere formulation investigated the effect of several variables involved in the formulation process on the physico-chemical and immunological characteristics of the particles produced. Further, freeze-drying studies were carried out with both sugar-based and amino acid-based cryoprotectants, in order to formulate a stable freeze-dried product. Finally, environmental scanning electron microscopy (ESEM) was investigated as a potential alternative to conventional SEM for the morphological investigation of microsphere formulations. Results revealed that the DDA-TDB liposome system proved to be the most immunologically efficient delivery vehicle studied, with high levels of antibody and cytokine production, particularly gamma-interferon (IFN-γ), considered the key cytokine marker for anti-mycobacterial immunity. Of the microsphere systems investigated, PLGA in combination with DDA showed the most promise, with an ability to initiate a broad spectrum of cytokine production, as well as antigen-specific spleen cell proliferation comparable to that of the DDA-TDB formulation.
Abstract:
Objectives: We explored the perceptions, views and experiences of diabetes education in people with type 2 diabetes who were participating in a UK randomized controlled trial of methods of education. The intervention arm of the trial was based on DESMOND, a structured programme of group education sessions aimed at enabling self-management of diabetes, while the standard arm was usual care from general practices. Methods: Individual semi-structured interviews were conducted with 36 adult patients, of whom 19 had attended DESMOND education sessions and 17 had been randomized to receive usual care. Data analysis was based on the constant comparative method. Results: Four principal orientations towards diabetes and its management were identified: 'resisters', 'identity resisters, consequence accepters', 'identity accepters, consequence resisters' and 'accepters'. Participants offered varying accounts of the degree of personal responsibility that needed to be assumed in response to the diagnosis. Preferences for different styles of education were also expressed, with many reporting that they enjoyed and benefited from group education, although some reported ambivalence or disappointment with their experiences of education. It was difficult to identify striking thematic differences between the accounts of people on different arms of the trial, although there was some very tentative evidence that those who attended DESMOND were more accepting of a changed identity and its implications for their management of diabetes. Discussion: No single approach to education is likely to suit all people newly diagnosed with diabetes, although structured group education may suit many. This paper identifies varying orientations and preferences of people with diabetes towards forms of both education and self-management, which should be taken into account when planning approaches to education.
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG encompasses both the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the Earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
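As a rough illustration of the nonlinear time-series techniques the thesis advocates, the sketch below applies a Takens-style delay embedding and a phase-randomised surrogate test to a single synthetic channel. The signal, embedding parameters and test statistic are illustrative assumptions, not the thesis's data or exact methods.

```python
# Sketch: delay embedding plus a phase-randomised surrogate test for
# nonlinearity on a single synthetic channel. Signal, parameters and
# statistic are illustrative stand-ins, not the thesis's methods.
import numpy as np

rng = np.random.default_rng(0)
n = 4000
t = np.arange(n)
# Stand-in "recording": a relaxation (sawtooth-like) oscillation, which
# is temporally asymmetric, plus measurement noise.
x = (0.013 * t) % 1.0 + 0.05 * rng.standard_normal(n)

def delay_embed(sig, dim=3, lag=5):
    """Delay embedding: rows are reconstructed state-space points."""
    m = len(sig) - (dim - 1) * lag
    return np.column_stack([sig[i * lag : i * lag + m] for i in range(dim)])

def phase_surrogate(sig, rng):
    """Same power spectrum, randomised phases: a 'linearised' twin."""
    f = np.fft.rfft(sig)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(f))
    phases[0] = 0.0
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=len(sig))

def asymmetry(sig):
    """Time-reversal asymmetry: near zero for linear Gaussian processes."""
    return np.mean((sig[1:] - sig[:-1]) ** 3)

orig = asymmetry(x)
surr = np.array([asymmetry(phase_surrogate(x, rng)) for _ in range(99)])
outside = orig < surr.min() or orig > surr.max()
print(f"statistic {orig:+.4f}, surrogates [{surr.min():+.4f}, {surr.max():+.4f}],"
      f" linear null rejected: {outside}")
states = delay_embed(x)   # trajectory for dynamical-systems analysis
print("embedded trajectory:", states.shape)
```

If the statistic of the recording falls outside the distribution obtained from its surrogates, a purely linear Gaussian description of the data can be rejected, which is the kind of evidence for nonlinearity that motivates the embedding-based analysis above.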
Abstract:
The occurrence of spalling is a major factor in determining the fire resistance of concrete constructions. The apparently random occurrence of spalling has limited the development and application of fire resistance modelling for concrete structures. This thesis describes an experimental investigation into the spalling of concrete on exposure to elevated temperatures. It has been shown that spalling may be categorised into four distinct types: aggregate spalling, corner spalling, surface spalling and explosive spalling. Aggregate spalling has been found to be a form of shear failure of aggregates local to the heated surface. The susceptibility of any particular concrete to aggregate spalling can be quantified from parameters which include the coefficients of thermal expansion of both the aggregate and the surrounding mortar, the size and thermal diffusivity of the aggregate, and the rate of heating. Corner spalling, which is particularly significant for the fire resistance of concrete columns, is a result of concrete losing its tensile strength at elevated temperatures. Surface spalling is the result of excessive pore pressures within heated concrete. An empirical model has been developed to allow quantification of the pore pressures, and a material failure model is proposed. The dominant parameters are the rate of heating, pore saturation and concrete permeability. Surface spalling may be alleviated by limiting pore pressure development, and a number of methods to this end have been evaluated. Explosive spalling involves the catastrophic failure of a concrete element and may be caused by either of two distinct mechanisms. In the first instance, excessive pore pressures can cause explosive spalling, although the effect is limited principally to unloaded or relatively small specimens. A second cause of explosive spalling is where the superimposition of thermally induced stresses on applied load stresses exceeds the concrete's strength.
Abstract:
Service-based systems that are dynamically composed at run time to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimisation of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analysed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability- and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.
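As a rough sketch of the kind of decision QoSMOS automates, the code below exhaustively selects the cheapest service configuration whose predicted reliability meets a requirement. The services, numbers and serial-workflow reliability model are hypothetical; the actual framework analyses probabilistic temporal logic formulae with a model checker rather than the closed-form product used here.

```python
# Sketch: choose a service configuration meeting a QoS requirement.
# Services, reliabilities/costs and the serial-workflow model are
# hypothetical; QoSMOS derives such decisions by model-checking
# probabilistic temporal logic formulae instead.
from itertools import product

# Candidate concrete services per abstract operation: (reliability, cost).
candidates = {
    "triage":   [(0.995, 5.0), (0.980, 2.0)],
    "analysis": [(0.990, 8.0), (0.970, 3.0)],
    "alarm":    [(0.999, 4.0), (0.985, 1.5)],
}

REQUIRED_RELIABILITY = 0.96   # informally: "P >= 0.96 [ F workflow_done ]"

def workflow_reliability(choice):
    # Serial workflow: every operation must succeed.
    r = 1.0
    for rel, _ in choice:
        r *= rel
    return r

best = None
for choice in product(*candidates.values()):
    rel = workflow_reliability(choice)
    cost = sum(c for _, c in choice)
    if rel >= REQUIRED_RELIABILITY and (best is None or cost < best[1]):
        best = (choice, cost, rel)

if best:
    print(f"cheapest compliant configuration: cost={best[1]}, reliability={best[2]:.4f}")
else:
    print("no configuration satisfies the requirement; adaptation must relax it")
```

Re-running such a selection whenever monitored reliabilities or workloads change is, in miniature, the self-adaptation loop the framework formalises.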
Abstract:
This thesis describes an investigation of methods by which both repetitive and non-repetitive electrical transients in an HVDC converter station may be controlled for minimum overall cost. Several methods of inrush control are proposed and studied. The preferred method, whose development is reported in this thesis, would utilise two magnetic materials, one of which is assumed to be lossless while the other has controlled eddy-current losses. Mathematical studies are performed to assess the optimum characteristics of these materials, such that inrush current is suitably controlled for a minimum saturation flux requirement. Subsequent evaluation of the cost of hardware and capitalised losses of the proposed inrush control indicates that a cost reduction of approximately 50% is achieved in comparison with the inrush control hardware of the Sellindge converter station. Further mathematical studies are carried out to prove the adequacy of the proposed inrush control characteristics for controlling voltage and current transients during both repetitive and non-repetitive operating conditions. The results of these proving studies indicate that no change in the proposed characteristics is required to ensure that the integrity of the thyristors is maintained.
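The following toy simulation, which is not the model developed in the thesis, illustrates the underlying principle: a series reactor whose inductance collapses once its saturation flux is absorbed delays and softens the build-up of inrush current. All component values are arbitrary assumptions.

```python
# Toy model (not the thesis's design): energising an R-L branch through
# a saturable reactor. The reactor keeps inductance high until its
# saturation flux is absorbed, delaying and softening the inrush.
import numpy as np

R = 0.5           # ohm, circuit resistance (illustrative)
L_UNSAT = 0.5     # H, reactor inductance below saturation
L_SAT = 0.005     # H, residual inductance once saturated
PHI_SAT = 0.8     # Wb, saturation flux: the key design quantity
V_PEAK, OMEGA = 325.0, 2 * np.pi * 50

dt, steps = 1e-5, 4000               # simulate two 50 Hz cycles
i = phi = 0.0
i_max, t_sat = 0.0, None
for k in range(steps):
    v = V_PEAK * np.sin(OMEGA * k * dt)   # switch-in at a voltage zero
    saturated = abs(phi) >= PHI_SAT
    if saturated and t_sat is None:
        t_sat = k * dt
    L = L_SAT if saturated else L_UNSAT
    di = (v - R * i) / L * dt             # explicit Euler step
    phi += L * di                         # volt-seconds absorbed by reactor
    i += di
    i_max = max(i_max, abs(i))
print(f"core saturates after {1e3 * t_sat:.1f} ms; peak current {i_max:.0f} A")
```

Raising PHI_SAT postpones saturation and lowers the peak, which is why the trade-off between saturation flux and material cost is central to the optimisation described above.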
Abstract:
The present scarcity of operational knowledge-based systems (KBS) has been attributed, in part, to an inadequate consideration shown to user interface design during development. From a human factors perspective the problem has stemmed from an overall lack of user-centred design principles. Consequently the integration of human factors principles and techniques is seen as a necessary and important precursor to ensuring the implementation of KBS which are useful to, and usable by, the end-users for whom they are intended. Focussing upon KBS work taking place within commercial and industrial environments, this research set out to assess both the extent to which human factors support was presently being utilised within development, and the future path for human factors integration. The assessment consisted of interviews conducted with a number of commercial and industrial organisations involved in KBS development, and a set of three detailed case studies of individual KBS projects. Two of the studies were carried out within a collaborative Alvey project, involving the Interdisciplinary Higher Degrees Scheme (IHD) at the University of Aston in Birmingham, BIS Applied Systems Ltd (BIS), and the British Steel Corporation. This project, which had provided the initial basis and funding for the research, was concerned with the application of KBS to the design of commercial data processing (DP) systems. The third study stemmed from involvement in a KBS project being carried out by the Technology Division of the Trustee Savings Bank Group plc. The preliminary research highlighted poor human factors integration. In particular, there was a lack of early consideration of end-user requirements definition and user-centred evaluation. Instead, concentration was given to the construction of the knowledge base and prototype evaluation with the expert(s). In response to this identified problem, a set of methods was developed, aimed at encouraging developers to consider user interface requirements early on in a project. These methods were then applied in the two further projects, and their uptake within the overall development process was monitored. Experience from the two studies demonstrated that early consideration of user interface requirements was both feasible and instructive for guiding future development work. In particular, it was shown that a user interface prototype could be used as a basis for capturing requirements at the functional (task) level and at the interface dialogue level. Extrapolating from this experience, a KBS life-cycle model is proposed which incorporates user interface design (and within that, user evaluation) as a largely parallel, rather than subsequent, activity to knowledge base construction. Further to this, there is a discussion of several key elements which can be seen as inhibiting the integration of human factors within KBS development. These elements stem from characteristics of present KBS development practice; from constraints within the commercial and industrial development environments; and from the state of existing human factors support.
Abstract:
Drying is an important unit operation in the process industries. Published results suggest that the share of total energy use accounted for by drying increased from 12% in 1978 to 18% in 1990. A literature survey of previous studies of overall drying energy consumption demonstrated that there is little continuity of methods, so energy trends could not be established. In the ceramics, timber and paper industrial sectors, specific energy consumption and energy trends have been investigated by auditing drying equipment. Ceramic products examined included tableware, tiles, sanitaryware, electrical ceramics, plasterboard, refractories, bricks and abrasives. Data from industry have shown that drying energy has not varied significantly in the ceramics sector over the last decade, representing about 31% of the total energy consumed. Information from the timber industry has established that radical changes have occurred over the last 20 years, both in terms of equipment and energy utilisation. The energy efficiency of hardwood drying has improved by 15% since the 1970s, although no significant savings have been realised for softwood. A survey estimating the energy efficiency and operating characteristics of 192 paper dryer sections has been conducted. Drying energy was found to have increased to nearly 60% of the total energy used in the early 1980s, but has fallen over the last decade, representing 23% of the total in 1993. These results demonstrate that effective energy saving measures, such as improved pressing and heat recovery, have been successfully implemented since the 1970s. Artificial neural networks have been successfully applied to model the process characteristics of microwave and convective drying of paper-coated gypsum cove. Parameters modelled included product moisture loss, core gypsum temperature and quality factors relating to paper burning and bubbling defects. Evaluation of thermal and dielectric properties has highlighted gypsum's heat-sensitive characteristics in convective and electromagnetic regimes. Modelling of the experimental data showed that the networks were capable of simulating the drying process characteristics to a high degree of accuracy: product weight and temperature were predicted to within 0.5% and 5 °C of the target data respectively. Furthermore, it was demonstrated that the underlying properties of the data could be predicted despite a high level of input noise.
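As an illustration of the neural network modelling described, the sketch below fits a small feedforward network to synthetic drying data. The input variables, response surface and network size are assumed for demonstration and do not reproduce the thesis's dataset or topology.

```python
# Sketch: a small feedforward network of the kind used to model drying
# behaviour. Inputs and targets are synthetic stand-ins for process
# variables such as microwave power, air temperature and drying time.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
power = rng.uniform(0, 1, n)      # normalised microwave power (assumed)
air_t = rng.uniform(40, 90, n)    # convective air temperature, degC (assumed)
time_ = rng.uniform(1, 30, n)     # drying time, min (assumed)
X = np.column_stack([power, air_t, time_])
# Hypothetical response: moisture loss grows with all three, saturating.
y = 100 * (1 - np.exp(-0.03 * time_ * (0.5 + power) * (air_t / 60)))
y += rng.normal(0, 1.0, n)        # measurement noise

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(scaler.transform(X), y)
pred = net.predict(scaler.transform(X))
print(f"RMS error: {np.sqrt(np.mean((pred - y) ** 2)):.2f} % moisture loss")
```

Adding noise to the inputs before training is one simple way to probe the robustness to input noise reported above.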
Abstract:
This paper provides an updated review of fast pyrolysis of biomass for the production of a liquid usually referred to as bio-oil. The technology of fast pyrolysis is described, including the major reaction systems. The primary liquid product is characterised by reference to the many properties that impact on its use. These properties have prompted increasingly extensive research into modifying those that limit the liquid's applications, and this area is reviewed in terms of physical, catalytic and chemical upgrading. Of particular note is the increasing diversity of methods and catalysts, and particularly the complexity and sophistication of multi-functional catalyst systems. Also notable is the growing number of companies involved in this technology area and the increased take-up of evolving upgrading processes.
Abstract:
Rhizocarpon geographicum (L.) DC. is one of the most widely distributed species of crustose lichens. This unusual organism comprises yellow-green ‘areolae’, which contain the algal symbiont and develop and grow on the surface of a non-lichenized fungal ‘hypothallus’ that extends beyond the margin of the areolae to form a marginal ring. This species grows exceptionally slowly, with annual radial growth rates (RGR) as low as 0.07 mm yr⁻¹, and its considerable longevity has been exploited by geologists in the development of methods for dating the age of exposure of rock surfaces and glacial moraines (‘lichenometry’). Recent research has established some aspects of the basic biology of this important and interesting organism. This chapter describes the general structure of R. geographicum, how the areolae and hypothallus develop, why the lichen grows so slowly, the growth rate-size curve, and some aspects of the ecology of R. geographicum, including whether the lichen can inhibit the growth of its neighbours by chemical means (‘allelopathy’). Finally, the importance of R. geographicum in direct and indirect lichenometry is reviewed.
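The arithmetic behind direct lichenometry can be illustrated with the growth rate quoted above; the thallus diameter below is hypothetical, and real dating relies on calibrated, nonlinear growth curves rather than a constant rate.

```python
# Illustration of the logic of direct lichenometry: a constant radial
# growth rate converts the largest thallus radius into a minimum
# exposure age. Real studies use calibrated growth-rate curves.
RGR_MM_PER_YR = 0.07        # radial growth rate cited for R. geographicum
largest_diameter_mm = 28.0  # hypothetical largest thallus on a surface

age_yr = (largest_diameter_mm / 2) / RGR_MM_PER_YR
print(f"minimum exposure age ~ {age_yr:.0f} years")   # ~200 years
```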
Abstract:
Inference and optimisation of real-valued edge variables in sparse graphs are studied using tree-based Bethe approximation optimisation algorithms. Equilibrium states of general energy functions involving a large set of real edge variables that interact at the network nodes are obtained for networks in various cases. These include different cost functions, connectivity values, constraints on the edge bandwidth, and the case of multiclass optimisation.
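A minimal sketch of the message-passing idea is given below: exact min-sum optimisation of discretised real edge variables on a small tree, with quadratic node costs. The network, capacities and cost functions are illustrative assumptions; the paper addresses general cost functions on large sparse graphs, where the tree-based Bethe approximation makes such updates tractable.

```python
# Sketch: exact min-sum message passing for real edge variables on a
# small tree, with the variables discretised onto a grid. Network,
# capacities and costs are illustrative assumptions only.
import itertools
import numpy as np

neighbours = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
capacity = {0: 2.0, 1: -1.0, 2: -0.5, 3: -0.5}   # node resources, sum to 0
grid = np.linspace(-2.5, 2.5, 21)                # candidate edge values
flip = lambda idx: len(grid) - 1 - idx           # index of -grid[idx]

def node_cost(i, outflows):
    return (sum(outflows) - capacity[i]) ** 2    # resource mismatch penalty

def edge_cost(x):
    return 0.1 * x * x                           # transport cost per edge

# msg[(i, j)] is the message from node-factor i to edge variable (i, j),
# indexed by the flow from i to j.
msg = {(i, j): np.zeros(len(grid)) for i in neighbours for j in neighbours[i]}
for _ in range(4):                               # enough sweeps for this tree
    new = {}
    for i in neighbours:
        for j in neighbours[i]:
            others = [k for k in neighbours[i] if k != j]
            best = np.full(len(grid), np.inf)
            for a, x_ij in enumerate(grid):
                for combo in itertools.product(range(len(grid)), repeat=len(others)):
                    flows = [x_ij] + [grid[c] for c in combo]
                    val = node_cost(i, flows) + sum(
                        edge_cost(grid[c]) + msg[(k, i)][flip(c)]
                        for k, c in zip(others, combo))
                    if val < best[a]:
                        best[a] = val
            new[(i, j)] = best
    msg = new

for i, j in [(0, 1), (1, 2), (1, 3)]:            # edge beliefs give the optima
    belief = edge_cost(grid) + msg[(i, j)] + msg[(j, i)][::-1]
    k = int(np.argmin(belief))
    print(f"edge {i}->{j}: optimal flow {grid[k]:+.2f}, energy {belief[k]:.3f}")
```

On a tree this message passing is exact; the Bethe approximation extends the same local updates to sparse graphs with loops.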
Abstract:
Renewable alternatives such as biofuels, together with optimisation of engine operating parameters, can enhance engine performance and reduce emissions. The temperature of the engine coolant is known to have a significant influence on engine performance and emissions. Whereas much existing literature describes the effects of coolant temperature in engines using fossil-derived fuels, very few studies have investigated these effects when biofuel is used as an alternative fuel. Jatropha oil is a non-edible biofuel which can substitute for fossil diesel in compression ignition (CI) engines. However, due to the high viscosity of Jatropha oil, techniques such as transesterification, preheating the oil, or blending with other fuels are recommended for improved combustion and reduced emissions. In this study, Jatropha oil was blended separately with ethanol and with butanol, at ratios of 80:20 and 70:30. The fuel properties of all four blends were measured and compared with those of diesel and Jatropha oil. It was found that the 80% Jatropha oil + 20% butanol blend was the most suitable alternative, as its properties were closest to those of diesel. A two-cylinder Yanmar engine was used, and the cooling water temperature was varied between 50°C and 95°C. In general, it was found that when the temperature of the cooling water was increased, the combustion process was enhanced for both diesel and the Jatropha-butanol blend. The CO2 emissions for both diesel and the biofuel blend were observed to increase with temperature; correspondingly, CO, O2 and lambda values decreased as the cooling water temperature increased. When the engine was operated on diesel, NOx emissions varied inversely with smoke opacity; however, when the biofuel blend was used, NOx emissions and smoke opacity varied in the same manner. The brake thermal efficiencies were found to increase slightly as the temperature was increased. In contrast, for all fuels, the volumetric efficiency was observed to decrease as the coolant temperature was increased. Brake specific fuel consumption was observed to decrease as the temperature was increased, and was higher on average when the biofuel was used in comparison to diesel. The study concludes that the effects of engine coolant temperature on engine performance and emission characteristics differ between biofuel blend and fossil diesel operation. The coolant temperature needs to be optimised for the particular biofuel to achieve optimum engine performance and reduced emissions.
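For reference, the two performance metrics compared above, brake specific fuel consumption and brake thermal efficiency, are computed below from assumed round-number operating figures rather than the study's measurements.

```python
# Worked example of the performance metrics discussed above. The fuel
# flow, brake power and heating value are assumed round numbers, not
# measurements from the study.
fuel_flow_kg_per_h = 1.1   # assumed fuel consumption
brake_power_kw = 4.0       # assumed engine brake power
lhv_mj_per_kg = 37.0       # assumed lower heating value of the blend

bsfc = fuel_flow_kg_per_h * 1000 / brake_power_kw                 # g/kWh
fuel_power_kw = fuel_flow_kg_per_h / 3600 * lhv_mj_per_kg * 1000  # kW
bte = brake_power_kw / fuel_power_kw
print(f"BSFC = {bsfc:.0f} g/kWh, brake thermal efficiency = {bte:.1%}")
```

A lower heating value for the blend than for diesel raises BSFC at the same power output, which is consistent with the higher average BSFC reported for the biofuel.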
Abstract:
This thesis presents detailed research on diamond materials. Chapter 1 is an overall introduction to the thesis. In Chapter 2, the literature on the physical, chemical, optical, mechanical and other properties of diamond materials is reviewed. Following this, several advanced diamond growth and characterisation techniques used in the experimental work are introduced. The successful installation and application of a chemical vapour deposition system is then demonstrated in Chapter 4. Diamond growth on a variety of different substrates has been investigated, including silicon, diamond-like carbon and silica fibres. In Chapter 5, single crystalline diamond was used as the substrate for femtosecond laser inscription. The results proved the potential feasibility of this technique, which could be utilised in fabricating future biochemistry microfluidic channels on diamond substrates. In Chapter 6, hydrogen-terminated nanodiamond powder was studied using impedance spectroscopy; its intrinsic electrical properties and thermal stability are presented and analysed in detail. As the first PhD student within the Nanoscience Research Group at Aston, I initially focused on the installation and testing of the microwave plasma enhanced chemical vapour deposition (MPECVD) system, which will be beneficial to all future researchers in the group. The fundamentals of the MPECVD system are introduced in detail. After optimisation of the growth parameters, uniform diamond deposition was achieved with good surface coverage. Furthermore, one of the most significant contributions of this work is the successful pattern inscription on diamond substrates by a femtosecond laser system. Previous research on femtosecond laser inscription of diamond produced simple lines or dots, with few characterisation techniques applied. In this work, the femtosecond laser has been successfully used to inscribe patterns on diamond substrates, and full characterisation by SEM, Raman spectroscopy, XPS and AFM has been carried out. After the femtosecond laser inscription, the depth of the microfluidic channels on the diamond film was found to be 300-400 nm, with a graphitic layer thickness of 165-190 nm. Another important outcome of this work is the first characterisation of the electrical properties of hydrogen-terminated nanodiamond by impedance spectroscopy. Based on the experimental evaluation and mathematical fitting, the resistance of hydrogen-terminated nanodiamond was reduced to 0.25 MΩ, four orders of magnitude lower than that of untreated nanodiamond. A theoretical equivalent circuit has been proposed to fit the results. Furthermore, the hydrogen-terminated nanodiamond samples were annealed at different temperatures to study their thermal stability. The XPS and FTIR results indicate that hydrogen-terminated nanodiamond starts to oxidise above 100 °C and that the C-H bonds survive up to 400 °C. This work reports the fundamental electrical properties of hydrogen-terminated nanodiamond, which can inform future applications in the physical and chemical sciences.
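A minimal sketch of the equivalent-circuit fitting step is shown below: a parallel-RC model fitted to an impedance spectrum by nonlinear least squares. The circuit topology, component values and synthetic data are placeholders, not the thesis's measurements or its proposed circuit.

```python
# Sketch: fitting a parallel-RC equivalent circuit to an impedance
# spectrum, as in equivalent-circuit analysis of nanodiamond. Data
# and component values below are placeholders.
import numpy as np
from scipy.optimize import least_squares

def z_parallel_rc(f, R, C):
    w = 2 * np.pi * f
    return R / (1 + 1j * w * R * C)

# Synthetic "measured" spectrum: R = 0.25 MOhm, C = 50 pF, plus noise.
rng = np.random.default_rng(2)
freq = np.logspace(2, 7, 60)                      # 100 Hz .. 10 MHz
z_true = z_parallel_rc(freq, 0.25e6, 50e-12)
z_meas = z_true * (1 + 0.02 * rng.standard_normal(len(freq)))

def residuals(p):
    # Fit log10 parameters so R and C are on comparable scales.
    R, C = 10 ** p[0], 10 ** p[1]
    z = z_parallel_rc(freq, R, C)
    return np.concatenate([z.real - z_meas.real, z.imag - z_meas.imag])

fit = least_squares(residuals, x0=[5.0, -10.0])   # guesses: 1e5 ohm, 1e-10 F
R_fit, C_fit = 10 ** fit.x
print(f"fitted R = {R_fit / 1e6:.3f} MOhm, C = {C_fit * 1e12:.1f} pF")
```

In practice the fitted resistance before and after hydrogen termination would be compared, mirroring the four-orders-of-magnitude drop reported above.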
Abstract:
This paper develops a structured method, from the perspective of value, to organise and optimise the business processes of a product servitised supply chain (PSSC). The method integrates the e3value modelling tool with associated value measurement, evaluation and analysis techniques. It enables visualisation, modelling and optimisation of the business processes of a PSSC, while also enhancing value co-creation and the potential contribution to an organisation's profitability. The findings not only assist organisations attempting to adopt servitisation, by helping them avert the servitisation paradox, but also help an already servitised organisation to identify its key business processes and clarify their influence on supply chain operations.