19 results for Minimisation

in Aston University Research Archive


Relevance: 20.00%

Abstract:

This paper contributes a new methodology called Waste And Source-matter ANalyses (WASAN) which supports a group in building agreeable actions for safely minimising avoidable waste. WASAN integrates influences from the Operational Research (OR) methodologies/philosophies of Problem Structuring Methods, Systems Thinking, simulation modelling and sensitivity analysis as well as industry approaches of Waste Management Hierarchy, Hazard Operability (HAZOP) Studies and As Low As Reasonably Practicable (ALARP). The paper shows how these influences are compiled into facilitative structures that support managers in developing recommendations on how to reduce avoidable waste production. WASAN is being designed as Health and Safety Executive Guidance on what constitutes good decision making practice for the companies that manage nuclear sites. In this paper we report and reflect on its use in two soft OR/problem structuring workshops conducted on radioactive waste in the nuclear industry. Crown Copyright © 2010.

Relevance: 10.00%

Abstract:

The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of radar backscatter generated by small ripples on the ocean surface induced by instantaneous local winds. Operational methods that extract wind vectors from satellite scatterometer data are based on the local inversion of a forward model, mapping wind vectors to scatterometer observations, by the minimisation of a cost function in the scatterometer measurement space.

This report uses mixture density networks, a principled method for modelling conditional probability density functions, to model the joint probability distribution of the wind vectors given the satellite scatterometer measurements in a single cell (the 'inverse' problem). The complexity of the mapping and the structure of the conditional probability density function are investigated by varying the number of units in the hidden layer of the multi-layer perceptron and the number of kernels in the Gaussian mixture model of the mixture density network respectively. The optimal model for networks trained per trace has twenty hidden units and four kernels. Further investigation shows that models trained with incidence angle as an input give results comparable to those of models trained per trace. A hybrid mixture density network that incorporates geophysical knowledge of the problem confirms other results that the conditional probability distribution is dominantly bimodal.

The wind retrieval results improve on previous work at Aston, but do not match other neural network techniques that use spatial information in the inputs, which is to be expected given the ambiguity of the inverse problem. Current work uses the local inverse model for autonomous ambiguity removal in a principled Bayesian framework. Future directions in which these models may be improved are given.
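The conditional density model described in this abstract can be illustrated with a small sketch. This is not the report's code: the function name and the raw-output parametrisation (softmax for mixing coefficients, exponential for widths) are illustrative assumptions, shown here for a single scalar target.

```python
import numpy as np

def mdn_density(z_pi, z_mu, z_sigma, t):
    """Evaluate the conditional density p(t|x) of a mixture density
    network from raw network outputs for one input cell.

    z_pi    : raw mixing-coefficient outputs (softmax applied here)
    z_mu    : kernel centres
    z_sigma : raw width outputs (exponential keeps widths positive)
    t       : target value (e.g. one wind-vector component)
    """
    pi = np.exp(z_pi - z_pi.max())
    pi /= pi.sum()                      # mixing coefficients sum to 1
    sigma = np.exp(z_sigma)             # positive kernel widths
    norm = 1.0 / (np.sqrt(2 * np.pi) * sigma)
    phi = norm * np.exp(-0.5 * ((t - z_mu) / sigma) ** 2)
    return float(np.dot(pi, phi))       # Gaussian mixture density
```

With a single kernel the density reduces to a unit Gaussian; with several kernels it can represent the bimodal conditional distributions the report describes.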

Relevance: 10.00%

Abstract:

In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation uses a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution.
The resulting sparse learning algorithm is generic: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance both on classical regression and classification tasks and on data assimilation and a simple density estimation problem.
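Since the sparsity above rests on a KL-divergence between GPs, its finite-dimensional analogue is easy to sketch. Illustrative only, not the thesis derivation: evaluated at a fixed set of training inputs, two GP posteriors reduce to multivariate Gaussians, whose KL divergence has the closed form used below.

```python
import numpy as np

def gauss_kl(mu0, cov0, mu1, cov1):
    """KL divergence KL(N0 || N1) between two multivariate Gaussians,
    the finite-dimensional analogue of the KL divergence between two
    GP posteriors restricted to the training inputs."""
    k = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0)        # covariance mismatch
                  + diff @ inv1 @ diff         # mean mismatch
                  - k                          # dimensionality
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))
```

The divergence is zero exactly when the two Gaussians coincide, which is what makes it usable as a minimisation target when constraining the GP to a small BV set.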

Relevance: 10.00%

Abstract:

Purpose – The international nuclear community continues to face the challenge of managing both the legacy waste and the new wastes that emerge from ongoing energy production. The UK is in the early stages of proposing a new convention for its nuclear industry, that is, waste minimisation through closely managing the radioactive source which creates the waste. This paper proposes a new technique, called waste and source material operability study (WASOP), to qualitatively analyse a complex, waste-producing system in order to minimise avoidable waste and thus increase the protection of the public and the environment. Design/methodology/approach – WASOP critically considers the systemic impact of upstream and downstream facilities on the minimisation of nuclear waste in a facility. Based on the principles of HAZOP, the technique structures managers' thinking on the impact of mal-operations in interlinking facilities in order to identify preventative actions to reduce the impact of those mal-operations on waste production. Findings – WASOP was tested with a small group of experienced nuclear regulators and was found to support their qualitative examination of waste minimisation and help them work towards developing a plan of action. Originality/value – Given the newness of this convention, the wider methodology in which WASOP sits is still in development. However, this paper communicates the latest thinking from nuclear regulators on decision-making methodology for supporting waste minimisation and is hoped to form part of future regulatory guidance. WASOP is believed to have widespread potential application to the minimisation of many other forms of waste, including that from other energy sectors and household/general waste.

Relevance: 10.00%

Abstract:

The primary objective of this research was to examine the concepts of the chemical modification of polymer blends by reactive processing, using interlinking agents (multi-functional activated vinyl compounds: trimethylolpropane triacrylate (TRIS) and divinylbenzene (DVB)) to target in-situ interpolymer formation between immiscible polymers in PS/EPDM blends via peroxide-initiated free radical reactions during melt mixing. From a comprehensive survey of previous studies of compatibility enhancement in polystyrene blends, it was recognised that reactive processing offers opportunities for technological success that have not yet been fully realised; learning from this study is expected to assist in the development and application of this potential. In an experimental-scale operation for the simultaneous melt blending and reactive processing of both polymers, involving manual injection of precise reactive agent/free radical initiator mixtures directly into molten polymer within an internal mixer, torque changes were distinct, quantifiable and rationalised by ongoing physical and chemical effects. EPDM content of PS/EPDM blends was the prime determinant of torque increases on addition of TRIS, itself liable to self-polymerisation at high additions, with little indication of PS reaction in initial reactively processed blends with TRIS, though blend compatibility, from visual assessment of morphology by SEM, was nevertheless improved. Suitable operating windows were defined for the optimisation of reactive blending, for use once routes to encourage PS reaction could be identified. The effectiveness of PS modification by reactive processing with interlinking agents was increased by the selection of process conditions to target specific reaction routes, assessed by spectroscopy (FT-IR and NMR) and thermal analysis (DSC) coupled with dichloromethane extraction and fractionation of PS.
Initiator concentration was crucial in balancing desired PS modification against interlinking agent self-polymerisation, most particularly with TRIS. Pre-addition of initiator to PS was beneficial in enhancing TRIS binding to PS and minimising modifier polymerisation, believed to arise from the direct formation of polystyryl radicals for addition to active unsaturation in TRIS. DVB was found to be a "compatible" modifier for PS, but its efficacy was not quantified. Application of routes for PS reaction in PS/EPDM blends was successful for in-situ formation of interpolymer (shown by sequential solvent extraction combined with FT-IR and DSC analysis); the predominant outcome depended on the degree of reaction of each component, with optimum "between-phase" interpolymer formed under conditions selected for equalisation of differing component reactivities and avoidance of competitive processes. This was achieved by combined addition of TRIS+DVB at optimum initiator concentrations with initiator pre-addition to PS. Improvements in blend compatibility (by tensile testing, SEM and thermal analysis) were shown in all cases with significant interpolymer formation, though physical benefits were not shown in all cases; morphology and other reactive effects were also important factors. Interpolymer from specific "between-phase" reaction of blend components and interlinking agent was vital for the realisation of positive performance on compatibilisation by the chemical modification of polymer blends by reactive processing.

Relevance: 10.00%

Abstract:

The subject of this thesis is the n-tuple network (RAMnet). The major advantage of RAMnets is their speed and the simplicity with which they can be implemented in parallel hardware. On the other hand, the method is not a universal approximator and the training procedure does not involve the minimisation of a cost function. Hence RAMnets are potentially sub-optimal. It is important to understand the source of this sub-optimality and to develop the analytical tools that allow us to quantify the generalisation cost of using this model for any given data. We view RAMnets as classifiers and function approximators and try to determine how critical their lack of universality and optimality is. In order to understand better the inherent restrictions of the model, we review RAMnets, showing their relationship to a number of well-established general models such as Associative Memories, Kanerva's Sparse Distributed Memory, Radial Basis Functions, General Regression Networks and Bayesian Classifiers. We then benchmark the binary RAMnet model against 23 other algorithms using real-world data from the StatLog Project. This large-scale experimental study indicates that RAMnets are often capable of delivering results which are competitive with those obtained by more sophisticated, computationally expensive models. The Frequency Weighted version is also benchmarked and shown to perform worse than the binary RAMnet for large values of the tuple size n. We demonstrate that the main issue in Frequency Weighted RAMnets is adequate probability estimation and propose Good-Turing estimates in place of the more commonly used Maximum Likelihood estimates. Having established the viability of the method numerically, we focus on providing an analytical framework that allows us to quantify the generalisation cost of RAMnets for a given dataset. For the classification network we provide a semi-quantitative argument which is based on the notion of tuple distance.
It gives a good indication of whether the network will fail for the given data. A rigorous Bayesian framework with Gaussian process prior assumptions is given for the regression n-tuple net. We show how to calculate the generalisation cost of this net and verify the results numerically for one dimensional noisy interpolation problems. We conclude that the n-tuple method of classification based on memorisation of random features can be a powerful alternative to slower cost driven models. The speed of the method is at the expense of its optimality. RAMnets will fail for certain datasets but the cases when they do so are relatively easy to determine with the analytical tools we provide.
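The binary n-tuple idea described above can be made concrete with a minimal sketch. This is illustrative, not the thesis implementation; the class and method names are assumptions. Each tuple samples a fixed set of bit positions from the input, the sampled bits address a RAM cell, training memorises addresses per class, and classification counts how many tuple RAMs recognise the input.

```python
import random

class RAMnet:
    """Minimal binary n-tuple (RAM) classifier sketch."""

    def __init__(self, input_bits, n, n_tuples, classes, seed=0):
        rng = random.Random(seed)
        # Each tuple is a random sample of n bit positions of the input.
        self.tuples = [rng.sample(range(input_bits), n)
                       for _ in range(n_tuples)]
        # One RAM (here a set of seen addresses) per tuple, per class.
        self.ram = {c: [set() for _ in range(n_tuples)] for c in classes}

    def _addresses(self, x):
        # Pack each tuple's sampled bits into an integer RAM address.
        for positions in self.tuples:
            addr = 0
            for p in positions:
                addr = (addr << 1) | x[p]
            yield addr

    def train(self, x, label):
        # Memorisation only: no cost function is minimised.
        for i, addr in enumerate(self._addresses(x)):
            self.ram[label][i].add(addr)

    def classify(self, x):
        addrs = list(self._addresses(x))
        scores = {c: sum(a in self.ram[c][i] for i, a in enumerate(addrs))
                  for c in self.ram}
        return max(scores, key=scores.get)
```

Training and classification are both single passes over the tuples, which is the source of the speed the thesis highlights, and of the sub-optimality it analyses.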

Relevance: 10.00%

Abstract:

This thesis seeks to describe the development of an inexpensive and efficient clustering technique for multivariate data analysis. The technique starts from a multivariate data matrix and ends with a graphical representation of the data and a pattern recognition discriminant function. The technique also produces a distances frequency distribution that might be useful in detecting clustering in the data or in estimating parameters useful in discriminating between the different populations in the data. The technique can also be used in feature selection. The technique is essentially for the discovery of data structure by revealing the component parts of the data. The thesis offers three distinct contributions to cluster analysis and pattern recognition techniques. The first contribution is the introduction of a transformation function into the technique of nonlinear mapping. The second contribution is the use of a distances frequency distribution instead of a distances time-sequence in nonlinear mapping. The third contribution is the formulation of a new generalised and normalised error function together with its optimal step size formula for gradient method minimisation. The thesis consists of five chapters. The first chapter is the introduction. The second chapter describes multidimensional scaling as an origin of the nonlinear mapping technique. The third chapter describes the first development step in the technique of nonlinear mapping, namely the introduction of the "transformation function". The fourth chapter describes the second development step of the nonlinear mapping technique: the use of a distances frequency distribution instead of a distances time-sequence. The chapter also includes the formulation of the new generalised and normalised error function. Finally, the fifth chapter, the conclusion, evaluates all developments and proposes a new program for cluster analysis and pattern recognition by integrating all the new features.
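The gradient-minimised nonlinear mapping this abstract builds on can be sketched as follows. This is a simplified Sammon-style version with a fixed step size, not the thesis's generalised, normalised error function or its optimal step size formula; the function names are illustrative.

```python
import numpy as np

def nonlinear_map(X, dim=2, iters=500, step=0.05, seed=0):
    """Place points in `dim` dimensions so that mapped inter-point
    distances approximate the original ones, by plain gradient steps
    on a sum-of-squared-distance-errors stress."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # target distances
    rng = np.random.default_rng(seed)
    Y = rng.normal(size=(n, dim))                        # random start
    for _ in range(iters):
        d = np.linalg.norm(Y[:, None] - Y[None, :], axis=2)
        np.fill_diagonal(d, 1.0)          # avoid division by zero
        W = (d - D) / d                   # per-pair error weight
        np.fill_diagonal(W, 0.0)
        grad = (W[:, :, None] * (Y[:, None] - Y[None, :])).sum(axis=1)
        Y -= step * grad / n              # fixed-step gradient descent
    return Y

def stress(X, Y):
    """Normalised residual between original and mapped distances."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    d = np.linalg.norm(Y[:, None] - Y[None, :], axis=2)
    mask = ~np.eye(len(X), dtype=bool)
    return float(((D - d)[mask] ** 2).sum() / (D[mask] ** 2).sum())
```

The fixed step size is exactly the weak point the thesis addresses with its optimal step size formula.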

Relevance: 10.00%

Abstract:

The chromosomal ß-lactamase of Pseudomonas aeruginosa SAlconst (a derepressed laboratory strain) was isolated and purified. Two peaks of activity were observed on gel permeation chromatography (one major peak of mol. wt. 45 kD and one minor peak of 54 kD). Preparations from 12 clinical derepressed strains showed identical results. Chromosomal ß-lactamase production in both normal and derepressed P. aeruginosa strains was induced both by iron-restricted growth conditions and by penicillin G. The majority of the enzyme (80-90%) was found in the periplasm and cytoplasm, but a significant amount (2-20%) was associated with the outer membrane (OM). The growth conditions did not affect the distribution of the enzyme between subcellular fractions, although higher activity was found in cells grown under iron limitation and/or in the presence of ß-lactams. The penicillanate sulphone inhibitor, tazobactam, displayed irreversible kinetics whilst cloxacillin, cefotaxime, ampicillin and penicillin G were all competitive inhibitors of the enzyme. Similar results were obtained for the Enterobacter cloacae P99 ß-lactamase, but tazobactam displayed a non-classical kinetic pattern for the Staphylococcus aureus PC1 ß-lactamase. The residues involved in ß-lactam hydrolysis by the P. aeruginosa SAlconst enzyme were determined by affinity labelling with tazobactam. A tryptic digestion fragment of the inhibited enzyme contained the amino acids D, T, S, E, P, G, A, C, V, M, I, Y, F, H, K, R. This suggests the involvement of the conserved SVSK, DAE and KTG motifs found in all penicillin-sensitive proteins. A model of the 3-D structure of the active site of the P. aeruginosa SAlconst chromosomal ß-lactamase was constructed from the published amino acid sequence of the P. aeruginosa chromosomal ß-lactamase and the α-carbon coordinates of the S. aureus PC1 ß-lactamase by homology modelling and energy minimisation. The crystal structure of tazobactam was determined and energy minimised.
Computer graphics docking identified Ser 72 as a possible residue involved in a secondary attack on the C5 position of tazobactam after initial ß-lactam hydrolysis by serine 70. The enhanced activity of tazobactam over sulbactam might be explained by the triazole substituent which might participate in favourable hydrogen bonding between N3 and active site residues.

Relevance: 10.00%

Abstract:

A technique is presented for the development of a high-precision and high-resolution Mean Sea Surface (MSS) model. The model utilises radar altimetric sea surface heights extracted from the geodetic phase of the ESA ERS-1 mission. The methodology uses a modified Le Traon et al. (1995) cubic-spline fit of dual ERS-1 and TOPEX/Poseidon crossovers for the minimisation of radial orbit error. The procedure then uses Fourier-domain processing techniques for spectral optimal interpolation of the mean sea surface in order to reduce residual errors within the model. Additionally, a multi-satellite mean sea surface integration technique is investigated to supplement the first model with additional enhanced data from the GEOSAT geodetic mission. The methodology employs a novel technique that combines the Stokes and Vening Meinesz transformations, again in the spectral domain. This allows the presentation of a new enhanced GEOSAT gravity anomaly field.

Relevance: 10.00%

Abstract:

The thesis deals with the background, development and description of a mathematical stock control methodology for use within an oil and chemical blending company, where demand and replenishment lead-times are generally non-stationary. The stock control model proper relies on, as input, adaptive forecasts of demand determined for an economical forecast/replenishment period precalculated on an individual stock-item basis. The control procedure is principally that of the continuous review, reorder level type, where the reorder level and reorder quantity 'float', that is, each changes in accordance with changes in demand. Two versions of the Methodology are presented: a cost minimisation version and a service level version. Recognising the importance of demand forecasts, four recognised variations of the Trigg and Leach adaptive forecasting routine are examined. A fifth variation, developed here, is proposed as part of the stock control methodology. The results of testing the cost minimisation version of the Methodology with historical data, by means of a computerised simulation, are presented together with a description of the simulation used. The performance of the Methodology is in addition compared favourably to a rule-of-thumb approach considered by the Company as an interim solution for reducing stock levels. The contribution of the work to the field of scientific stock control is felt to be significant for the following reasons: (1) the Methodology is designed specifically for use with non-stationary demand and for this reason alone appears to be unique; (2) the Methodology is unique in its approach and the cost minimisation version is shown to work successfully with the demand data presented; (3) the Methodology and the thesis as a whole fill an important gap between complex mathematical stock control theory and practical application.
A brief description of a computerised order processing/stock monitoring system, designed and implemented as a pre-requisite for the Methodology's practical operation, is presented as an appendix.
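The Trigg and Leach adaptive forecasting routine on which the Methodology builds can be sketched as follows. This is a textbook form, not the Company's variant or the thesis's fifth variation; the function and parameter names are assumptions. The smoothing constant is reset each period to the absolute tracking signal (smoothed error over smoothed absolute error), so the forecast adapts quickly when demand is non-stationary.

```python
def trigg_leach(demand, gamma=0.2):
    """Adaptive exponential smoothing: alpha follows the tracking
    signal, so a sustained shift in demand drives alpha towards 1
    and the forecast towards the new level."""
    forecast = demand[0]
    e_bar = 0.0        # smoothed error
    a_bar = 1e-9       # smoothed absolute error (tiny to avoid 0/0)
    out = []
    for d in demand:
        err = d - forecast
        e_bar = gamma * err + (1 - gamma) * e_bar
        a_bar = gamma * abs(err) + (1 - gamma) * a_bar
        alpha = abs(e_bar) / a_bar          # tracking signal in [0, 1]
        forecast = forecast + alpha * err   # adaptive smoothing step
        out.append(forecast)
    return out
```

Under stable demand the errors cancel, the tracking signal stays near zero and the forecast barely moves; after a step change the errors all share a sign, alpha jumps towards 1 and the forecast relocks, which is what makes the routine suitable input for a 'floating' reorder level.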

Relevance: 10.00%

Abstract:

Human leukocyte antigen (HLA)-DM is a critical participant in antigen presentation that catalyzes the dissociation of the Class II-associated Invariant chain-derived Peptide (CLIP) from major histocompatibility complex (MHC) Class II molecules. There is competition amongst peptides for access to an MHC Class II groove, and it has been hypothesised that DM functions as a 'peptide editor' that catalyzes the replacement of one peptide by another within the groove. It is established that the DM catalyst interacts directly with the MHC Class II molecule, but the precise location of the interface is unknown. Here, we combine previously described mutational data with molecular docking and energy minimisation simulations to identify a putative interaction site of >4000 Å² which agrees with known point mutational data for both the DR and DM molecules. The docked structure is validated by comparison with experimental data and previously determined properties of protein-protein interfaces. A possible dissociation mechanism is suggested by the presence of an acidic cluster near the N terminus of the bound peptide.

Relevance: 10.00%

Abstract:

Transportation service operators are witnessing a growing demand for bi-directional movement of goods. Given this, the following thesis considers an extension to the vehicle routing problem (VRP) known as the delivery and pickup transportation problem (DPP), where delivery and pickup demands may occupy the same route. The problem is formulated here as the vehicle routing problem with simultaneous delivery and pickup (VRPSDP), which requires the concurrent service of the demands at the customer location. This formulation provides the greatest opportunity for cost savings for both the service provider and recipient. The aims of this research are to propose a new theoretical design to solve the multi-objective VRPSDP, provide software support for the suggested design and validate the method through a set of experiments. A new real-life based multi-objective VRPSDP is studied here, which requires the minimisation of the often conflicting objectives: operated vehicle fleet size, total routing distance and the maximum variation between route distances (workload variation). The former two objectives are commonly encountered in the domain and the latter is introduced here because it is essential for real-life routing problems. The VRPSDP is defined as a hard combinatorial optimisation problem, therefore an approximation method, Simultaneous Delivery and Pickup method (SDPmethod) is proposed to solve it. The SDPmethod consists of three phases. The first phase constructs a set of diverse partial solutions, where one is expected to form part of the near-optimal solution. The second phase determines assignment possibilities for each sub-problem. The third phase solves the sub-problems using a parallel genetic algorithm. The suggested genetic algorithm is improved by the introduction of a set of tools: genetic operator switching mechanism via diversity thresholds, accuracy analysis tool and a new fitness evaluation mechanism. 
This three-phase method is proposed to address a shortcoming that exists in the domain, where an initial solution is built only to be completely dismantled and redesigned in the optimisation phase. In addition, a new routing heuristic, RouteAlg, is proposed to solve the VRPSDP sub-problem, the travelling salesman problem with simultaneous delivery and pickup (TSPSDP). The experimental studies are conducted using the well-known Salhi and Nagy (1999) benchmark test problems, where the SDPmethod and RouteAlg solutions are compared with the prominent works in the VRPSDP domain. The SDPmethod is demonstrated to be an effective method for solving the multi-objective VRPSDP, and the RouteAlg for the TSPSDP.
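The three objectives and the simultaneous-service constraint described above can be made concrete with a small evaluation sketch. This is illustrative only, not the SDPmethod or its fitness mechanism; the names, Euclidean distances and single depot at the origin are assumptions. Leaving the depot, a vehicle carries all its deliveries; after each stop the load is the undelivered goods plus the pickups collected so far.

```python
import math

def evaluate(routes, coords, delivery, pickup, capacity, depot=(0.0, 0.0)):
    """Score a candidate VRPSDP solution against the three objectives:
    fleet size, total routing distance, and workload variation (spread
    between route distances). Returns None if capacity is violated."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route_lengths = []
    for route in routes:
        load = sum(delivery[c] for c in route)   # start full of deliveries
        if load > capacity:
            return None                          # infeasible from the depot
        length, pos = 0.0, depot
        for c in route:
            length += dist(pos, coords[c])
            pos = coords[c]
            load += pickup[c] - delivery[c]      # drop off, then collect
            if load > capacity:
                return None                      # capacity violated mid-route
        length += dist(pos, depot)               # return leg
        route_lengths.append(length)
    return (len(routes),                                  # fleet size
            sum(route_lengths),                           # total distance
            max(route_lengths) - min(route_lengths))      # workload variation
```

A genetic algorithm such as the one in the SDPmethod would compare candidate solutions on tuples like the one returned here, trading the three often conflicting objectives against each other.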