15 results for sequential reduction processes
in Aston University Research Archive
Abstract:
Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation, chosen such that there is minimal information loss. In this paper a sequential framework for inference in such projected processes is presented, where the observations are considered one at a time. We introduce a C++ library for carrying out such projected, sequential estimation, which adds several novel features. In particular we have incorporated the ability to use a generic observation operator, or sensor model, to permit data fusion. We can also cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the variogram parameters is based on maximum likelihood estimation. We illustrate the projected sequential method in application to synthetic and real data sets. We discuss the software implementation and suggest possible future extensions.
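To make the sequential idea concrete, here is a minimal sketch (in Python/NumPy, not the C++ library's API) of one-at-a-time inference in a projected process: the posterior is carried on a fixed set of inducing points, and each observation triggers a rank-1, Kalman-style update. The kernel, data and all names are illustrative assumptions.

```python
# Hedged sketch of sequential inference in a projected (reduced-rank) GP:
# posterior over the process at m inducing points, updated one datum at a time.
import numpy as np

def rbf(a, b, ell=1.0, sf2=1.0):
    """Squared-exponential covariance between point sets a (n,d) and b (m,d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))              # observation sites
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Xm = np.linspace(0, 10, 15)[:, None]               # m fixed inducing points

Kmm = rbf(Xm, Xm) + 1e-8 * np.eye(len(Xm))         # jittered prior covariance
mu = np.zeros(len(Xm))                             # posterior mean at Xm
S = Kmm.copy()                                     # posterior covariance at Xm
sigma2 = 0.1**2                                    # Gaussian noise variance

for xi, yi in zip(X, y):                           # one observation at a time
    phi = np.linalg.solve(Kmm, rbf(Xm, xi[None]))[:, 0]  # projected feature
    v = phi @ S @ phi + sigma2                     # predictive variance at xi
    g = S @ phi / v                                # Kalman gain
    mu = mu + g * (yi - phi @ mu)                  # rank-1 mean update
    S = S - np.outer(g, S @ phi)                   # rank-1 covariance update

Xs = np.linspace(0, 10, 5)[:, None]                # test sites
mu_star = rbf(Xs, Xm) @ np.linalg.solve(Kmm, mu)   # projected predictive mean
print(np.round(mu_star, 2))
```

Each update costs O(m^2) in the number of inducing points rather than O(n^3) in the number of observations, which is what makes the one-at-a-time scheme attractive for large data sets.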
Abstract:
Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of their error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset and require computationally expensive Monte Carlo based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation, chosen such that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time, which avoids the need for the high dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected, sequential estimation and adds several novel features. In particular the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed. © 2010 Elsevier Ltd.
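The generic observation operator mentioned above can be illustrated in the same sketch framework: a sensor that reports, say, a footprint average of the latent field simply contributes a different feature vector, leaving the rank-1 update untouched. This is a hedged illustration, not the gptk interface; the footprint, weights and values are invented.

```python
# Sketch: folding a linear observation operator (sensor model) into the
# sequential projected-GP update for data fusion.
import numpy as np

def rbf(a, b, ell=1.0, sf2=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell**2)

Xm = np.linspace(0, 10, 15)[:, None]               # inducing points
Kmm = rbf(Xm, Xm) + 1e-8 * np.eye(len(Xm))
mu, S, sigma2 = np.zeros(len(Xm)), Kmm.copy(), 0.05**2

# A satellite-style sensor reporting the mean of the field over 4 sites:
foot = np.array([[2.0], [2.5], [3.0], [3.5]])      # footprint locations
w = np.full(4, 0.25)                               # averaging weights (H row)
y_obs = 0.6                                        # footprint-level datum

Phi = np.linalg.solve(Kmm, rbf(Xm, foot))          # per-site projected features
phi = Phi @ w                                      # effective feature, H applied
v = phi @ S @ phi + sigma2
g = S @ phi / v
mu = mu + g * (y_obs - phi @ mu)                   # identical rank-1 update
S = S - np.outer(g, S @ phi)
print(np.round(mu[:5], 3))
```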
Inventory parameter management and focused continuous improvement for repetitive batch manufacturers
Abstract:
This thesis proposes a methodology to assist repetitive batch manufacturers in adopting certain aspects of Lean Production principles. The methodology concentrates on the reduction of inventory through the setting of appropriate batch sizes, taking account of the effect of sequence dependent set-ups and the identification and elimination of bottlenecks. It uses a simple Pareto and modified EBQ based analysis technique to allocate items to period order day classes based on a combination of each item's annual usage value and set-up cost. The period order day classes the items are allocated to are determined by the constraint limits in the three measured dimensions: capacity, administration and finance. The methodology overcomes the limitations associated with MRP in the area of sequence dependent set-ups, and provides a simple way of setting planning parameters that takes this effect into account: bottlenecks are systematically identified and eliminated through set-up reduction, allowing batch sizes, and hence inventory, to fall. It aims to help traditional repetitive batch manufacturers on a route to continuous improvement by: highlighting those areas where change would bring the greatest benefits; modelling the effect of proposed changes; quantifying the benefits that could be gained through implementing those changes; and simplifying the effort required to perform the modelling process. The overall aim is increased flexibility through managed inventory reduction, achieved by rationally decreasing batch sizes while taking account of sequence dependent set-ups and bottlenecks. The methodology was realised through the development of a software modelling tool and validated through a case study approach.
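As a rough illustration of the allocation logic described in this abstract, the following sketch ranks items by annual usage value (the Pareto step), computes a classic EBQ, and snaps the implied days of cover to a period-order-day class. The class ladder, cost figures and item names are hypothetical, and the thesis's specific EBQ modification is not reproduced here.

```python
# Hedged sketch: Pareto ranking + EBQ-based allocation to period-order-day
# classes, with invented data.
import math

items = [  # (name, annual_demand_units, unit_cost, setup_cost, holding_rate)
    ("A17", 48000, 2.10, 85.0, 0.25),
    ("B02",  9000, 0.55, 40.0, 0.25),
    ("C88",  1200, 7.80, 120.0, 0.25),
]
POD_CLASSES = [1, 2, 5, 10, 20, 40]        # period-order-day ladder (assumed)
WORKING_DAYS = 240

for name, D, cost, setup, rate in sorted(
        items, key=lambda it: it[1] * it[2], reverse=True):   # Pareto order
    ebq = math.sqrt(2 * D * setup / (rate * cost))  # economic batch quantity
    days_cover = ebq / (D / WORKING_DAYS)           # batch size in days of demand
    pod = min(POD_CLASSES, key=lambda c: abs(c - days_cover))
    print(f"{name}: usage value {D * cost:9.0f}, EBQ {ebq:7.0f} units, "
          f"~{days_cover:5.1f} days -> {pod}-day class")
```

In the thesis the chosen classes would then be checked against the capacity, administration and finance constraint limits before being fixed as planning parameters.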
Abstract:
Continental red bed sequences are host, on a worldwide scale, to a characteristic style of mineralisation which is dominated by copper, lead, zinc, uranium and vanadium. This study examines the features of sediment-hosted ore deposits in the Permo-Triassic basins of Western Europe, with particular reference to the Cu-Pb-Zn-Ba mineralisation in the Cheshire Basin, northwest England, the Pb-Ba-F deposits of the Inner Moray Firth Basin, northeast Scotland, and the Pb-rich deposits of the Eifel and Oberpfalz regions, West Germany. The deposits occur primarily but not exclusively in fluvial and aeolian sandstones on the margins of deep, avolcanic sedimentary basins containing red beds, evaporites and occasionally hydrocarbons. The host sediments range in age from Permian to Rhaetian and often contain (or can be inferred to have originally contained) organic matter. Textural studies have shown that early diagenetic quartz overgrowths precede the main episode of sulphide deposition. Fluid inclusion and sulphur isotope data have significantly constrained the genetic hypotheses for the mineralisation, and a model involving the expulsion of diagenetic fluids and basinal brines up the faulted margins of sedimentary basins is favoured. Consideration of the development of these sedimentary basins suggests that ore emplacement occurred during the tectonic stage of basin evolution or during basin inversion in the Tertiary. δ34S values for barite in the Cheshire Basin range from +13.8‰ to +19.3‰ and support the theory that the Upper Triassic evaporites were the principal sulphur source for the mineralisation and provided the means by which mineralising fluids became saline. In contrast, δ34S values for barite in the Inner Moray Firth Basin (mean δ34S = +29‰) are not consistent with simple derivation of sulphur from the evaporite horizons in the basin, and it is likely that sulphur-rich Jurassic shales supplied the sulphur for the mineralisation at Elgin. Possible sources of sulphur for the mineralisation in West Germany include hydrothermal vein sulphides in the underlying Devonian sediments and evaporites in the overlying Muschelkalk. Textural studies of the deeply buried sandstones in the Cheshire Basin reveal widespread dissolution and replacement of detrital phases and support the theory that red bed diagenetic processes are responsible for the release of metals into pore fluids. The ore solutions are envisaged as warm (60-150°C), saline (9-22 wt% equiv. NaCl) fluids in which metals were transported as chloride complexes. The distributions of δ34S values for sulphides in the Cheshire Basin (-1.8‰ to +16‰), the Moray Firth Basin (-4.8‰ to +27‰) and the German Permo-Triassic basins (-22.2‰ to -12.2‰) preclude a magmatic source for the sulphides and support the contention that sulphide precipitation resulted principally from sulphate reduction processes, although a decrease in the temperature of the ore fluid or reaction with carbonates may also have been important. Methane is invoked as the principal reducing agent in the Cheshire Basin, whilst terrestrial organic debris and bacterial reduction processes are thought to have played a major part in the genesis of the German ore deposits.
Abstract:
While conventional Data Envelopment Analysis (DEA) models set targets for each operational unit, this paper considers the problem of input/output reduction in a centralized decision making environment. The purpose of this paper is to develop an approach to the input/output reduction problem that typically occurs in organizations with a centralized decision-making environment. This paper shows that DEA can make an important contribution to this problem and discusses how a DEA-based model can be used to determine an optimal input/output reduction plan. An application in the banking sector with limitations on IT investment shows the usefulness of the proposed method.
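For readers unfamiliar with DEA, the sketch below solves the standard input-oriented CCR efficiency LP for each unit with scipy; the paper's centralized model builds on this envelopment form but optimizes one organization-wide plan rather than unit-by-unit targets. Data and DMU labels are invented for illustration.

```python
# Hedged sketch: conventional input-oriented CCR efficiency per DMU,
# min theta s.t. X @ lam <= theta * x_o, Y @ lam >= y_o, lam >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20., 30., 40., 20.],      # m x n inputs  (e.g. staff, IT spend)
              [5.,  8.,  8.,  4.]])
Y = np.array([[60., 70., 90., 50.]])     # s x n outputs (e.g. transactions)
m, n = X.shape
s = Y.shape[0]

for o in range(n):                       # score each DMU in turn
    c = np.zeros(1 + n); c[0] = 1.0      # variables [theta, lam_1..lam_n]
    A_ub, b_ub = [], []
    for i in range(m):                   # sum_j lam_j * x_ij <= theta * x_io
        A_ub.append(np.r_[-X[i, o], X[i, :]]); b_ub.append(0.0)
    for r in range(s):                   # sum_j lam_j * y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[r, :]]); b_ub.append(-Y[r, o])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n))
    print(f"DMU {o}: efficiency = {res.x[0]:.3f}")
```

A centralized variant would replace the per-DMU objective with a single objective over the total input consumption of all units, subject to the same envelopment constraints.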
Abstract:
We develop an approach for sparse representations of Gaussian Process (GP) models (which are Bayesian types of kernel machines) in order to overcome their limitations for large data sets. The method is based on a combination of a Bayesian online algorithm and a sequential construction of a relevant subsample of the data which fully specifies the prediction of the GP model. By using an appealing parametrisation and projection techniques based on the RKHS norm, recursions for the effective parameters and a sparse Gaussian approximation of the posterior process are obtained. This allows for the propagation both of predictions and of Bayesian error measures. The significance and robustness of our approach is demonstrated on a variety of experiments.
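The sequential subsample construction can be illustrated by its geometric core: for each arriving input, compute its novelty (the squared RKHS distance to the span of the current basis) and either admit it to the basis or represent it by projection. The sketch below shows only this selection step, omitting the likelihood-driven recursions for the posterior parameters; all names and tolerances are assumptions.

```python
# Hedged sketch: basis-vector selection for a sparse online GP.
import numpy as np

def rbf(a, b, ell=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(1)
stream = rng.uniform(0, 10, size=(100, 1))   # inputs arriving one at a time
BV, TOL = [stream[0]], 1e-2                  # basis vectors; sparsity tolerance

for x in stream[1:]:
    B = np.array(BV)
    Kb = rbf(B, B) + 1e-10 * np.eye(len(B))
    kx = rbf(B, x[None])[:, 0]
    e_hat = np.linalg.solve(Kb, kx)          # projection coefficients onto BV
    gamma = rbf(x[None], x[None])[0, 0] - kx @ e_hat   # novelty of x
    if gamma > TOL:
        BV.append(x)                         # expand the basis
    # else: x is represented by e_hat over BV (posterior recursions omitted)

print(f"{len(stream)} points summarized by {len(BV)} basis vectors")
```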
Abstract:
We develop an approach for sparse representations of Gaussian Process (GP) models (which are Bayesian types of kernel machines) in order to overcome their limitations for large data sets. The method is based on a combination of a Bayesian online algorithm and a sequential construction of a relevant subsample of the data which fully specifies the prediction of the GP model. By using an appealing parametrisation and projection techniques based on the RKHS norm, recursions for the effective parameters and a sparse Gaussian approximation of the posterior process are obtained. This allows for the propagation both of predictions and of Bayesian error measures. The significance and robustness of our approach is demonstrated on a variety of experiments.
Abstract:
A systematic survey of the possible methods of chemical extraction of iron by chloride formation has been presented and supported by a comparable study of feedstocks, products and markets. The generation and evaluation of alternative processes was carried out by the technique of morphological analysis, which was exploited by way of a computer program. The final choice was related to technical feasibility and economic viability, particularly capital cost requirements, and developments were made in an estimating procedure for hydrometallurgical processes which has general application. The systematic exploration included the compilation of relevant data, and this indicated a need to investigate precipitative hydrolysis of aqueous ferric chloride. Arising from this study, two novel hydrometallurgical processes for manufacturing iron powder are proposed, and experimental work was undertaken in the following areas to demonstrate feasibility and obtain basic data for design purposes: (1) precipitative hydrolysis of aqueous ferric chloride; (2) gaseous chloridation of metallic iron, and oxidation of the resultant ferrous chloride; (3) reduction of gaseous ferric chloride with hydrogen; (4) aqueous acid leaching of low grade iron ore; (5) aqueous acid leaching of metallic iron. The experimentation was supported by theoretical analyses dealing with: (1) thermodynamics of hydrolysis; (2) kinetics of ore leaching; (3) kinetics of metallic iron leaching; (4) crystallisation of ferrous chloride; (5) oxidation of anhydrous ferrous chloride; (6) reduction of ferric chloride. Conceptual designs are suggested for both of the processes mentioned. These draw attention to areas where further work is necessary, which are listed. Economic analyses have been performed which isolate significant cost areas and indicate total production costs. Comparisons are made with previous and analogous proposals for the production of iron powder.
Abstract:
A wide range of molecules acting as apoptotic cell-associated ligands, phagocyte-associated receptors or soluble bridging molecules have been implicated within the complex sequential processes that result in phagocytosis and degradation of apoptotic cells. Intercellular adhesion molecule 3 (ICAM-3, also known as CD50), a human leukocyte-restricted immunoglobulin super-family (IgSF) member, has previously been implicated in apoptotic cell clearance, although its precise role in the clearance process is ill defined. The main objective of this work is to further characterise the function of ICAM-3 in the removal of apoptotic cells. Using a range of novel anti-ICAM-3 monoclonal antibodies (mAbs), including one (MA4) that blocks apoptotic cell clearance by macrophages, alongside apoptotic human leukocytes that are normal or deficient for ICAM-3, we demonstrate that ICAM-3 promotes a domain 1-2-dependent tethering interaction with phagocytes. Furthermore, we demonstrate an apoptosis-associated reduction in ICAM-3 that results from release of ICAM-3 within microparticles that potently attract macrophages to apoptotic cells. Taken together, these data suggest that apoptotic cell-derived microparticles bearing ICAM-3 promote macrophage chemoattraction to sites of leukocyte cell death and that ICAM-3 mediates subsequent cell corpse tethering to macrophages. The defined function of ICAM-3 in these processes, and the profound defect in chemotaxis noted with ICAM-3-deficient microparticles, suggest that ICAM-3 may be an important adhesion molecule involved in chemotaxis to apoptotic human leukocytes. © 2012 Macmillan Publishers Limited. All rights reserved.
Abstract:
The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. Over recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly-decreasing behaviour of the curves and we investigate the asymptotic behaviour of the upper bounds. The effects of the noise and the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. In the thesis we also present an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov Chain Monte Carlo method and the evidence framework; the neural networks have been trained on the task of labelling segmented outdoor images.
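The general distance matrix discussed above enters the covariance as k(x, x') = exp(-0.5 (x - x')^T M (x - x')) with M positive semi-definite. A minimal sketch follows in which M is deliberately built with rank 1, so that its eigen-spectrum exposes an effective dimensionality of one; the numbers are illustrative, not from the thesis.

```python
# Hedged sketch: squared-exponential covariance with a general (non-diagonal)
# distance matrix M = L^T L, whose eigenvalues reveal effective dimensionality.
import numpy as np

L = np.array([[2.0, -1.0, 0.5]])            # 1 x 3: one hidden feature (assumed)
M = L.T @ L                                 # general distance matrix, rank 1

def k_general(a, b, M):
    d = a - b
    return np.exp(-0.5 * d @ M @ d)

x, xp = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 2.0])
print("k(x, x') =", k_general(x, xp, M))

evals = np.linalg.eigvalsh(M)               # number of large eigenvalues
print("eigenvalues of M:", np.round(evals, 3))
# A diagonal M must assign a lengthscale to each of the three manifest inputs;
# it cannot discover that only the single direction L is relevant.
```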
Abstract:
The nature and kinetics of electrode reactions and processes occurring for four lightweight anode systems which have been utilised in reinforced concrete cathodic protection systems have been studied. The anodes investigated were flame sprayed zinc, conductive paint and two activated titanium meshes. The electrochemical properties of each material were investigated in rapidly stirred de-oxygenated electrolytes using anodic potentiodynamic polarisation. Conductive coating electrodes were formed on glass microscope slides, whilst mesh strands were immersed directly. Oxygen evolution occurred preferentially for both mesh anodes in saturated Ca(OH)2/CaCl2 solutions but was severely inhibited in less alkaline solutions, and significant current only passed in chloride solutions. The main reaction for the conductive paint was oxygen evolution in all electrolytes, although chlorides increased the electrical activity. Self-corrosion of zinc was controlled by electrolyte composition and the experimental set-up, chlorides again increasing the electrical activity. Impressed current cathodic protection was applied to 25 externally exposed concrete slabs over a period of 18 months to investigate anode degradation mechanisms at normal and high current densities. Specimen chloride content, curing and reinforcement depth were also variables. Several destructive and non-destructive methods for assessing the performance of anodes were evaluated, including a site instrument for quantitative "instant-off" potential measurements. The impact of cathodic protection on the concrete substrate was determined for a number of specimens using appropriate methods. Anodic degradation rates were primarily influenced by current density, followed by cementitious alkalinity, chloride levels and current distribution. Degradation of cementitious overlays and conductive paint substrates proceeded by sequential neutralisation of cement phases, with some evidence of paint binder oxidation. Sprayed zinc progressively formed an insulating layer of hydroxide complexes, which underwent pitting attack in the presence of sufficient chlorides, whilst substrate degradation was minimal. Adhesion of all anode systems decreased with increasing current density. The influence of anode material on the ionic gradients which can develop during cathodic protection was investigated. A constant current was passed through saturated cement paste prisms containing calcium chloride to central cathodes via anodes applied or embedded at each end. Pore solution was obtained from successive cut paste slices for anion and cation analyses. Various experimental errors reduced the value of the results. Characteristic S-shaped profiles were not observed and chloride ion profiles were ambiguous. Mesh anode specimens were significantly more durable than the conductive coatings in the high humidity environment. Limited results suggested zinc ion migration to the cathode region. Electrical data from each investigation clearly indicated a decreasing order of anode efficiency by specific anode material.
Abstract:
This thesis is concerned with the role of diagenesis in forming ore deposits. Two sedimentary 'ore-types' have been examined; the Proterozoic copper-cobalt orebodies of the Konkola Basin on the Zambian Copperbelt, and the Permian Marl Slate of North East England. Facies analysis of the Konkola Basin shows the Ore-Shale to have formed in a subtidal to intertidal environment. A sequence of diagenetic events is outlined from which it is concluded that the sulphide ores are an integral part of the diagenetic process. Sulphur isotope data establish that the sulphides formed as a consequence of the bacterial reduction of sulphate, while the isotopic and geochemical composition of carbonates is shown to reflect changes in the compositions of diagenetic pore fluids. Geochemical studies indicate that the copper and cobalt bearing mineralising fluids probably had different sources. Veins which crosscut the orebodies contain hydrocarbon inclusions, and are shown to be of late diagenetic lateral secretion origin. Rb-Sr dating indicates that the Ore-Shale was subject to metamorphism at 529 ± 20 Myr. The sedimentology and petrology of the Marl Slate are described. Textural and geochemical studies suggest that much of the pyrite (framboidal) in the Marl Slate formed in an anoxic water column, while euhedral pyrite and base metal sulphides formed within the sediment during early diagenesis. Sulphur isotope data confirm that conditions were almost "ideal" for sulphide formation during Marl Slate deposition, the limiting factor in ore formation being the restricted supply of chalcophile elements. Carbon and oxygen isotope data, along with petrographic observations, indicate that much of the calcite and dolomite occurring in the Marl Slate is primary, and probably formed in isotopic equilibrium. A depositional model is proposed which explains all of the data presented and links the lithological variations with fluctuations in the anoxic-oxic boundary layer of the water column.
Abstract:
The work described in this thesis is directed towards the reduction of noise levels in the Hoover Turbopower upright vacuum cleaner. The experimental work embodies a study of such factors as the application of noise source identification techniques, investigation of the noise generating principles for each major source and evaluation of noise reducing treatments. It was found that the design of the vacuum cleaner had not been optimised from the standpoint of noise emission. Important factors such as noise 'windows', isolation of vibration at the source, panel rattle, resonances and critical speeds had not been considered. Therefore, a number of experimentally validated treatments are proposed. Their noise reduction benefit together with material and tooling costs are presented. The solutions to the noise problems were evaluated on a standard Turbopower, and the sound power level of the cleaner was reduced from 87.5 dB(A) to 80.4 dB(A) at a cost of 93.6 pence per cleaner. The designers' lack of experience in noise reduction was identified as one of the factors behind the low priority given to noise during design of the cleaner. Consequently, the fundamentals of acoustics, principles of noise prediction and absorption, and guidelines for good acoustical design were collated into a Handbook and circulated at Hoover plc. Mechanical variations during production of the motor and the cleaner were found to be important. These caused a wide spread in the noise levels of the cleaners. Subsequently, the manufacturing processes were briefly studied to identify their source, and recommendations for improvement are made. Noise of a product is quality related, and a high level of noise is considered to be a bad feature. This project suggested that the noise level be used constructively, both as a test on the production line to identify cleaners above a certain noise level and as a way to promote the product by 'designing' the characteristics of the sound so that the appliance is pleasant to the user. This project showed that good noise control principles should be implemented early in the design stage. As yet there are no mandatory noise limits or noise-labelling requirements for household appliances. However, the literature suggests that noise-labelling is likely in the near future and that the requirement will be to display the A-weighted sound power level. The 'noys' scale of perceived noisiness was nevertheless found more appropriate for rating appliance noise, both because it is linear (a sound that seems twice as loud is twice the value in noys) and because it takes into account the presence of pure tones, which can lead to annoyance even in the absence of a high noise level.
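As a quick sanity check on the headline figure, the snippet below converts the reported 7.1 dB(A) reduction into a sound-power ratio (a decibel level is 10*log10 of a power ratio).

```python
# Worked check: a drop from 87.5 to 80.4 dB(A) means the treated cleaner
# radiates roughly a fifth of the original sound power.
drop_db = 87.5 - 80.4
ratio = 10 ** (drop_db / 10)
print(f"{drop_db:.1f} dB(A) drop -> power ratio {ratio:.1f}x "
      f"({100 / ratio:.0f}% of the original sound power)")
```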
Abstract:
In many real applications of Data Envelopment Analysis (DEA), the decision makers are forced to reduce some inputs and, with them, some outputs; this could be because of limits on the funds available. This paper proposes a new DEA-based approach to determine the highest possible reduction in the input variables of concern and the lowest possible deterioration in the output variables of concern without reducing the efficiency of any DMU. A numerical example is used to illustrate the problem. An application in the banking sector with limitations on IT investment shows the usefulness of the proposed method. © 2010 Elsevier Ltd. All rights reserved.
Abstract:
Reliability modelling and verification is indispensable in modern manufacturing, especially for product development risk reduction. Based on a discussion of the deficiencies of traditional reliability modelling methods for process reliability, a novel modelling method is presented herein that draws upon a knowledge network of process scenarios based on the analytic network process (ANP). An integration framework for manufacturing process reliability and product quality is presented, together with a product development and reliability verification process. According to their roles in manufacturing processes, key characteristics (KCs) are organised into four clusters, namely product KCs, material KCs, operation KCs and equipment KCs, which together represent the process knowledge network of manufacturing processes. A mathematical model and algorithm are developed for calculating the reliability requirements of KCs with respect to different manufacturing process scenarios. A case study on valve-sleeve component manufacturing is provided as an application example of the new reliability modelling and verification procedure; the methodology is applied to the valve-sleeve manufacturing processes to manage and deploy production resources.
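The ANP machinery the method draws on can be sketched briefly: pairwise influence weights among the four KC clusters form a column-stochastic supermatrix, which is raised to powers until it converges to the limit matrix whose columns give the cluster priorities. The matrix entries below are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: ANP limit priorities for the four KC clusters via power
# iteration on an assumed column-stochastic supermatrix.
import numpy as np

clusters = ["product KC", "material KC", "operation KC", "equipment KC"]
W = np.array([[0.10, 0.40, 0.35, 0.30],    # each column sums to 1
              [0.30, 0.10, 0.25, 0.20],
              [0.35, 0.30, 0.15, 0.30],
              [0.25, 0.20, 0.25, 0.20]])
assert np.allclose(W.sum(axis=0), 1.0)

P = W.copy()
for _ in range(200):                       # iterate W^k until convergence
    P_next = P @ W
    if np.allclose(P_next, P, atol=1e-12):
        break
    P = P_next

for name, w in zip(clusters, P[:, 0]):     # limit priorities (any column)
    print(f"{name}: priority {w:.3f}")
```

In the paper's setting these priorities would weight the reliability requirements allocated to each KC cluster for a given process scenario.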