14 results for sequential speciation

in Aston University Research Archive


Relevance: 20.00%

Abstract:

This paper formulates several mathematical models for determining the optimal sequence of component placements and the assignment of component types to feeders simultaneously (the integrated scheduling problem) for a type of surface mount technology placement machine called the sequential pick-and-place (PAP) machine. A PAP machine has multiple stationary feeders storing components, a stationary working table holding a printed circuit board (PCB), and a movable placement head that picks up components from the feeders and places them on the board. The objective of the integrated problem is to minimize the total distance traveled by the placement head. Two integer nonlinear programming models are formulated first; each is then equivalently converted into an integer linear model. The models for the integrated problem are verified with two commercial packages. In addition, a hybrid genetic algorithm previously developed by the authors is adopted to solve the models. The algorithm not only generates optimal solutions quickly for small-sized problems, but also outperforms genetic algorithms developed by other researchers in terms of total traveling distance.
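The objective the models share, total head travel, is easy to state concretely. Below is a minimal sketch of how a candidate solution (a placement sequence plus a type-to-feeder assignment) would be scored, assuming a Euclidean distance metric and hypothetical feeder and board coordinates; the paper's models may use a different metric:

```python
import math

def euclid(a, b):
    # Straight-line distance between two (x, y) points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def total_travel(sequence, feeder_of, feeder_xy, place_xy, start=(0.0, 0.0)):
    """Total distance travelled by the placement head.

    sequence  -- ordered list of (component_type, placement_index) pairs
    feeder_of -- component type -> feeder index (the assignment decision)
    feeder_xy -- feeder index -> (x, y) pick-up position
    place_xy  -- placement index -> (x, y) position on the board

    For each placement the head moves from its current position to the
    feeder holding that component's type, then to the placement location.
    """
    pos, dist = start, 0.0
    for comp_type, loc in sequence:
        pick = feeder_xy[feeder_of[comp_type]]
        dist += euclid(pos, pick) + euclid(pick, place_xy[loc])
        pos = place_xy[loc]
    return dist
```

Any placement sequence and feeder assignment can be scored this way; the integer programming models and the hybrid genetic algorithm search over exactly these two coupled decisions.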

Relevance: 20.00%

Abstract:

The main aim of this work is to investigate sequential pyrolysis of willow SRC using two different heating rates (25 and 1500 °C/min) between 320 and 520 °C. Thermogravimetric analysis (TGA) and pyrolysis-gas chromatography-mass spectrometry (Py-GC-MS) were used for this analysis. In addition, laboratory-scale processing was undertaken to compare product distributions from fast and slow pyrolysis at 500 °C. Fast pyrolysis was carried out using a 1 kg/h continuous bubbling fluidized bed reactor, and slow pyrolysis using a 100 g batch reactor. Findings from this study show that heating rate and pyrolysis temperature have a significant influence on the chemical content of the decomposition products. From the analytical sequential pyrolysis, an inverse relationship was seen between the total yield of furfural (at high heating rates) and 2-furanmethanol (at low heating rates). The total yield of 1,2-dihydroxybenzene (catechol) was found to be significantly higher at low heating rates. The catechol intermediates 2-methoxy-4-(2-propenyl)phenol (eugenol), 2-methoxyphenol (guaiacol), 4-hydroxy-3,5-dimethoxybenzaldehyde (syringaldehyde) and 4-hydroxy-3-methoxybenzaldehyde (vanillin) were found to be highest at high heating rates. It was also found that laboratory-scale processing alters the chemical composition of the pyrolysis bio-oil and the proportions of pyrolysis product yields. GC-MS/FID analysis of the fast and slow pyrolysis bio-oils reveals significant differences. © 2011 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

A sudden change applied to a single component can cause its segregation from an ongoing complex tone as a pure-tone-like percept. Three experiments examined whether such pure-tone-like percepts are organized into streams by extending the research of Bregman and Rudnicky (1975). Those authors found that listeners struggled to identify the presentation order of 2 pure-tone targets of different frequency when they were flanked by 2 lower frequency “distractors.” Adding a series of matched-frequency “captor” tones, however, improved performance by pulling the distractors into a separate stream from the targets. In the current study, sequences of discrete pure tones were substituted by sequences of brief changes applied to an otherwise constant 1.2-s complex tone. Pure-tone-like percepts were evoked by applying 6-dB increments to individual components of a complex comprising harmonics 1–7 of 300 Hz (Experiment 1) or 0.5-ms changes in interaural time difference to individual components of a log-spaced complex (range 160–905 Hz; Experiment 2). Results were consistent with the earlier study, providing clear evidence that pure-tone-like percepts are organized into streams. Experiment 3 adapted Experiment 1 by presenting a global amplitude increment either synchronous with, or just after, the last captor prior to the 1st distractor. In the former case, for which there was no pure-tone-like percept corresponding to that captor, the captor sequence did not aid performance to the same extent as previously. It is concluded that this change to the captor-tone stream partially resets the stream-formation process, and so the distractors and targets became likely to integrate once more. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

Relevance: 20.00%

Abstract:

The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
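The sequential flavour of the estimation can be illustrated with a minimal sketch: hold the posterior mean and covariance of the field over a fixed grid and absorb one observation at a time via a rank-1 update. This deliberately omits the paper's key ingredient, the relative-entropy selection of a sparse set of basis vectors, and assumes a squared-exponential prior covariance purely for illustration:

```python
import math

def rbf_cov(xs, ell=1.0, var=1.0):
    # Prior covariance of the field at locations xs (squared-exponential kernel).
    return [[var * math.exp(-0.5 * ((a - b) / ell) ** 2) for b in xs] for a in xs]

def seq_update(m, C, i, y, noise):
    """One sequential Bayesian update: observe y = f(x_i) + Gaussian noise
    at grid index i; rank-1 update of posterior mean m and covariance C."""
    n = len(m)
    s = C[i][i] + noise                        # predictive variance of the observation
    g = [C[k][i] / s for k in range(n)]        # Kalman-style gain vector
    r = y - m[i]                               # innovation
    m2 = [m[k] + g[k] * r for k in range(n)]
    C2 = [[C[k][l] - g[k] * C[i][l] for l in range(n)] for k in range(n)]
    return m2, C2
```

Each such update costs O(n^2) in the grid size; the sparsity mechanism described above exists precisely to keep the retained basis small when the data set is very large.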

Relevance: 20.00%

Abstract:

Recently within the machine learning and spatial statistics communities many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation which is chosen such that there is minimal information loss. In this paper a sequential framework for inference in such projected processes is presented, where the observations are considered one at a time. We introduce a C++ library for carrying out such projected, sequential estimation which adds several novel features. In particular we have incorporated the ability to use a generic observation operator, or sensor model, to permit data fusion. We can also cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the variogram parameters is based on maximum likelihood estimation. We illustrate the projected sequential method in application to synthetic and real data sets. We discuss the software implementation and suggest possible future extensions.

Relevance: 20.00%

Abstract:

The aim of this work is the implementation of a low temperature reforming (LT reforming) unit downstream of the Haloclean pyrolyser in order to enhance the heating value of the pyrolysis gas. Attaining synthesis gas quality for further use was outside the focus of this work. Temperatures between 400 °C and 500 °C were applied. A commercial nickel-based pre-reforming catalyst from Südchemie was chosen for LT reforming. Wheat straw was used as the biogenic feedstock. Pyrolysis of wheat straw at 450 °C by means of Haloclean pyrolysis yields 28% char, 50% condensate and 22% gas. The condensate separates into a water phase and an organic phase. The organic phase is liquid but contains viscous compounds; these could undergo ageing and lead to solid tars that cause post-processing problems. The implementation of a catalytic reformer is therefore of interest not only from an energetic point of view, but more generally for tar conversion after pyrolysis. By using a fixed bed reforming unit at 450–490 °C and space velocities of about 3000 l/h, the pyrolysis gas volume flow could be increased by about 58%. This corresponds to a decrease in the condensate yield, by means of catalysis, of up to 17%; the char yield remains unchanged, since the pyrolysis conditions are the same. The heating value of the pyrolysis gas could be increased by a factor of 1.64, and hydrogen concentrations of up to 14% were realised.

Relevance: 20.00%

Abstract:

An essential stage in endocytic coated vesicle recycling is the dissociation of clathrin from the vesicle coat by the molecular chaperone, 70-kDa heat-shock cognate protein (Hsc70), and the J-domain-containing protein, auxilin, in an ATP-dependent process. We present a detailed mechanistic analysis of clathrin disassembly catalyzed by Hsc70 and auxilin, using loss of perpendicular light scattering to monitor the process. We report that a single auxilin per clathrin triskelion is required for maximal rate of disassembly, that ATP is hydrolyzed at the same rate that disassembly occurs, and that three ATP molecules are hydrolyzed per clathrin triskelion released. Stopped-flow measurements revealed a lag phase in which the scattering intensity increased owing to association of Hsc70 with clathrin cages followed by serial rounds of ATP hydrolysis prior to triskelion removal. Global fit of stopped-flow data to several physically plausible mechanisms showed the best fit to a model in which sequential hydrolysis of three separate ATP molecules is required for the eventual release of a triskelion from the clathrin-auxilin cage.
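A simple way to see why sequential hydrolysis of three ATP molecules produces a lag phase: if each hydrolysis is an exponential event with rate k (a hypothetical rate constant here, not a value fitted in the paper), the triskelion release time follows an Erlang-3 distribution, whose cumulative curve starts with zero slope:

```python
import math

def frac_released(t, k, n=3):
    """Fraction of triskelia released by time t if release requires n
    sequential exponential (rate k) ATP-hydrolysis steps: the Erlang CDF,
    P(T <= t) = 1 - exp(-k*t) * sum_{j<n} (k*t)^j / j!"""
    kt = k * t
    tail = sum(kt ** j / math.factorial(j) for j in range(n))
    return 1.0 - math.exp(-kt) * tail
```

For small t the released fraction grows as (kt)^3 / 6, i.e. with a lag, whereas a single-step model (n = 1) rises linearly from t = 0; this qualitative difference is one signature that distinguishes a sequential mechanism in stopped-flow data.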

Relevance: 20.00%

Abstract:

The objective of this study was to investigate the effects of circularity, comorbidity, prevalence and presentation variation on the accuracy of differential diagnoses made in optometric primary care using a modified form of naïve Bayesian sequential analysis. No such investigation has been reported before. Data were collected for 1422 cases seen over one year. Positive test outcomes were recorded for case history (ethnicity, age, symptoms, and ocular and medical history) and clinical signs in relation to each diagnosis. Accordingly, only positive likelihood ratios were used in this modified form of Bayesian analysis, which was carried out with Laplacian correction and chi-square filtration. Accuracy was expressed as the percentage of cases for which the diagnosis made by the clinician appeared at the top of a list generated by Bayesian analysis. Preliminary analyses were carried out on 10 diagnoses and 15 test outcomes. Accuracy of 100% was achieved in the absence of presentation variation but dropped by 6% when variation existed. Circularity artificially elevated accuracy by 0.5%. Surprisingly, removal of chi-square filtering increased accuracy by 0.4%. Decision tree analysis showed that accuracy was influenced primarily by prevalence, followed by presentation variation and comorbidity. Analysis of 35 diagnoses and 105 test outcomes followed. This explored the use of positive likelihood ratios, derived from the case history, to recommend signs to look for. Accuracy of 72% was achieved when all clinical signs were entered. The drop in accuracy compared to the preliminary analysis was attributed to the fact that some diagnoses lacked strong diagnostic signs; accuracy increased by 1% when only the recommended signs were entered. Chi-square filtering improved recommended test selection. Decision tree analysis showed that accuracy was again influenced primarily by prevalence, followed by comorbidity and presentation variation.
Future work will explore the use of likelihood ratios based on positive and negative test findings prior to considering naïve Bayesian analysis as a form of artificial intelligence in optometric practice.
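The core ranking step of such a naïve Bayesian analysis can be sketched as follows: each candidate diagnosis is scored as its prevalence multiplied by the positive likelihood ratios of the observed findings, and the clinician's diagnosis counts as accurate if it tops the list. The diagnoses, findings and likelihood ratios below are purely illustrative, not the study's data:

```python
def rank_diagnoses(prevalence, lr_pos, findings):
    """Score each diagnosis as prevalence x product of the positive
    likelihood ratios of the observed findings; a finding with no
    tabulated ratio for a diagnosis contributes a neutral factor of 1.
    Returns the diagnoses sorted best-first."""
    scores = {}
    for dx, prev in prevalence.items():
        score = prev
        for f in findings:
            score *= lr_pos.get(dx, {}).get(f, 1.0)
        scores[dx] = score
    return sorted(scores, key=scores.get, reverse=True)
```

In this multiplicative form the prevalence term dominates whenever the observed findings discriminate weakly, which is consistent with the decision tree result above that prevalence was the primary influence on accuracy.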

Relevance: 20.00%

Abstract:

When designing a practical swarm robotics system, self-organized task allocation is key to making best use of resources. Current research in this area focuses on task allocation which is either distributed (tasks must be performed at different locations) or sequential (tasks are complex and must be split into simpler sub-tasks and processed in order). In practice, however, swarms will need to deal with tasks which are both distributed and sequential. In this paper, a classic foraging problem is extended to incorporate both distributed and sequential tasks. The problem is analysed theoretically, absolute limits on performance are derived, and a set of conditions for a successful algorithm are established. It is shown empirically that an algorithm which meets these conditions can, by causing emergent cooperation between robots, achieve consistently high performance under a wide range of settings without the need for communication. © 2013 IEEE.

Relevance: 20.00%

Abstract:

Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of the error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of dataset, and require computationally expensive Monte Carlo based inference. Recently within the machine learning and spatial statistics communities many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation which is chosen such that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time which avoids the need for high dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced which implements projected, sequential estimation and adds several novel features. In particular the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed. © 2010 Elsevier Ltd.
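The "generic observation operator" idea can be sketched in a few lines: if a sensor reports a linear functional of the field plus noise, a single scalar update fuses it into the same posterior regardless of the sensor model. This is a minimal pure-Python illustration of the concept under a Gaussian-noise assumption, not the gptk API:

```python
def fuse(m, C, h, y, noise):
    """Sequential update with a generic linear observation operator h:
    the sensor reports y = sum_k h[k] * f(x_k) + Gaussian noise, so
    different sensor models (point samples, footprint averages) can all
    be fused into one posterior (mean m, covariance C) over one grid."""
    n = len(m)
    Ch = [sum(C[k][l] * h[l] for l in range(n)) for k in range(n)]  # C h
    s = sum(h[k] * Ch[k] for k in range(n)) + noise                 # h'Ch + noise
    r = y - sum(h[k] * m[k] for k in range(n))                      # innovation
    m2 = [m[k] + Ch[k] * r / s for k in range(n)]
    C2 = [[C[k][l] - Ch[k] * Ch[l] / s for l in range(n)] for k in range(n)]
    return m2, C2
```

A point sample is the special case where h is a unit vector; a footprint-averaging sensor might use h = [0.5, 0.5, 0]. Non-Gaussian observation errors, as handled by the library, would replace this conjugate scalar update with an approximate one.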