918 results for Sequential auctions


Relevance: 20.00%

Publisher:

Abstract:

A sudden change applied to a single component can cause its segregation from an ongoing complex tone as a pure-tone-like percept. Three experiments examined whether such pure-tone-like percepts are organized into streams by extending the research of Bregman and Rudnicky (1975). Those authors found that listeners struggled to identify the presentation order of 2 pure-tone targets of different frequency when they were flanked by 2 lower-frequency "distractors." Adding a series of matched-frequency "captor" tones, however, improved performance by pulling the distractors into a separate stream from the targets. In the current study, sequences of discrete pure tones were replaced by sequences of brief changes applied to an otherwise constant 1.2-s complex tone. Pure-tone-like percepts were evoked by applying 6-dB increments to individual components of a complex comprising harmonics 1–7 of 300 Hz (Experiment 1) or 0.5-ms changes in interaural time difference to individual components of a log-spaced complex (range 160–905 Hz; Experiment 2). Results were consistent with the earlier study, providing clear evidence that pure-tone-like percepts are organized into streams. Experiment 3 adapted Experiment 1 by presenting a global amplitude increment either synchronous with, or just after, the last captor prior to the 1st distractor. In the former case, for which there was no pure-tone-like percept corresponding to that captor, the captor sequence did not aid performance to the same extent as previously. It is concluded that this change to the captor-tone stream partially resets the stream-formation process, so that the distractors and targets become likely to integrate once more.

Relevance: 20.00%

Publisher:

Abstract:

The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
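The sequential, sparse scheme described above can be sketched in a few lines. The following is a minimal NumPy illustration of the general idea, not the authors' algorithm: a small retained set of "basis vectors" represents the field, each observation at an arbitrary location is projected onto the basis through the kernel, and the posterior is updated one observation at a time by Gaussian conditioning. The kernel, locations and data are invented for illustration.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential covariance between two 1-D location arrays.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# A small retained set of "basis vectors" stands in for the full field.
Xb = np.linspace(0.0, 4.0, 5)
Kbb = rbf(Xb, Xb)
m = np.zeros(len(Xb))   # posterior mean over basis-point values
C = Kbb.copy()          # posterior covariance over basis-point values

def project(x):
    # Kernel projection of an arbitrary location onto the basis set.
    return np.linalg.solve(Kbb, rbf(Xb, np.array([x]))).ravel()

def update(m, C, x, y, noise_var=0.05):
    # One-at-a-time Gaussian conditioning on y = h·f + noise.
    h = project(x)
    s = h @ C @ h + noise_var   # predictive variance of y
    k = C @ h / s               # gain vector
    return m + k * (y - h @ m), C - np.outer(k, h @ C)

# Assimilate three noisy observations, none at a basis location.
for x, y in [(0.7, 1.0), (1.3, 1.2), (3.1, -0.5)]:
    m, C = update(m, C, x, y)
```

Because each update touches only the small basis set, the cost per observation is independent of the total number of samples, which is what makes this kind of approach viable for very large data sets.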

Relevance: 20.00%

Publisher:

Abstract:

Recently within the machine learning and spatial statistics communities many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation which is chosen such that there is minimal information loss. In this paper a sequential framework for inference in such projected processes is presented, where the observations are considered one at a time. We introduce a C++ library for carrying out such projected, sequential estimation which adds several novel features. In particular we have incorporated the ability to use a generic observation operator, or sensor model, to permit data fusion. We can also cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the variogram parameters is based on maximum likelihood estimation. We illustrate the projected sequential method in application to synthetic and real data sets. We discuss the software implementation and suggest possible future extensions.

Relevance: 20.00%

Publisher:

Abstract:

The aim of this work is the implementation of a low-temperature reforming (LT reforming) unit downstream of the Haloclean pyrolyser in order to enhance the heating value of the pyrolysis gas. Achieving synthesis-gas quality for further use was outside the focus of this work. Temperatures between 400 °C and 500 °C were applied. A commercial nickel-based pre-reforming catalyst from Südchemie was chosen for LT reforming. Wheat straw was used as the biogenic feedstock. Pyrolysis of wheat straw at 450 °C by means of Haloclean pyrolysis leads to 28% char, 50% condensate and 22% gas. The condensate separates into a water phase and an organic phase. The organic phase is liquid but contains viscous compounds. These compounds may undergo aging and lead to solid tars, which can cause post-processing problems. The implementation of a catalytic reformer is therefore not only of interest from an energetic point of view; it is generally interesting for tar-conversion purposes after pyrolysis applications. By using a fixed-bed reforming unit at 450–490 °C and space velocities of about 3000 l/h, the pyrolysis gas volume flow could be increased by about 58%. This corresponds to a decrease of the condensate yield, by means of catalysis, of up to 17%; the char yield remains unchanged, since the pyrolysis conditions are the same. The heating value of the pyrolysis gas could be increased by a factor of 1.64. Hydrogen concentrations of up to 14% could be realised.

Relevance: 20.00%

Publisher:

Abstract:

An essential stage in endocytic coated vesicle recycling is the dissociation of clathrin from the vesicle coat by the molecular chaperone, 70-kDa heat-shock cognate protein (Hsc70), and the J-domain-containing protein, auxilin, in an ATP-dependent process. We present a detailed mechanistic analysis of clathrin disassembly catalyzed by Hsc70 and auxilin, using loss of perpendicular light scattering to monitor the process. We report that a single auxilin per clathrin triskelion is required for maximal rate of disassembly, that ATP is hydrolyzed at the same rate that disassembly occurs, and that three ATP molecules are hydrolyzed per clathrin triskelion released. Stopped-flow measurements revealed a lag phase in which the scattering intensity increased owing to association of Hsc70 with clathrin cages followed by serial rounds of ATP hydrolysis prior to triskelion removal. Global fit of stopped-flow data to several physically plausible mechanisms showed the best fit to a model in which sequential hydrolysis of three separate ATP molecules is required for the eventual release of a triskelion from the clathrin-auxilin cage.
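The inferred mechanism has a simple kinetic signature: if triskelion release requires three sequential, roughly equal-rate hydrolysis events, the bound fraction follows an Erlang survival curve with an initial lag, whereas a single-step mechanism decays exponentially from time zero. A toy sketch of this distinction (not the authors' stopped-flow fitting code; the rate constant is arbitrary):

```python
import math
import numpy as np

def bound_fraction(t, k, n_steps):
    # Erlang survival: probability that fewer than n_steps equal-rate,
    # irreversible events have occurred by time t.
    kt = k * t
    return sum(np.exp(-kt) * kt**i / math.factorial(i) for i in range(n_steps))

t = np.linspace(0.0, 10.0, 201)
seq3 = bound_fraction(t, k=1.0, n_steps=3)  # sequential 3-ATP model: lag phase
one = bound_fraction(t, k=1.0, n_steps=1)   # single-step model: no lag
```

The sequential curve starts with zero initial slope, reproducing the lag phase seen in the stopped-flow traces before triskelion removal begins.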

Relevance: 20.00%

Publisher:

Abstract:

Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.

Relevance: 20.00%

Publisher:

Abstract:

The objective of this study was to investigate the effects of circularity, comorbidity, prevalence and presentation variation on the accuracy of differential diagnoses made in optometric primary care using a modified form of naïve Bayesian sequential analysis. No such investigation has been reported before. Data were collected for 1422 cases seen over one year. Positive test outcomes were recorded for case history (ethnicity, age, symptoms and ocular and medical history) and clinical signs in relation to each diagnosis. For this reason, only positive likelihood ratios were used for this modified form of Bayesian analysis, which was carried out with Laplacian correction and chi-square filtration. Accuracy was expressed as the percentage of cases for which the diagnoses made by the clinician appeared at the top of a list generated by Bayesian analysis. Preliminary analyses were carried out on 10 diagnoses and 15 test outcomes. Accuracy of 100% was achieved in the absence of presentation variation but dropped by 6% when variation existed. Circularity artificially elevated accuracy by 0.5%. Surprisingly, removal of chi-square filtering increased accuracy by 0.4%. Decision tree analysis showed that accuracy was influenced primarily by prevalence, followed by presentation variation and comorbidity. Analysis of 35 diagnoses and 105 test outcomes followed. This explored the use of positive likelihood ratios, derived from the case history, to recommend signs to look for. Accuracy of 72% was achieved when all clinical signs were entered. The drop in accuracy, compared to the preliminary analysis, was attributed to the fact that some diagnoses lacked strong diagnostic signs; the accuracy increased by 1% when only recommended signs were entered. Chi-square filtering improved recommended test selection. Decision tree analysis showed that accuracy was again influenced primarily by prevalence, followed by comorbidity and presentation variation.
Future work will explore the use of likelihood ratios based on positive and negative test findings prior to considering naïve Bayesian analysis as a form of artificial intelligence in optometric practice.
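The ranking scheme described above can be sketched as follows. The diagnoses, test names and counts below are entirely hypothetical, and pooling across diagnoses to estimate the denominator of each positive likelihood ratio is an assumption; only the overall shape (log prevalence plus summed log positive likelihood ratios, with Laplace correction) follows the study's description.

```python
import math

def laplace_rate(positives, total, k=1):
    # Laplace-corrected estimate of a positive-outcome rate.
    return (positives + k) / (total + 2 * k)

def rank_diagnoses(prevalence, pos_counts, totals, observed_positive):
    # Score = log prevalence + sum of log positive likelihood ratios
    # over the positive findings entered for this case.
    scores = {}
    for d in prevalence:
        score = math.log(prevalence[d])
        for test in observed_positive:
            p_pos_given_d = laplace_rate(pos_counts[d][test], totals[d])
            p_pos = laplace_rate(  # pooled positive rate across all diagnoses
                sum(pos_counts[x][test] for x in prevalence),
                sum(totals.values()),
            )
            score += math.log(p_pos_given_d / p_pos)
        scores[d] = score
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical toy data: two diagnoses, two findings.
prevalence = {"dry eye": 0.30, "glaucoma suspect": 0.05}
pos_counts = {"dry eye": {"gritty": 80, "high IOP": 2},
              "glaucoma suspect": {"gritty": 5, "high IOP": 40}}
totals = {"dry eye": 100, "glaucoma suspect": 50}

ranking = rank_diagnoses(prevalence, pos_counts, totals, ["high IOP"])
```

A strongly diagnostic sign can outweigh a large prevalence gap, which is consistent with the study's finding that accuracy suffers for diagnoses lacking strong signs.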

Relevance: 20.00%

Publisher:

Abstract:

When designing a practical swarm robotics system, self-organized task allocation is key to making the best use of resources. Current research in this area focuses on task allocation which is either distributed (tasks must be performed at different locations) or sequential (tasks are complex and must be split into simpler sub-tasks and processed in order). In practice, however, swarms will need to deal with tasks which are both distributed and sequential. In this paper, a classic foraging problem is extended to incorporate both distributed and sequential tasks. The problem is analysed theoretically, absolute limits on performance are derived, and a set of conditions for a successful algorithm are established. It is shown empirically that an algorithm meeting these conditions, by causing emergent cooperation between robots, can achieve consistently high performance under a wide range of settings without the need for communication.

Relevance: 20.00%

Publisher:

Abstract:

The paper analyzes auctions which are not completely enforceable. In such auctions, economic agents may fail to carry out their obligations, and parties involved cannot rely on external enforcement or control mechanisms for backing up a transaction. We propose two mechanisms that make bidders directly or indirectly reveal their trustworthiness. The first mechanism is based on discriminating bidding schedules that separate trustworthy from untrustworthy bidders. The second mechanism is a generalization of the Vickrey auction to the case of untrustworthy bidders. We prove that, if the winner is considered to have the trustworthiness of the second-highest bidder, truthfully declaring one's trustworthiness becomes a dominant strategy. We expect the proposed mechanisms to reduce the cost of trust management and to help agent designers avoid many market failures caused by lack of trust.
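A rough sketch of the second mechanism's key rule, under assumed details: a standard second-price (Vickrey) auction in which the winner is imputed the second-highest declared trustworthiness rather than its own. The paper's actual mechanism is richer; this only illustrates the "second-highest" analogy that makes truthful declaration a dominant strategy.

```python
def generalized_vickrey(bids, trust):
    # Winner pays the second-highest bid (classic Vickrey) and, following
    # the idea sketched above, is treated as having the second-highest
    # declared trustworthiness rather than its own.
    order = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = order[0], order[1]
    price = bids[runner_up]
    imputed_trust = sorted(trust.values(), reverse=True)[1]
    return winner, price, imputed_trust

bids = {"a": 10.0, "b": 8.0, "c": 5.0}
trust = {"a": 0.9, "b": 0.7, "c": 0.95}
winner, price, imputed = generalized_vickrey(bids, trust)
```

As in the classic Vickrey auction, decoupling what the winner declares from what it receives removes the incentive to misreport.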

Relevance: 20.00%

Publisher:

Abstract:

∗ Supported by the Serbian Scientific Foundation, grant No 04M01

Relevance: 20.00%

Publisher:

Abstract:

Sequential pattern mining is an important subject in data mining with broad applications in many different areas. However, previous sequential mining algorithms mostly aimed to count occurrences (the support) without regard to the degree of importance of different data items. In this paper, we propose to explore the search space of subsequences with normalized weights. We are interested not only in the number of occurrences of the sequences (supports of sequences) but also in the importance of sequences (weights). When generating subsequence candidates we use both the support and the weight of the candidates while maintaining the downward closure property of these patterns, which allows us to accelerate the process of candidate generation.
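The pruning idea can be sketched as follows. A common way to keep the downward closure property under weights, used here purely as an illustration and not necessarily the paper's exact scheme, is to prune candidates using support times the maximum item weight, which is an anti-monotone upper bound on the weighted support.

```python
def contains(seq, pat):
    # True if pat occurs in seq as a (not necessarily contiguous) subsequence.
    it = iter(seq)
    return all(item in it for item in pat)

def support(seq_db, pattern):
    # Number of database sequences containing the pattern.
    return sum(contains(s, pattern) for s in seq_db)

def weighted_support(seq_db, pattern, weights):
    # Support scaled by the pattern's average normalized item weight.
    avg_w = sum(weights[i] for i in pattern) / len(pattern)
    return support(seq_db, pattern) * avg_w

def survives_pruning(seq_db, pattern, weights, min_wsup):
    # Prune with support times the MAXIMUM weight: this bound is
    # anti-monotone, so it preserves downward closure even though
    # weighted support itself is not anti-monotone.
    return support(seq_db, pattern) * max(weights.values()) >= min_wsup

db = [list("abcd"), list("abd"), list("acd"), list("bcd")]
weights = {"a": 0.9, "b": 0.4, "c": 0.6, "d": 1.0}
```

Any superpattern of a pruned candidate has no greater support, so its upper bound is also below threshold and the candidate space can be cut safely.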

Relevance: 20.00%

Publisher:

Abstract:

Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of their error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset and require computationally expensive Monte Carlo based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation chosen such that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time, which avoids the need for the high dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected, sequential estimation and adds several novel features. In particular, the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed.
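The role of a generic observation operator in data fusion can be illustrated with a minimal linear-Gaussian sketch (Gaussian errors only; the abstract also covers non-Gaussian cases). The prior, sensor models and data below are invented: one precise point gauge and one noisy sensor observing an average of two locations are fused into the same posterior, one observation at a time.

```python
import numpy as np

def assimilate(m, C, H, y, r):
    # Sequential linear-Gaussian update for one scalar reading y = H·f + e,
    # e ~ N(0, r); each sensor supplies its own operator H and noise level r.
    s = H @ C @ H + r
    k = C @ H / s
    return m + k * (y - H @ m), C - np.outer(k, H @ C)

n = 5
idx = np.arange(n)
C = 0.8 ** np.abs(idx[:, None] - idx[None, :])  # simple spatial prior
m = np.zeros(n)

# Sensor 1: precise point gauge at location 1.
H1 = np.zeros(n); H1[1] = 1.0
m, C = assimilate(m, C, H1, y=2.0, r=0.01)

# Sensor 2: noisy remote sensor observing the mean of locations 2 and 3.
H2 = np.zeros(n); H2[2] = 0.5; H2[3] = 0.5
m, C = assimilate(m, C, H2, y=1.0, r=0.25)
```

Because each sensor contributes only through its own operator and noise level, heterogeneous instruments update the same posterior consistently, with no high-dimensional integrals required.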

Relevance: 20.00%

Publisher:

Abstract:

This paper addresses a problem with an argument in Kranich, Perea, and Peters (2005) supporting their definition of the Weak Sequential Core and their characterization result. We also provide the remedy, a modification of the definition, to rescue the characterization.