27 results for Ranking and Selection

in CentAUR: Central Archive University of Reading - UK


Relevance: 100.00%

Abstract:

The steadily accumulating literature on technical efficiency in fisheries attests to the importance of efficiency as an indicator of fleet condition and as an object of management concern. In this paper, we extend previous work by presenting a Bayesian hierarchical approach that yields both efficiency estimates and, as a byproduct of the estimation algorithm, probabilistic rankings of the relative technical efficiencies of fishing boats. The estimation algorithm is based on recent advances in Markov Chain Monte Carlo (MCMC) methods—Gibbs sampling, in particular—which have not been widely used in fisheries economics. We apply the method to a sample of 10,865 boat trips in the US Pacific hake (or whiting) fishery during 1987–2003. We uncover systematic differences between efficiency rankings based on sample mean efficiency estimates and those that exploit the full posterior distributions of boat efficiencies to estimate the probability that a given boat has the highest true mean efficiency.
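
The probabilistic ranking step can be illustrated with a short sketch, assuming the Gibbs sampler returns a matrix of posterior draws of boat-level mean efficiencies (the draws array and all numbers below are hypothetical, not the authors' code or data): the probability that a given boat has the highest true mean efficiency is estimated by the share of posterior draws in which that boat ranks first.

    import numpy as np

    def ranking_probabilities(draws):
        """draws: (n_samples, n_boats) posterior draws of boat mean efficiencies.
        Returns, for each boat, the estimated probability that it has the
        highest true mean efficiency (share of draws in which it ranks first)."""
        winners = np.argmax(draws, axis=1)              # index of the top boat in each draw
        counts = np.bincount(winners, minlength=draws.shape[1])
        return counts / draws.shape[0]

    # Toy example: simulated "posterior draws" for three boats
    rng = np.random.default_rng(0)
    draws = rng.beta(a=[8.0, 9.0, 7.0], b=[3.0, 3.0, 3.0], size=(5000, 3))
    print(ranking_probabilities(draws))                 # boat with index 1 is most likely to be best

A ranking built from these probabilities can differ from one built from the boats' posterior mean efficiencies, which is the contrast the paper examines.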

Relevance: 100.00%

Abstract:

Web services are one of the most fundamental technologies for implementing service-oriented architecture (SOA) based applications. An essential challenge is to find suitable candidates for a consumer's request, a task normally called web service discovery. During discovery, the consumer often finds it hard to distinguish which services in the retrieval set are more suitable, making the selection of web services a critical task. In this paper, inspired by the idea that the service composition pattern is a significant hint for service selection, a personal profiling mechanism is proposed to improve ranking and recommendation performance. Since service selection is highly dependent on the composition process, personal knowledge is accumulated from previous service compositions and shared via collaborative filtering, in which a set of users with similar interests is first identified. A web service re-ranking mechanism is then employed for personalised recommendation. Experimental studies are conducted and analysed to demonstrate the promising potential of this research.
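
A minimal sketch of the re-ranking idea, under assumptions the abstract does not spell out (profiles here are simple service-usage counts accumulated from past compositions, and similarity is cosine similarity): the most similar users are identified first, and candidate services are then re-ranked by similarity-weighted usage among those neighbours.

    import numpy as np

    def rerank(candidates, target_profile, profiles, k=3):
        """Re-rank candidate service ids for one user via collaborative filtering.
        profiles: (n_users, n_services) usage counts from past compositions."""
        sims = profiles @ target_profile / (
            np.linalg.norm(profiles, axis=1) * np.linalg.norm(target_profile) + 1e-9)
        neighbours = np.argsort(sims)[-k:]                  # k most similar users
        scores = sims[neighbours] @ profiles[neighbours]    # similarity-weighted usage per service
        return sorted(candidates, key=lambda s: -scores[s])

    profiles = np.array([[3, 0, 1, 0], [2, 1, 0, 0], [0, 4, 0, 2], [0, 3, 1, 2]])
    print(rerank(candidates=[1, 2, 3], target_profile=np.array([0, 2, 0, 1.0]),
                 profiles=profiles, k=2))

In the paper the profile information comes from the composition process itself; the toy usage matrix here only illustrates the re-ranking mechanics.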

Relevance: 100.00%

Abstract:

Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo (MCMC) algorithms and compared using Bayesian model selection methods. The results suggest that oil prices are the long-run driver of Brazilian sugar prices, that the adjustment of sugar and ethanol prices to oil prices is nonlinear, and that the adjustment between ethanol and sugar prices is linear.
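
As a sketch of the model class (the notation here is assumed, not taken from the paper), a generalized bivariate error correction model for log sugar prices s_t and log oil prices o_t with potentially nonlinear adjustment can be written as

    \Delta s_t = \mu_s + f_s(z_{t-1}) + \sum_{i=1}^{p} \gamma_{s,i} \Delta s_{t-i} + \sum_{i=1}^{p} \delta_{s,i} \Delta o_{t-i} + \varepsilon_{s,t}
    \Delta o_t = \mu_o + f_o(z_{t-1}) + \sum_{i=1}^{p} \gamma_{o,i} \Delta o_{t-i} + \sum_{i=1}^{p} \delta_{o,i} \Delta s_{t-i} + \varepsilon_{o,t}
    z_{t-1} = s_{t-1} - \beta_0 - \beta_1 o_{t-1}

where z_{t-1} is the disequilibrium error from the cointegrating relationship and f_s, f_o are adjustment functions; the linear error correction model is recovered when f(z) = \alpha z, and Bayesian model comparison then discriminates between linear and nonlinear forms of f.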

Relevance: 100.00%

Abstract:

In this paper, we present a feature selection approach based on Gabor wavelet features and boosting for face verification. By convolution with a group of Gabor wavelets, the original images are transformed into vectors of Gabor wavelet features. Then, for each individual, a small set of significant features is selected by the boosting algorithm from the large set of Gabor wavelet features. The experimental results show that the approach selects meaningful and explainable features for face verification. They also suggest that common characteristics such as the eyes, nose, and mouth may be less important than unique characteristics when the training set is small; when the training set is large, both the unique and the common characteristics are important.
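
A minimal sketch of per-person feature selection in this spirit, using a hand-built Gabor kernel bank and an off-the-shelf boosted-stump classifier in place of the paper's specific boosting algorithm (all parameter values and the random toy data below are illustrative assumptions):

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def gabor_kernel(size, theta, freq, sigma):
        """Real part of a Gabor wavelet with orientation theta and spatial frequency freq."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

    def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), freqs=(0.1, 0.25)):
        """Convolve an image with a small Gabor bank and return filter-response
        magnitudes sampled on a coarse grid of locations."""
        feats = []
        for th in thetas:
            for f in freqs:
                k = gabor_kernel(15, th, f, sigma=4.0)
                resp = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))
                feats.append(resp[::8, ::8].ravel())        # subsample responses
        return np.concatenate(feats)

    # X: Gabor feature vectors; y: 1 for the target person, 0 for impostors (random toy data)
    rng = np.random.default_rng(1)
    X = np.stack([gabor_features(rng.random((32, 32))) for _ in range(40)])
    y = np.array([1] * 20 + [0] * 20)
    boost = AdaBoostClassifier(n_estimators=30).fit(X, y)    # boosting over decision stumps
    selected = np.argsort(boost.feature_importances_)[-10:]  # indices of the most useful features
    print(selected)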

Relevance: 100.00%

Abstract:

This paper examines the selectivity and market-timing performance of a sample of 21 UK property funds over the period Q3 1977 to Q2 1987. The main finding is that there is evidence of some superior selectivity performance on the part of UK property funds, but few funds are able to time the market successfully.
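
The abstract does not name the evaluation model; a common way to separate the two effects, sketched here with assumed notation rather than the paper's own specification, is a Treynor-Mazuy style regression in which \alpha_p captures selectivity and \gamma_p captures market timing:

    R_{p,t} - R_{f,t} = \alpha_p + \beta_p (R_{m,t} - R_{f,t}) + \gamma_p (R_{m,t} - R_{f,t})^2 + \varepsilon_{p,t}

where R_{p,t}, R_{m,t} and R_{f,t} are the fund, market and risk-free returns; a significantly positive \alpha_p indicates superior selectivity, and a significantly positive \gamma_p indicates successful market timing.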

Relevance: 100.00%

Abstract:

This review is an output of the International Life Sciences Institute (ILSI) Europe Marker Initiative, which aims to identify evidence-based criteria for selecting adequate measures of nutrient effects on health through a comprehensive literature review. Experts in cognitive and nutrition sciences examined the applicability of these proposed criteria to the field of cognition with respect to the various cognitive domains usually assessed to reflect brain or neurological function. This review covers cognitive domains important in the assessment of neuronal integrity and function, commonly used tests and their state of validation, and the application of the measures to studies of nutrition and nutritional intervention trials. The aim is to identify domain-specific cognitive tests that are sensitive to nutrient interventions and from which guidance can be provided to aid the application of selection criteria for choosing the most suitable tests for proposed nutritional intervention studies using cognitive outcomes. The material in this review serves as a background and guidance document for nutritionists, neuropsychologists, psychiatrists, and neurologists interested in assessing mental health in terms of cognitive test performance and for scientists intending to test the effects of food or food components on cognitive function.

Relevance: 100.00%

Abstract:

A predictability index was defined as the ratio of the variance of the optimal prediction to the variance of the original time series by Granger and Anderson (1976) and Bhansali (1989). A new simplified algorithm for estimating the predictability index is introduced, and the new estimator is shown to be a simple and effective tool in predictability-ranking applications and as an aid in the preliminary analysis of time series. The relationship between the predictability index and the pole positions and lag p of a time series that can be modelled as an AR(p) process is also investigated. The effectiveness of the algorithm is demonstrated using numerical examples, including an application to stock prices.
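
As an illustration (the estimation details below are assumptions, since the abstract does not state the algorithm): for a series fitted by an AR(p) model, the variance of the optimal one-step prediction equals the series variance minus the innovation variance, so the index can be estimated as one minus the ratio of residual variance to series variance.

    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    def predictability_index(x, p):
        """Ratio var(optimal one-step prediction) / var(series) under an AR(p) fit,
        computed as 1 - (innovation variance / series variance)."""
        res = AutoReg(x, lags=p, trend="c").fit()
        return 1.0 - np.var(res.resid) / np.var(x)

    # Toy example: a strongly autocorrelated AR(1) series is highly predictable
    rng = np.random.default_rng(2)
    x = np.zeros(2000)
    e = rng.normal(size=2000)
    for t in range(1, 2000):
        x[t] = 0.9 * x[t - 1] + e[t]
    print(round(predictability_index(x, p=1), 3))   # roughly 0.81 for phi = 0.9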

Relevance: 100.00%

Abstract:

This article describes a case study involving information technology managers and their new programmer recruitment policy, but the primary interest is methodological. The processes of issue generation and selection and model conceptualization are described. Early use of “magnetic hexagons” allowed the generation of a range of issues, most of which would not have emerged if system dynamics elicitation techniques had been employed. With the selection of a specific issue, flow diagraming was used to conceptualize a model, computer implementation and scenario generation following naturally. Observations are made on the processes of system dynamics modeling, particularly on the need to employ general techniques of knowledge elicitation in the early stages of interventions. It is proposed that flexible approaches should be used to generate, select, and study the issues, since these reduce any biasing of the elicitation toward system dynamics problems and also allow the participants to take up the most appropriate problem-structuring approach.

Relevance: 90.00%

Abstract:

In this article, we examine the case of a system that cooperates with a “direct” user to plan an activity that some “indirect” user, not interacting with the system, should perform. The specific application we consider is the prescription of drugs. In this case, the direct user is the prescriber and the indirect user is the person who is responsible for performing the therapy. Relevant characteristics of the two users are represented in two user models. Explanation strategies are represented in planning operators whose preconditions encode the cognitive state of the indirect user; this allows tailoring the message to the indirect user's characteristics. Expansion of optional subgoals and selection among candidate operators are made by applying decision criteria represented as metarules, which negotiate between the direct and indirect users' views while also taking into account the context in which the explanation is provided. After the message has been generated, the direct user may ask to add or remove some items, or to change the message style. The system defends the indirect user's needs as far as possible by mentioning the rationale behind the generated message. If needed, the plan is repaired and the direct user model is revised accordingly, so that the system progressively learns to generate messages suited to the preferences of the people with whom it interacts.
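
One way to picture the operator and metarule machinery described above is the following purely illustrative sketch; the class names, fields, and the drug-therapy toy example are assumptions, not the system's actual representation.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Operator:
        """A candidate explanation step; preconditions test the indirect user's
        cognitive state as recorded in the indirect user model."""
        name: str
        preconditions: List[Callable[[Dict], bool]]
        content: str

    def applicable(op: Operator, indirect_user: Dict) -> bool:
        return all(check(indirect_user) for check in op.preconditions)

    def select(candidates: List[Operator], indirect_user: Dict,
               metarules: List[Callable[[Operator], int]]) -> Operator:
        """Choose among applicable operators by summing metarule scores; the
        metarules are where the direct and indirect users' views are negotiated."""
        viable = [op for op in candidates if applicable(op, indirect_user)]
        return max(viable, key=lambda op: sum(rule(op) for rule in metarules))

    patient = {"knows_dosage_units": False}
    ops = [Operator("terse", [lambda u: u["knows_dosage_units"]], "Take 2 x 200 mg/day."),
           Operator("plain", [lambda u: True], "Take one tablet each morning with food.")]
    prefer_plain_language = lambda op: 1 if op.name == "plain" else 0   # toy metarule
    print(select(ops, patient, [prefer_plain_language]).name)           # -> "plain"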

Relevance: 90.00%

Abstract:

The genome of the plant-colonizing bacterium Pseudomonas fluorescens SBW25 harbors a subset of genes that are expressed specifically on plant surfaces. The function of these genes is central to the ecological success of SBW25, but their study poses significant challenges because no phenotype is discernable in vitro. Here, we describe a genetic strategy with general utility that combines suppressor analysis with IVET (SPyVET) and provides a means of identifying regulators of niche-specific genes. Central to this strategy are strains carrying operon fusions between plant environment-induced loci (EIL) and promoterless 'dapB. These strains are prototrophic in the plant environment but auxotrophic on laboratory minimal medium. Regulatory elements were identified by transposon mutagenesis and selection for prototrophs on minimal medium. Approximately 10^6 mutants were screened for each of the 27 strains carrying 'dapB fusions to plant EIL, and the insertion point of the transposon was determined in approximately 2,000 putative regulator mutants. Regulators were functionally characterized and used to provide insight into EIL phenotypes. For one strain carrying a fusion to the cellulose-encoding wss operon, five different regulators were identified, including a diguanylate cyclase, the flagellar activator FleQ, and the alginate activator AmrZ (AlgZ). Further rounds of suppressor analysis, possible by virtue of the SPyVET strategy, revealed an additional two regulators, including the activator AlgR, and allowed the regulatory connections to be determined.

Relevance: 90.00%

Abstract:

Logistic models are studied as a tool to convert dynamical forecast information (deterministic and ensemble) into probability forecasts. A logistic model is obtained by setting the logarithmic odds ratio equal to a linear combination of the inputs. As with any statistical model, logistic models will suffer from overfitting if the number of inputs is comparable to the number of forecast instances. Computational approaches to avoid overfitting by regularization are discussed, and efficient techniques for model assessment and selection are presented. A logit version of the lasso (originally a linear regression technique) is discussed. In lasso models, less important inputs are identified and the corresponding coefficients are set to zero, providing an efficient and automatic model reduction procedure. For the same reason, lasso models are particularly appealing for diagnostic purposes.
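
A minimal sketch of the logit-lasso idea using an off-the-shelf L1-penalised logistic regression (the toy inputs, outcome model, and regularisation strength below are assumptions for illustration): ensemble-derived quantities enter as inputs, and the L1 penalty drives the coefficients of uninformative inputs to zero.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 500
    ens_mean = rng.normal(size=n)                # ensemble mean forecast
    ens_spread = np.abs(rng.normal(size=n))      # ensemble spread
    noise = rng.normal(size=n)                   # deliberately uninformative input
    X = np.column_stack([ens_mean, ens_spread, noise])

    # Synthetic binary outcomes whose log-odds depend on the ensemble mean only
    p = 1.0 / (1.0 + np.exp(-(0.2 + 1.5 * ens_mean)))
    y = rng.random(n) < p

    # L1 penalty (lasso-style): less important inputs get coefficients at or near zero
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
    print(model.coef_)                           # inspect which inputs were retained
    print(model.predict_proba(X[:3])[:, 1])      # probability forecasts for three cases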

Relevance: 90.00%

Abstract:

This paper examines institutional sources of product innovation with reference to the online gaming sector of Korea and the UK. It examines the combined impact of formal and informal institutions and their interaction through multiple case studies. Despite the growing importance of innovative products in contemporary entertainment (including interactive games), the ‘informal’ source of innovation has attracted limited attention. By looking closely at the idea exploration, generation and selection process (where creativity plays a major role), we intend to find out how values and public policy affect product innovation. This study shows that the values of Korean and UK online gaming firms (regardless of their different socio-economic contexts) play an important role in generating product innovation. An additional point is that Korean firms are likely to take advantage of government policy support to overcome inadequate institutional settings in conjunction with the initial conditions of online game development.