985 results for ranking method
Abstract:
VHB-JOURQUAL represents the official journal ranking of the German Academic Association for Business Research. Since its introduction in 2003, the ranking has become the most influential journal evaluation approach in German-speaking countries, impacting several key managerial decisions of German, Austrian, and Swiss business schools. This article reports the methodological approach of the ranking’s second edition. It also presents the main results and additional analyses on the validity of the rating and the underlying decision processes of the respondents. Selected implications for researchers and higher-education institutions are discussed.
Abstract:
Data mining projects that use decision trees to classify test cases typically rank the classified cases by the probabilities the trees provide. A better method is needed for ranking cases already classified by a binary decision tree, because these probabilities are not always accurate or reliable enough. One reason is that the probability estimates computed by existing decision tree algorithms are identical for all cases falling into the same leaf of the tree, so they cannot serve as an accurate means of deciding whether a test case has been correctly classified. Isabelle Alvarez has proposed a new method for ranking the test cases classified by a binary decision tree [Alvarez, 2004]. In this paper we present the results of a comparison of ranking methods based on the probability estimate, on the sensitivity of a particular case, or on both.
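A minimal sketch of the tie problem described above, assuming scikit-learn and synthetic data (neither is used in the paper); Alvarez's sensitivity-based ranking itself is not reproduced here:

```python
# Sketch: ranking classified cases by decision-tree probability estimates.
# Assumes scikit-learn and synthetic data (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:400], y[:400])

X_test = X[400:]
proba = tree.predict_proba(X_test)[:, 1]   # P(class 1) for each test case
leaves = tree.apply(X_test)                # leaf id for each test case

# All cases in the same leaf share one probability estimate, hence ties:
for leaf in np.unique(leaves):
    print(f"leaf {leaf}: {np.unique(proba[leaves == leaf])}")

# Ranking by these coarse scores (ties broken arbitrarily):
ranking = np.argsort(-proba)
```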
Abstract:
Two hazard risk assessment matrices for the ranking of occupational health risks are described. The qualitative matrix uses qualitative measures of probability and consequence to determine risk assessment codes for hazard-disease combinations. A walk-through survey of an underground metalliferous mine and concentrator is used to demonstrate how the qualitative matrix can be applied to determine priorities for the control of occupational health hazards. The semi-quantitative matrix uses attributable risk as a quantitative measure of probability and uses qualitative measures of consequence. A practical application of this matrix is the determination of occupational health priorities using existing epidemiological studies. Calculated attributable risks from epidemiological studies of hazard-disease combinations in mining and minerals processing are used as examples. These historic response data do not reflect the risks associated with current exposures. A method using current exposure data, known exposure-response relationships and the semi-quantitative matrix is proposed for more accurate and current risk rankings.
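A toy sketch of how a qualitative risk matrix of this kind can be coded; the probability and consequence categories, the code thresholds, and the example hazards below are illustrative assumptions, not the article's actual matrix:

```python
# Illustrative qualitative risk matrix: qualitative probability and
# consequence categories map to a risk assessment code used to
# prioritise hazard-disease combinations. (Categories and code
# assignments are assumptions for illustration.)
PROBABILITY = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCE = ["negligible", "minor", "moderate", "major", "catastrophic"]

def risk_code(probability: str, consequence: str) -> int:
    """Return a risk assessment code 1 (highest priority) .. 4 (lowest)."""
    score = PROBABILITY.index(probability) + CONSEQUENCE.index(consequence)
    if score >= 6:
        return 1
    if score >= 4:
        return 2
    if score >= 2:
        return 3
    return 4

# Rank hazard-disease combinations by their code (hypothetical entries):
hazards = {("silica dust", "silicosis"): ("likely", "major"),
           ("noise", "hearing loss"): ("almost certain", "moderate")}
ranked = sorted(hazards, key=lambda h: risk_code(*hazards[h]))
print(ranked)
```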
Abstract:
Performance evaluation plays an increasingly important role in any organizational environment. In the transport sector, drivers are the company's public face, so it is important to develop their performance and their commitment to the company's goals. Such an evaluation can be used to motivate drivers to improve their performance and to identify training needs. This work aims to create a performance appraisal model for drivers based on the multi-criteria decision aid methodology. The MMASSI (Multicriteria Methodology to Support Selection of Information Systems) methodology was adapted by using a template that supports the evaluation according to the freight transportation company under study. The evaluation process involved all drivers (the collaborators being evaluated), their supervisors, and the company management. The final output is a ranking of the drivers, based on their performance, for each of the scenarios used.
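A generic weighted-sum sketch of multi-criteria ranking; the criteria, weights, and scores are invented for illustration, and the adapted MMASSI template itself is not reproduced:

```python
# Generic weighted-sum multi-criteria sketch for ranking drivers.
# All criteria, weights, and scores below are hypothetical.
criteria_weights = {"safety": 0.4, "punctuality": 0.3,
                    "fuel_economy": 0.2, "customer_feedback": 0.1}

drivers = {   # criterion scores on a common 0-10 scale (invented)
    "driver_A": {"safety": 8, "punctuality": 7,
                 "fuel_economy": 6, "customer_feedback": 9},
    "driver_B": {"safety": 6, "punctuality": 9,
                 "fuel_economy": 8, "customer_feedback": 7},
}

def overall(scores):
    """Weighted sum of the criterion scores."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranking = sorted(drivers, key=lambda d: overall(drivers[d]), reverse=True)
print(ranking)   # drivers ordered from best to worst overall score
```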
Abstract:
The aim of this paper is twofold: firstly, to carry out a theoretical review of the most recent stated preference techniques used for eliciting consumers' preferences and, secondly, to compare the empirical results of two different stated preference discrete choice approaches. They differ in the measurement scale for the dependent variable and, therefore, in the estimation method, despite both using a multinomial logit. One of the approaches uses a complete ranking of full profiles (contingent ranking), that is, individuals must rank a set of alternatives from the most to the least preferred, and the other uses a first-choice rule in which individuals must select the most preferred option from a choice set (choice experiment). From the results we realize how important the measurement scale for the dependent variable becomes and to what extent procedure invariance is satisfied.
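A small sketch of the two likelihood contributions being compared, assuming a linear utility and invented utility values; the paper's data and estimated coefficients are not reproduced:

```python
# Sketch of the two stated-preference likelihoods: first-choice
# multinomial logit vs. exploded (rank-ordered) logit. Utilities invented.
import numpy as np

def mnl_choice_prob(V, chosen):
    """First-choice rule (choice experiment): P(chosen | choice set)."""
    expV = np.exp(V - V.max())
    return expV[chosen] / expV.sum()

def exploded_logit_prob(V, ranking):
    """Contingent ranking: the probability of a complete ranking is the
    product of successive first choices from the shrinking choice set."""
    remaining = list(range(len(V)))
    p = 1.0
    for alt in ranking:                 # ranking: most to least preferred
        p *= mnl_choice_prob(V[remaining], remaining.index(alt))
        remaining.remove(alt)
    return p

V = np.array([1.2, 0.4, -0.3])          # utilities of three alternatives
print(mnl_choice_prob(V, 0))            # choice-experiment likelihood term
print(exploded_logit_prob(V, [0, 2, 1]))  # contingent-ranking likelihood term
```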
Abstract:
Regional innovation is a complex phenomenon that often resides in the field of mutual interaction between local actors, and it has therefore traditionally been considered difficult to measure. This work applied the Data Envelopment Analysis (DEA) method, which has previously proven successful in cases where the relationships between the measured inputs and outputs are not obvious. A conceptual model of the inputs and outputs of regional innovation was created, and on its basis a set of 12 statistical indicators was selected. Using Eurostat as the data source, source data for eight of the indicators were obtained at the regional level, and the set was complemented with one national-level indicator. The evaluation was ultimately carried out for 45 European regions. The focus of the study was to assess the suitability of the DEA method for measuring an innovation system, as the method has not previously been applied to such a case. The first results showed generally excessively high efficiency scores. Corrective measures for improving the discriminatory power were introduced and applied, after which more realistic results and a ranking list of the evaluated regions were obtained. The DEA method was found to be an effective and interesting tool for developing evaluation practices and innovation policy, provided that the data availability problems are solved and the model itself is refined.
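A minimal input-oriented CCR DEA sketch, assuming scipy and random placeholder data rather than the Eurostat indicators used in the thesis:

```python
# Input-oriented CCR DEA via linear programming (scipy assumed; the data
# are random placeholders, not the thesis's Eurostat indicators).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_dmu, n_in, n_out = 45, 3, 2            # e.g. 45 regions
X = rng.uniform(1, 10, (n_in, n_dmu))    # inputs  (rows: inputs, cols: DMUs)
Y = rng.uniform(1, 10, (n_out, n_dmu))   # outputs (rows: outputs, cols: DMUs)

def ccr_efficiency(o):
    """min theta s.t. X@lam <= theta * X[:, o], Y@lam >= Y[:, o], lam >= 0."""
    c = np.r_[1.0, np.zeros(n_dmu)]                 # minimise theta
    A_in = np.c_[-X[:, [o]], X]                     # X@lam - theta*x_o <= 0
    A_out = np.c_[np.zeros((n_out, 1)), -Y]         # -Y@lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(n_in), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n_dmu))
    return res.fun                                  # theta* in (0, 1]

scores = np.array([ccr_efficiency(o) for o in range(n_dmu)])
ranking = np.argsort(-scores)   # regions ranked by estimated efficiency
```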
Abstract:
Machine learning provides tools for automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction, and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the best-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
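A minimal numpy sketch of linear RankRLS on synthetic data, using the closed-form solution of the regularized pairwise least-squares objective; the thesis's kernelized algorithms and matrix-algebra shortcuts are not reproduced:

```python
# Linear RankRLS sketch (numpy only, synthetic data): minimise
#   sum_{i,j} ((w.x_i - w.x_j) - (y_i - y_j))^2 + lam * ||w||^2.
# With the Laplacian L = n*I - 1*1^T this has the closed form
#   w = (X^T L X + lam*I)^{-1} X^T L y   (up to a constant rescaling of lam).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)  # scores to rank by

lam = 1.0
L = n * np.eye(n) - np.ones((n, n))   # encodes all pairwise differences
w = np.linalg.solve(X.T @ L @ X + lam * np.eye(d), X.T @ L @ y)

scores = X @ w
ranking = np.argsort(-scores)         # predicted order of the n objects
```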
Abstract:
The value of online business has grown to over one trillion USD. This thesis is about search engine optimization, whose focus is to increase search engine rankings. Search engine optimization is an important branch of online marketing because the first page of search engine results generates the majority of search traffic. Current articles about search engine optimization and Google indicate that the proper use of quality content has the potential to improve search engine rankings. However, the existing search engine optimization literature does not address content at a sufficient level. To close that gap, a content-centered method for search engine optimization is constructed, and the role of content in search engine optimization is studied. The content-centered method consists of three search engine optimization tactics: 1) content, 2) keywords, and 3) links. Two propositions were used to test these tactics in a real business environment, and the results suggest that the content-centered method improves search engine rankings. Search engine optimization is constantly changing because Google adjusts its search algorithm regularly. Still, some long-term trends can be recognized. Google has said that content will become more important as a ranking factor in the future. The content-centered method takes advantage of this trend, which should keep it relevant for years to come.
Abstract:
This article explores how data envelopment analysis (DEA), along with a smoothed bootstrap method, can be used in applied analysis to obtain more reliable efficiency rankings for farms. The main focus is the smoothed homogeneous bootstrap procedure introduced by Simar and Wilson (1998) to implement statistical inference for the original efficiency point estimates. Two main model specifications, constant and variable returns to scale, are investigated along with various choices regarding data aggregation. The coefficient of separation (CoS), a statistic that indicates the degree of statistical differentiation within the sample, is used to demonstrate the findings. The CoS suggests a substantive dependency of the results on the methodology and assumptions employed. Accordingly, some observations are made on how to conduct DEA in order to get more reliable efficiency rankings, depending on the purpose for which they are to be used. In addition, attention is drawn to the ability of the SLICE MODEL, implemented in GAMS, to enable researchers to overcome the computational burdens of conducting DEA (with bootstrapping).
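A structural sketch of bootstrap inference for efficiency rankings, with a simple output/input ratio standing in for the DEA estimator and plain resampling standing in for the smoothed bootstrap of Simar and Wilson (1998), so only the inferential structure is shown:

```python
# Bootstrap confidence intervals for efficiency scores (numpy only,
# synthetic data; the DEA estimator and the smoothed resampling step
# are replaced by simple stand-ins for illustration).
import numpy as np

rng = np.random.default_rng(0)
n_farms = 30
inputs = rng.uniform(1, 10, n_farms)
outputs = rng.uniform(1, 10, n_farms)

ratio = outputs / inputs
point = ratio / ratio.max()              # point estimates, best farm = 1.0

B = 2000
boot = np.empty((B, n_farms))
for b in range(B):
    idx = rng.integers(0, n_farms, n_farms)     # resample reference set
    boot[b] = ratio / (ratio[idx]).max()        # re-estimate efficiencies

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
# Farms whose intervals overlap cannot be statistically separated;
# the coefficient of separation summarises how much separation exists.
overlap = (lo[:, None] <= hi[None, :]) & (lo[None, :] <= hi[:, None])
```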
Abstract:
The steadily accumulating literature on technical efficiency in fisheries attests to the importance of efficiency as an indicator of fleet condition and as an object of management concern. In this paper, we extend previous work by presenting a Bayesian hierarchical approach that yields both efficiency estimates and, as a byproduct of the estimation algorithm, probabilistic rankings of the relative technical efficiencies of fishing boats. The estimation algorithm is based on recent advances in Markov Chain Monte Carlo (MCMC) methods—Gibbs sampling, in particular—which have not been widely used in fisheries economics. We apply the method to a sample of 10,865 boat trips in the US Pacific hake (or whiting) fishery during 1987–2003. We uncover systematic differences between efficiency rankings based on sample mean efficiency estimates and those that exploit the full posterior distributions of boat efficiencies to estimate the probability that a given boat has the highest true mean efficiency.
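A small numpy sketch of the probabilistic ranking step, with simulated draws standing in for the Gibbs sampler's posterior output:

```python
# Probabilistic ranking from posterior draws (numpy only, simulated data):
# estimate the probability that each boat has the highest true mean
# efficiency, and compare with a ranking by posterior means.
import numpy as np

rng = np.random.default_rng(0)
n_draws, n_boats = 5000, 8
# Stand-in for MCMC (e.g. Gibbs) output: one column per boat.
posterior = rng.beta(a=rng.uniform(2, 8, n_boats), b=5.0,
                     size=(n_draws, n_boats))

best = posterior.argmax(axis=1)                  # best boat in each draw
p_best = np.bincount(best, minlength=n_boats) / n_draws

mean_rank = np.argsort(-posterior.mean(axis=0))  # ranking by posterior mean
prob_rank = np.argsort(-p_best)                  # ranking by P(best)
# The two orderings need not agree, which is the paper's point.
print(mean_rank, prob_rank)
```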
Abstract:
When studying hydrological processes with a numerical model, global sensitivity analysis (GSA) is essential if one is to understand the impact of model parameters and model formulation on results. However, different definitions of sensitivity can lead to a difference in the ranking of importance of the different model factors. Here we combine a fuzzy performance function with different methods of calculating global sensitivity to perform a multi-method global sensitivity analysis (MMGSA). We use an application of a finite element subsurface flow model (ESTEL-2D) on a flood inundation event on a floodplain of the River Severn to illustrate this new methodology. We demonstrate the utility of the method for model understanding and show how the prediction of state variables, such as Darcian velocity vectors, can be affected by such a MMGSA. This paper is a first attempt to use GSA with a numerically intensive hydrological model.
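A toy numpy illustration of the premise that different sensitivity definitions can rank the same factors differently; the model and both sensitivity measures are invented stand-ins, not ESTEL-2D or the paper's GSA methods:

```python
# Two sensitivity definitions, two rankings (numpy only, toy model).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1, x2, x3 = rng.uniform(-1, 1, (3, n))        # three model factors
y = x1 + 1.5 * x2**2 + 0.3 * x3                # toy "model" output

# Definition 1: rank by absolute linear correlation with the output.
corr = [abs(np.corrcoef(x, y)[0, 1]) for x in (x1, x2, x3)]

# Definition 2: rank by a crude variance-based measure, the variance of
# the conditional mean E[y | x_i] over quantile bins of x_i.
def cond_var(x, y, bins=20):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    return means.var() / y.var()

si = [cond_var(x, y) for x in (x1, x2, x3)]

# Correlation misses the purely nonlinear factor x2, so the rankings differ:
print(np.argsort(corr)[::-1], np.argsort(si)[::-1])
```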