40 results for INTEGRABLE GENERALIZATION

in Deakin Research Online - Australia


Relevance: 20.00%

Abstract:

The lasso procedure is a shrinkage and variable selection method for estimators. This paper shows that there always exists an interval of tuning parameter values for which the mean squared prediction error of the lasso estimator is smaller than that of the ordinary least squares estimator. For an estimator satisfying a suitable condition, such as unbiasedness, the paper defines a corresponding generalized lasso estimator; its mean squared prediction error is shown to be smaller than that of the original estimator for tuning parameter values in some interval. This implies that no unbiased estimator is admissible. Simulation results for five models support the theoretical findings.
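For reference, the lasso estimator discussed here is standardly defined as the penalized least-squares solution below; the notation (design matrix X, response y, tuning parameter λ) is the conventional one and may differ from the paper's:

\hat{\beta}_{\mathrm{lasso}}(\lambda) = \arg\min_{\beta} \; \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1

Setting λ = 0 recovers ordinary least squares, so the abstract's claim is that some interval of strictly positive λ values yields a strictly smaller mean squared prediction error than λ = 0.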

Relevance: 20.00%

Abstract:

Data pre-processing plays a key role in the performance of learning algorithms. In this research we consider data pre-processing by normalization for Support Vector Machines (SVMs). We examine the effect of normalization across 112 classification problems with SVMs using the RBF kernel and observe a significant improvement in classification performance due to normalization. Finally, we suggest a rule-based method to determine when normalization is necessary for a specific classification problem; the best normalization method is also selected automatically by the SVM itself.
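A minimal sketch of the comparison the abstract describes, assuming scikit-learn; the dataset and hyperparameters here are illustrative, not the 112 problems used in the paper:

```python
# Compare an RBF-kernel SVM with and without z-score normalization.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)

raw = SVC(kernel="rbf")                                       # unnormalized inputs
scaled = make_pipeline(StandardScaler(), SVC(kernel="rbf"))   # normalized inputs

print("raw   :", cross_val_score(raw, X, y, cv=5).mean())
print("scaled:", cross_val_score(scaled, X, y, cv=5).mean())
```

On features with scales as disparate as the wine data's, the normalized pipeline typically scores noticeably higher, which is the effect the paper quantifies.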

Relevance: 20.00%

Abstract:

This thesis develops a novel framework of nonlinear modelling that adaptively fits the complexity of the model to the problem domain, resulting in better modelling capability and more straightforward knowledge acquisition. The framework also increases the comprehensibility and user acceptability of modelling results.

Relevance: 20.00%

Abstract:

This paper presents a simple conceptualization of generalization, called other-settings generalization, that is valid for any IS researcher who claims that his or her results have applicability beyond the sample from which the data were collected. An other-settings generalization is the researcher’s act of arguing, based on the representativeness of the sample, that there is a reasonable expectation that a knowledge claim already believed to be true in one or more settings is also true in other clearly defined settings. Features associated with this conceptualization of generalization include (a) recognition that all human knowledge is bounded, (b) recognition that all knowledge claims, including generalizations, are subject to revision, (c) an ontological assumption that objective reality exists, (d) a scientific-realist definition of truth, and (e) identification of the following three essential characteristics of sound other-settings generalizations: (1) the researcher must clearly define the larger set of things to which the generalization applies; (2) the justification for making other-settings generalizations ultimately depends on the representativeness of the sample, not on statistical inference; and (3) representativeness is judged by comparing the sample and the target population on key characteristics relevant to the proposition being generalized. The paper concludes with the recommendation that future empirical IS research should include an explicit discussion of the other-settings generalizability of its findings.

Relevance: 20.00%

Abstract:

This paper presents a framework for justifying generalization in information systems (IS) research. First, using evidence from an analysis of two leading IS journals, we show that the treatment of generalization in many empirical papers in leading IS research journals is unsatisfactory. Many quantitative studies need a clearer definition of their populations and more discussion of the extent to which ‘significant’ statistics and the use of non-probability sampling affect support for their knowledge claims. Many qualitative studies need more discussion of the boundary conditions for their sample-based general knowledge claims. Second, the proposed new framework is presented. It defines eight alternative logical pathways for justifying generalizations in IS research. Three key concepts underpin the framework: the need for researcher judgment when making any claim about the likely truth of sample-based knowledge claims in other settings; the importance of sample representativeness and its assessment in terms of the knowledge claim of interest; and the desirability of integrating a study’s general knowledge claims with those from prior research. Finally, we show how the framework may be applied by researchers and reviewers. Observing the pathways in the framework has the potential to improve both the rigour and the practical relevance of IS research.

Relevance: 20.00%

Abstract:

The mean defined by Bonferroni in 1950 (and known by his name) averages the products of all pairs of distinct inputs. Its generalizations to date capture behaviors that may be desired in some decision-making contexts, such as the ability to model mandatory requirements. In this paper, we propose a composition that averages conjunctions between the respective means of a designated subset-size partition. We investigate the behavior of such a function and note the relationships within a given family as the subset size is changed. We find that the proposed function handles multiple mandatory requirements or mandatory input sets more intuitively.
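For reference, the classical Bonferroni mean that the paper generalizes is usually written as

B^{p,q}(x_1, \ldots, x_n) = \left( \frac{1}{n(n-1)} \sum_{i \neq j} x_i^{p} \, x_j^{q} \right)^{\frac{1}{p+q}},

i.e. an average over the products of all pairs of distinct inputs; the proposed composition replaces these pairwise products with conjunctions between means computed over a designated subset-size partition.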

Relevance: 20.00%

Abstract:

This edited two-volume collection presents, from the 30-year publication history of the Journal of Information Technology, the most interesting and compelling articles on the formulation of research methods used to study information systems.

Relevance: 10.00%

Abstract:

We develop an approach to jump codes that concentrates on their combinatorial and symmetry properties. The main result is a generalization of a theorem previously proved in the context of isodual codes, and we show that several previously constructed jump codes are instances of this theorem.

Relevance: 10.00%

Abstract:

This paper considers the Cardinality Constrained Quadratic Knapsack Problem (QKP) and the Quadratic Selective Travelling Salesman Problem (QSTSP). The QKP is a generalization of the Knapsack Problem and the QSTSP is a generalization of the Travelling Salesman Problem; both problems are therefore NP-hard. The QSTSP and the QKP can be solved using branch-and-cut methods, and good bounds can be obtained if strong constraints are used. Hence it is important to identify strong, or even facet-defining, constraints. This paper studies the polyhedral combinatorics of the QSTSP and the QKP: among other results, we identify facet-defining constraints for both problems and provide mathematical proofs that they do indeed define facets.
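As an illustrative formulation (the symbols here are generic, not taken from the paper), the cardinality constrained QKP can be written as

\max \; \sum_{i=1}^{n} p_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} q_{ij} x_i x_j
\quad \text{s.t.} \quad \sum_{i=1}^{n} w_i x_i \le c, \qquad \sum_{i=1}^{n} x_i \le k, \qquad x_i \in \{0, 1\},

where the quadratic objective rewards selecting profitable pairs of items, the knapsack constraint limits total weight, and the cardinality constraint bounds the number of selected items. Facet-defining inequalities for the convex hull of such feasible solutions are what yield the strong branch-and-cut bounds the abstract refers to.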

Relevance: 10.00%

Abstract:

Compared with conventional two-class learning schemes, one-class classification uses only a single class in the classifier training phase. Applying one-class classification to learn from unbalanced data sets is regarded as recognition-based learning and has been shown to have the potential to achieve better performance. As in two-class learning, parameter selection is a significant issue, especially when the classifier is sensitive to its parameters. For one-class learning schemes with a kernel function, such as the one-class Support Vector Machine and Support Vector Data Description, there is, besides the kernel parameters, another one-class-specific parameter: the rejection rate ν. In this paper, we propose a general framework that involves the majority class in solving the parameter selection problem. In this framework, we first use the minority (target) class for training in the one-class classification stage; we then use both the minority and majority classes to estimate the generalization performance of the constructed classifier, and this generalization performance serves as the optimization criterion. We employ grid search and experiment design search to explore various parameter settings. Experiments on UCI and Reuters text data show that the parameter-optimized one-class classifiers outperform all the standard one-class learning schemes we examined.
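A minimal sketch of the framework, assuming scikit-learn and toy Gaussian data (grid search only; the experiment design search is omitted):

```python
# Train a one-class SVM on the minority (target) class only, then score each
# parameter setting on BOTH classes, as the framework prescribes.
import numpy as np
from sklearn.metrics import f1_score
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_min = rng.normal(0.0, 1.0, size=(50, 2))     # minority / target class
X_maj = rng.normal(4.0, 1.0, size=(500, 2))    # majority class
X_eval = np.vstack([X_min, X_maj])
y_eval = np.r_[np.ones(len(X_min)), -np.ones(len(X_maj))]  # +1 = target

best = None
for nu in (0.05, 0.1, 0.2):           # the one-class rejection rate
    for gamma in (0.1, 1.0, 10.0):    # RBF kernel width
        clf = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(X_min)
        score = f1_score(y_eval, clf.predict(X_eval), pos_label=1)
        if best is None or score > best[0]:
            best = (score, nu, gamma)

print("best F1 = %.3f at nu = %s, gamma = %s" % best)
```

The key point is that fitting uses only X_min, while model selection uses the mixed evaluation set, so the majority class informs the choice of ν and the kernel parameter without ever entering training.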

Relevance: 10.00%

Abstract:

One of the main problems with Artificial Neural Networks (ANNs) is that their results are not intuitively clear. For example, commonly used hidden neurons with a sigmoid activation function can approximate any continuous function, including linear functions, but the coefficients (weights) of this approximation are rather meaningless. To address this problem, the current paper presents a novel kind of neural network that uses transfer functions of various complexities, in contrast to the single transfer function used in sigmoid and hyperbolic tangent networks. The presence of transfer functions of various complexities in a Mixed Transfer Functions Artificial Neural Network (MTFANN) allows easy conversion of the full model into a user-friendly equation format (similar to that of linear regression) without any pruning or simplification of the model. At the same time, MTFANN maintains generalization ability similar to that of mono-transfer-function networks in a global optimization context. The performance and knowledge extraction of MTFANN were evaluated on a realistic simulation of the Puma 560 robot arm and compared to sigmoid, hyperbolic tangent, linear and sinusoidal networks.
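A hypothetical sketch of the mixed-transfer-function idea (the layer size and the particular mix of functions are assumptions, not the paper's MTFANN specification):

```python
# One hidden layer whose units use transfer functions of different
# complexities: linear, sine, and sigmoid.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

transfers = [lambda z: z, np.sin, sigmoid]   # one function per hidden unit

def mtf_forward(x, W_in, b_in, w_out, b_out):
    """Forward pass: hidden unit i applies its own transfer function."""
    pre = W_in @ x + b_in                    # pre-activations, shape (3,)
    hidden = np.array([f(p) for f, p in zip(transfers, pre)])
    return float(w_out @ hidden + b_out)     # scalar output

rng = np.random.default_rng(0)
x = rng.normal(size=3)
print(mtf_forward(x, rng.normal(size=(3, 3)), rng.normal(size=3),
                  rng.normal(size=3), 0.0))
```

Because every unit is an explicit elementary function, the trained model can be written out directly as a closed-form equation, which is the comprehensibility benefit the abstract claims.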

Relevance: 10.00%

Abstract:

Compared with conventional two-class learning schemes, one-class classification uses only a single class for training. Applying one-class classification to the minority class in imbalanced data has been shown to achieve better performance than two-class approaches. In this paper, in order to make the best use of all the available information during the learning procedure, we propose a general framework that first uses the minority class for training in the one-class classification stage, and then uses both the minority and majority classes to estimate the generalization performance of the constructed classifier. Based upon this generalization performance measurement, a parameter search algorithm selects the best parameter settings for the classifier. Experiments on UCI and Reuters text data show that a one-class SVM embedded in this framework achieves much better performance than the standard one-class SVM alone and than other learning schemes such as one-class Naive Bayes, one-class nearest neighbour and neural networks.

Relevance: 10.00%

Abstract:

Accurate prediction of the roll separating force is critical to assuring the quality of the final product in steel manufacturing. This paper presents an ensemble model that addresses this need. A stacked generalization approach to ensemble modelling is used with two sets of ensemble members: the first set is learnt from the current input-output data of the hot rolling finishing mill, while the second also uses the available information on the previous coil. Both sets of ensemble members include linear regression, multilayer perceptron, and k-nearest neighbour algorithms. A competitive selection model (a multilayer perceptron) is then used to select the output of one of the ensemble members as the final output of the ensemble model. The ensemble model created by such stacked generalization achieves very high accuracy in predicting the roll separating force, with average relative accuracy within 1% of the actual measured roll force.
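A minimal sketch of stacked generalization in this spirit, assuming scikit-learn and synthetic data; note that StackingRegressor blends member outputs through the meta-model, whereas the paper's competitive selection model picks a single member's output:

```python
# Base learners mirror those named in the abstract: linear regression,
# multilayer perceptron, and k-nearest neighbours.
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)

ensemble = StackingRegressor(
    estimators=[
        ("lin", LinearRegression()),
        ("mlp", MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
        ("knn", KNeighborsRegressor(n_neighbors=5)),
    ],
    final_estimator=MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)

print("mean CV R^2:", cross_val_score(ensemble, X, y, cv=3).mean())
```

Here the base learners are trained on the input data and the meta-model learns from their out-of-fold predictions, which is the core of the stacked generalization scheme the abstract describes.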