40 results for INTEGRABLE GENERALIZATION


Relevance: 10.00%

Abstract:

In this work, we compare two generative models, the Gaussian Mixture Model (GMM) and the Hidden Markov Model (HMM), against a Support Vector Machine (SVM) classifier for the recognition of six daily human activities (i.e., standing, walking, running, jumping, falling, sitting down) from the signals of a single waist-worn tri-axial accelerometer. Using 4-fold cross-validation and testing on a total of thirteen subjects, the generative models achieve average recognition accuracies of 96.43% and 98.21% in the first experiment and 95.51% and 98.72% in the second, respectively. The results demonstrate that both the HMM and the GMM are not only able to learn but also capable of generalization, with the HMM outperforming the GMM in the recognition of daily activities from a single waist-worn tri-axial accelerometer. In addition, these two generative models enable the assessment of human activities based on acceleration signals of varying length.
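To illustrate how such per-activity generative classifiers can be trained and compared by log-likelihood, here is a minimal, hypothetical Python sketch using scikit-learn's GaussianMixture and hmmlearn's GaussianHMM; the activity names, window layout, and the numbers of mixture components and hidden states are assumptions for illustration, not the paper's actual settings.

```python
# Hypothetical sketch: per-activity GMM and HMM classifiers for tri-axial
# accelerometer windows, classified by maximum log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture
from hmmlearn.hmm import GaussianHMM

ACTIVITIES = ["standing", "walking", "running", "jumping", "falling", "sitting_down"]

def train_models(windows_by_activity, n_components=4, n_states=4):
    """windows_by_activity: dict activity -> list of (T_i, 3) acceleration windows."""
    gmms, hmms = {}, {}
    for act, windows in windows_by_activity.items():
        X = np.vstack(windows)                      # stack all samples, shape (sum T_i, 3)
        lengths = [w.shape[0] for w in windows]     # the HMM needs per-sequence lengths
        gmms[act] = GaussianMixture(n_components=n_components).fit(X)
        hmms[act] = GaussianHMM(n_components=n_states).fit(X, lengths)
    return gmms, hmms

def classify(window, models):
    """Pick the activity whose model assigns the highest log-likelihood to the window."""
    scores = {act: m.score(window) for act, m in models.items()}
    return max(scores, key=scores.get)
```

Because windows can have varying lengths, the HMM scores whole sequences while the GMM scores are per-sample averages; in this sketch each score is only compared within its own model family.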

Relevance: 10.00%

Abstract:

This article proposes a simple Nash program. Both our axiomatic characterization and our noncooperative procedure consider each distinct asymmetric and symmetric Nash solution. Our noncooperative procedure is a generalization of the simplest known sequential Nash demand game, analyzed by Rubinstein et al. (1992). We then provide the simplest known axiomatic characterization of the class of asymmetric Nash solutions, in which we use only Nash's crucial Independence of Irrelevant Alternatives axiom and an asymmetric modification of the well-known Midpoint Domination axiom.
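For intuition, the asymmetric Nash solution with bargaining weights alpha and 1 - alpha maximizes the weighted Nash product (u1 - d1)^alpha (u2 - d2)^(1-alpha) over the feasible set. Below is a small, hypothetical numerical sketch; the linear frontier, disagreement point, and weights are illustrative, not taken from the article.

```python
# Hypothetical sketch: asymmetric Nash bargaining solution on a simple
# linear frontier u1 + u2 = 1 with disagreement point d = (0, 0).
import numpy as np

def asymmetric_nash(alpha, d=(0.0, 0.0), grid=10001):
    """Maximize (u1 - d1)^alpha * (u2 - d2)^(1 - alpha) on the frontier u1 + u2 = 1."""
    u1 = np.linspace(d[0], 1.0 - d[1], grid)
    u2 = 1.0 - u1
    product = (u1 - d[0]) ** alpha * (u2 - d[1]) ** (1.0 - alpha)
    best = np.argmax(product)
    return u1[best], u2[best]

# The symmetric solution (alpha = 0.5) splits the surplus evenly;
# raising alpha shifts the outcome toward player 1.
print(asymmetric_nash(0.5))   # approx (0.5, 0.5)
print(asymmetric_nash(0.7))   # approx (0.7, 0.3)
```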

Relevance: 10.00%

Abstract:

A variety of type reduction (TR) algorithms have been proposed for interval type-2 fuzzy logic systems (IT2 FLSs). The existing literature focuses mainly on the computational requirements of TR algorithms, and researchers often favor computationally less expensive ones. This paper evaluates and compares five frequently used TR algorithms from a forecasting-performance perspective. The algorithms are judged by the generalization power of the IT2 FLS models developed using them. Four synthetic and real-world case studies with different levels of uncertainty are considered to examine the effects of TR algorithms on forecast accuracy. It is found that the Coupland-John TR algorithm leads to models with better forecasting performance. However, there is no clear relationship between the width of the type-reduced set and the TR algorithm.
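Type reduction maps the rule firing intervals and consequent centroids of an IT2 FLS to an interval [y_l, y_r] before defuzzification. As background, the sketch below shows the classic Karnik-Mendel iterative procedure, one of the commonly used TR algorithms (not necessarily one of the five evaluated in this paper); the two-rule example data are purely illustrative.

```python
# Hypothetical sketch: Karnik-Mendel type reduction to the interval [y_l, y_r].
import numpy as np

def km_type_reduce(y, f_lower, f_upper, tol=1e-9, max_iter=100):
    """y: consequent centroids sorted ascending; f_lower/f_upper: firing interval endpoints."""
    y = np.asarray(y, dtype=float)
    f_lower = np.asarray(f_lower, dtype=float)
    f_upper = np.asarray(f_upper, dtype=float)
    idx = np.arange(len(y))

    def endpoint(left):
        f = (f_lower + f_upper) / 2.0
        y_prime = f @ y / f.sum()
        for _ in range(max_iter):
            k = np.searchsorted(y, y_prime, side="right") - 1   # y[k] <= y' < y[k+1]
            if left:
                f = np.where(idx <= k, f_upper, f_lower)  # large weights on small centroids
            else:
                f = np.where(idx <= k, f_lower, f_upper)  # large weights on large centroids
            y_new = f @ y / f.sum()
            if abs(y_new - y_prime) < tol:
                return y_new
            y_prime = y_new
        return y_prime

    return endpoint(left=True), endpoint(left=False)

# Example: two rules with overlapping firing intervals.
print(km_type_reduce(y=[0.2, 0.8], f_lower=[0.3, 0.5], f_upper=[0.6, 0.9]))
```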

Relevance: 10.00%

Abstract:

A large corpus of data obtained through empirical studies of neuromuscular adaptation is currently of limited use to athletes and their coaches. One of the reasons lies in the unclear direct practical utility of many individual trials. This paper introduces a mathematical model of adaptation to resistance training, which derives its elements from physiological fundamentals on the one hand and empirical findings on the other. The key element of the proposed model is what is here termed the athlete's capability profile. This is a generalization of the length- and velocity-dependent force production characteristics of individual muscles to an exercise with arbitrary biomechanics. The capability profile, a two-dimensional function over the capability plane, plays the central role in the proposed model of the training-adaptation feedback loop. Together with a dynamic model of resistance, the capability profile is used in the model's predictive stage, in which exercise performance is simulated using a numerical approximation of the differential equations of motion. Simulation results are used to infer the adaptational stimulus, which manifests itself through a fed-back modification of the capability profile. It is shown how empirical evidence of exercise specificity can be formulated mathematically and integrated into this framework. A detailed description of the proposed model is followed by examples of its application, demonstrating new insights into the effects of accommodating loading for powerlifting. This is followed by a discussion of the limitations of the proposed model and an overview of avenues for future work.
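To make the predictive stage concrete, the sketch below numerically integrates a one-dimensional equation of motion for a lift, where a hypothetical capability profile F(x, v) gives the force the athlete can produce at bar position x and velocity v, and the resistance combines gravity with an accommodating (elastic band) component. The functional forms and constants are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: forward Euler simulation of a lift against
# gravity plus an accommodating (elastic) resistance.
G = 9.81          # m/s^2
MASS = 100.0      # barbell mass, kg (assumed)
BAND_K = 400.0    # elastic band stiffness, N/m (assumed)

def capability(x, v):
    """Illustrative capability profile F(x, v): producible force declines
    with bar height and with concentric velocity."""
    return 1800.0 * (1.0 - 0.5 * x) * max(0.0, 1.0 - v / 2.0)

def simulate_lift(x_end=0.6, dt=0.001, t_max=5.0):
    """Integrate m*x'' = F_athlete(x, v) - m*g - k*x until the bar reaches x_end."""
    x, v, t = 0.0, 0.0, 0.0
    while x < x_end and t < t_max:
        force = capability(x, v) - MASS * G - BAND_K * x
        a = force / MASS
        v = max(0.0, v + a * dt)   # the lift stalls rather than reversing, for simplicity
        x += v * dt
        t += dt
    return t, x

t, x = simulate_lift()
print(f"bar reached {x:.2f} m after {t:.2f} s")
```

Feeding simulated performance like this back into a modified capability profile is the closing step of the training-adaptation loop described above.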

Relevance: 10.00%

Abstract:

The objective of this work is to recognize faces using video sequences both for training and as novel input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern vary widely and face images are of low resolution. There are three major areas of novelty: (i) illumination generalization is achieved by combining coarse histogram correction with fine illumination manifold-based normalization; (ii) pose robustness is achieved by decomposing each appearance manifold into semantic Gaussian pose clusters, comparing the corresponding clusters and fusing the results using an RBF network; (iii) a fully automatic recognition system based on the proposed method is described and extensively evaluated on 600 head motion video sequences with extreme illumination, pose and motion pattern variation. On this challenging data set our system consistently demonstrated a very high recognition rate (95% on average), significantly outperforming state-of-the-art methods from the literature.
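The coarse illumination-correction step in (i) can be illustrated with plain histogram equalization applied to each grayscale frame; the NumPy sketch below is a generic version of that idea, not the authors' exact normalization.

```python
# Hypothetical sketch: coarse per-frame illumination correction via
# histogram equalization on 8-bit grayscale face crops.
import numpy as np

def equalize_histogram(frame):
    """frame: 2-D uint8 array. Returns an equalized uint8 frame."""
    hist = np.bincount(frame.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(hist)[0][0]]        # CDF value at the first occupied bin
    if cdf[-1] == cdf_min:                       # flat frame: nothing to equalize
        return frame
    # Remap intensities so the cumulative distribution becomes roughly uniform.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[frame]

def normalize_sequence(frames):
    """Apply the coarse correction to every frame of a head-motion sequence."""
    return [equalize_histogram(f) for f in frames]
```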

Relevance: 10.00%

Abstract:

Neural networks (NNs) are a popular artificial intelligence technique for solving complicated problems due to their inherent capabilities. However, generalization in NNs can be harmed by a number of factors, including parameter initialization, inappropriate network topology and the settings of the training process itself. Forecast combinations of NN models have the potential for improved generalization and lower training time. This paper applies a weighted averaging based on the variance-covariance method, which assigns greater weight to the forecasts producing lower error, instead of equal weights. While implementing the method, forecasts are combined with all candidate models in one experiment and with the best selected models in another. The empirical analysis shows that forecasting accuracy is improved by combining the best individual NN models. Another finding of this study is that reducing the number of NN models increases diversity and, hence, accuracy.
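The variance-covariance weighting mentioned above can be sketched as follows: each model's weight is proportional to the inverse of its validation error variance, so lower-error models contribute more. This is a generic illustration of that weighting scheme, not the paper's exact implementation.

```python
# Hypothetical sketch: combine NN forecasts with weights inversely
# proportional to each model's validation error variance.
import numpy as np

def variance_covariance_weights(errors):
    """errors: (n_models, n_val) array of validation errors per model."""
    inv_var = 1.0 / np.var(errors, axis=1)
    return inv_var / inv_var.sum()          # weights sum to 1

def combine_forecasts(forecasts, weights):
    """forecasts: (n_models, horizon) array of point forecasts."""
    return weights @ forecasts

# Example: three models; the lowest-variance model receives the largest weight.
errors = np.array([[0.1, -0.2, 0.1], [0.5, -0.6, 0.4], [0.2, 0.3, -0.1]])
print(variance_covariance_weights(errors))
```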

Relevance: 10.00%

Abstract:

The methodology of grounded theory has great potential to contribute to our understanding of leadership within particular substantive contexts. However, our notions of good science might constrain these contributions. We argue that for grounded theorists a tension might exist between the desire to create a contextualised theory of leadership and the desire to satisfy scientific expectations of validity and generalizable theory. We also explore how the outcome of grounded theory research can create a dissonance between theories that resonate with the reality they are designed to explore, and theories that resonate with a particular yet dominant 'scientific' approach in the field of leadership studies: the philosophy of science commonly known as positivism. We examine the opportunities provided by an alternative philosophy of science, that of critical realism. We explore how conducting grounded theory research informed by critical realism might strengthen researchers' confidence to place emphasis on the understanding and explanation of contextualised leadership as a scientific goal, rather than the scientific goal of generalization through empirical replication. Two published accounts of grounded theory are critiqued candidly to help emphasise our arguments. We conclude by suggesting how critical realism can help shape and enhance grounded theory research into the phenomenon of leadership. © 2010.

Relevance: 10.00%

Abstract:

Type reduction (TR) is one of the key components of interval type-2 fuzzy logic systems (IT2FLSs). Minimizing the computational requirements has been one of the key design criteria for developing TR algorithms, and researchers often favor computationally less expensive ones. This paper evaluates and compares five frequently used TR algorithms based on their contribution to the forecasting performance of IT2FLS models. The algorithms are judged by the generalization power of the IT2FLS models developed using them. Synthetic and real-world case studies with different levels of uncertainty are considered to examine the effects of TR algorithms on forecast accuracy. According to the obtained results, the Coupland-John TR algorithm leads to models with a higher and more stable forecasting performance. However, there is no obvious and consistent relationship between the width of the type-reduced set and the TR algorithm. © 2013 Elsevier B.V.
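In contrast to iterative procedures such as the Karnik-Mendel routine sketched earlier, some TR algorithms are closed-form. The snippet below shows a Nie-Tan style reduction, which collapses each rule's firing interval to its midpoint and produces a crisp output directly; it is given only as a hypothetical illustration and may or may not be among the five algorithms evaluated here.

```python
# Hypothetical sketch: Nie-Tan style type reduction, replacing each rule's
# firing interval by its midpoint and computing a crisp output.
import numpy as np

def nie_tan_output(y, f_lower, f_upper):
    """y: consequent centroids; f_lower/f_upper: rule firing interval endpoints."""
    f_mid = (np.asarray(f_lower, dtype=float) + np.asarray(f_upper, dtype=float)) / 2.0
    return float(f_mid @ np.asarray(y, dtype=float) / f_mid.sum())

# Same two-rule example as the Karnik-Mendel sketch above; the crisp output
# falls inside the KM interval [y_l, y_r].
print(nie_tan_output(y=[0.2, 0.8], f_lower=[0.3, 0.5], f_upper=[0.6, 0.9]))
```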

Relevance: 10.00%

Abstract:

Texture classification is one of the most important tasks in the computer vision field and has been extensively investigated over the last several decades. Previous texture classification methods have mainly used template-matching-based classifiers such as the Support Vector Machine and k-Nearest-Neighbour. Given enough training images, state-of-the-art texture classification methods can achieve very high classification accuracies on some benchmark databases. However, when the number of training images is limited, which usually happens in real-world applications because of the high cost of obtaining labelled data, the classification accuracies of those state-of-the-art methods deteriorate due to overfitting. In this paper we aim to develop a novel framework that can correctly classify textural images with only a small number of training images. By taking into account the repetition and sparsity properties of textures, we propose a sparse-representation-based multi-manifold analysis framework for texture classification from few training images. A set of new training samples is generated from each training image by a scale and spatial pyramid, and the training samples belonging to each class are then modelled by a manifold based on sparse representation. We learn a dictionary of sparse representation and a projection matrix for each class and classify the test images based on the projected reconstruction errors. The framework provides a more compact model than template-matching-based texture classification methods and mitigates the overfitting effect. Experimental results show that the proposed method achieves reasonably high generalization capability even with as few as 3 training images, and significantly outperforms state-of-the-art texture classification approaches on three benchmark datasets. © 2014 Elsevier B.V. All rights reserved.
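The classify-by-reconstruction-error idea can be illustrated with a standard sparse-representation classifier: sparse-code each test sample against a per-class dictionary and pick the class with the smallest residual. The sketch below uses scikit-learn's OrthogonalMatchingPursuit and is a generic SRC-style illustration, not the paper's multi-manifold method; it omits the scale/spatial pyramid and the learned projection matrix, and the feature vectors are assumed to be precomputed.

```python
# Hypothetical sketch: per-class sparse coding with OMP, classification by
# minimum reconstruction error.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def build_dictionaries(train_features, train_labels):
    """Stack each class's l2-normalized training feature vectors as dictionary atoms."""
    dictionaries = {}
    for label in np.unique(train_labels):
        atoms = train_features[train_labels == label].T        # shape (n_features, n_atoms)
        dictionaries[label] = atoms / np.linalg.norm(atoms, axis=0, keepdims=True)
    return dictionaries

def classify(test_feature, dictionaries, n_nonzero_coefs=3):
    """Return the class whose dictionary reconstructs the sample with the smallest error."""
    errors = {}
    for label, D in dictionaries.items():
        omp = OrthogonalMatchingPursuit(
            n_nonzero_coefs=min(n_nonzero_coefs, D.shape[1]),
            fit_intercept=False,                               # pure sparse code, no offset
        )
        omp.fit(D, test_feature)                               # code the sample over the atoms
        reconstruction = D @ omp.coef_
        errors[label] = np.linalg.norm(test_feature - reconstruction)
    return min(errors, key=errors.get)
```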

Relevance: 10.00%

Abstract:

The Adaptive Multiple-hyperplane Machine (AMM) was recently proposed to deal with large-scale datasets. However, it offers no principled way to tune the complexity and sparsity levels of the solution. Addressing sparsity is important for improving learning generalization, prediction accuracy and computational speedup. In this paper, we employ the max-margin principle and a sparse approach to propose a new Sparse AMM (SAMM). We solve the new optimization objective function with stochastic gradient descent (SGD). Besides inheriting the good features of SGD-based learning methods and the original AMM, the proposed Sparse AMM provides the machinery and flexibility to tune the complexity and sparsity of the solution, making it possible to avoid overfitting and underfitting. We validate our approach on several large benchmark datasets and show that, with the ability to control sparsity, the proposed Sparse AMM yields superior classification accuracy to the original AMM while simultaneously achieving a computational speedup.
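As a rough illustration of the two ingredients SAMM combines, a max-margin (hinge) loss optimized by SGD and a sparsity-inducing penalty, the snippet below trains a plain linear model with scikit-learn's SGDClassifier using hinge loss and an L1 penalty on synthetic data; this is a generic stand-in, not the multi-hyperplane SAMM model itself.

```python
# Hypothetical sketch: hinge loss (max-margin) + L1 penalty (sparsity),
# both optimized with SGD, on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=200, n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SGDClassifier(loss="hinge", penalty="l1", alpha=1e-4, random_state=0)
clf.fit(X_train, y_train)

sparsity = np.mean(clf.coef_ == 0.0)        # fraction of exactly-zero weights
print(f"test accuracy: {clf.score(X_test, y_test):.3f}, weight sparsity: {sparsity:.2%}")
```

Increasing alpha drives more weights exactly to zero, trading accuracy for sparsity; controlling that trade-off in the multi-hyperplane setting is what the abstract attributes to SAMM.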