927 results for Sum rules


Relevance:

20.00%

Publisher:

Abstract:

Suppose that we are interested in establishing simple but reliable rules for predicting future t-year survivors via censored regression models. In this article, we present inference procedures for evaluating such binary classification rules based on various prediction precision measures quantified by the overall misclassification rate, sensitivity and specificity, and positive and negative predictive values. Specifically, under various working models we derive consistent estimators for the above measures via substitution and cross-validation estimation procedures. Furthermore, we provide large-sample approximations to the distributions of these nonsmooth estimators without assuming that the working model is correctly specified. Confidence intervals, for example for the difference in precision measures between two competing rules, can then be constructed. All the proposals are illustrated with two real examples, and their finite-sample properties are evaluated via a simulation study.
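
As a concrete illustration of the precision measures named above, here is a minimal, hypothetical Python sketch (not taken from the article) that computes them for a binary rule predicting t-year survival. It assumes fully observed outcomes and therefore ignores censoring, which the article's substitution and cross-validation procedures are specifically designed to handle.

import numpy as np

def precision_measures(y_true, y_pred):
    """y_true: 1 if the subject actually survived beyond t years, else 0.
    y_pred: 1 if the rule predicts survival beyond t years, else 0."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_pred & y_true)    # predicted survivor, actual survivor
    fp = np.sum(y_pred & ~y_true)   # predicted survivor, actual non-survivor
    fn = np.sum(~y_pred & y_true)   # predicted non-survivor, actual survivor
    tn = np.sum(~y_pred & ~y_true)  # predicted non-survivor, actual non-survivor
    return {
        "misclassification_rate": (fp + fn) / y_true.size,
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Example: compare two competing rules on the same subjects.
y      = [1, 1, 0, 0, 1, 0, 1, 0]
rule_a = [1, 1, 0, 1, 1, 0, 0, 0]
rule_b = [1, 0, 0, 0, 1, 1, 1, 0]
print(precision_measures(y, rule_a))
print(precision_measures(y, rule_b))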

Relevance:

20.00%

Publisher:

Abstract:

http://digitalcommons.mtu.edu/copper_range/1000/thumbnail.jpg

Relevance:

20.00%

Publisher:

Abstract:

Training a system to recognize handwritten words is a task that requires a large amount of data with their correct transcription. However, the creation of such a training set, including the generation of the ground truth, is tedious and costly. One way of reducing the high cost of labeled training data acquisition is to exploit unlabeled data, which can be gathered easily. Making use of both labeled and unlabeled data is known as semi-supervised learning. One of the most general versions of semi-supervised learning is self-training, where a recognizer iteratively retrains itself on its own output on new, unlabeled data. In this paper we propose to apply semi-supervised learning, and in particular self-training, to the problem of cursive handwritten word recognition. The special focus of the paper is on retraining rules that define what data are actually being used in the retraining phase. In a series of experiments it is shown that the performance of a neural network-based recognizer can be significantly improved through the use of unlabeled data and self-training if appropriate retraining rules are applied.
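
To make the self-training procedure concrete, below is a minimal, hypothetical sketch of the retraining loop with one possible retraining rule, a confidence threshold on the recognizer's own output (the paper studies several such rules; the threshold value here is illustrative, not taken from the paper). The classifier is assumed to expose the scikit-learn fit/predict_proba interface.

import numpy as np

def self_train(clf, X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=10):
    """Iteratively retrain clf on its own confident predictions."""
    X_train, y_train = np.array(X_lab), np.array(y_lab)
    pool = np.array(X_unlab)
    for _ in range(max_rounds):
        clf.fit(X_train, y_train)
        if pool.shape[0] == 0:
            break
        proba = clf.predict_proba(pool)
        conf = proba.max(axis=1)                     # recognizer's confidence per sample
        labels = clf.classes_[proba.argmax(axis=1)]  # recognizer's own transcription
        keep = conf >= threshold                     # retraining rule: keep confident samples only
        if not keep.any():
            break                                    # nothing passes the rule; stop retraining
        X_train = np.vstack([X_train, pool[keep]])
        y_train = np.concatenate([y_train, labels[keep]])
        pool = pool[~keep]                           # shrink the unlabeled pool
    return clf

Stricter rules add fewer but more reliable pseudo-labeled samples; looser rules add more data at the risk of reinforcing the recognizer's own mistakes.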

Relevance:

20.00%

Publisher:

Abstract:

The Logic of Proofs LP, introduced by Artemov, encodes the same reasoning as the modal logic S4 using proofs explicitly present in the language. In particular, Artemov showed that three operations on proofs (application ·, positive introspection !, and sum +) are sufficient to mimic provability concealed in S4 modality. While the first two operations go back to Gödel, the exact role of + remained somewhat unclear. In particular, it was not known whether the other two operations are sufficient by themselves. We provide a positive answer to this question under a very weak restriction on the axiomatization of LP.
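
For reference, the axiom schemes governing the three proof operations, as they are usually presented in the literature on the Logic of Proofs (the article's exact axiomatization may differ), are:

% Standard LP axiom schemes (common textbook formulation, not quoted from the article); requires amsmath.
\begin{align*}
  &s\colon(A \to B) \to (t\colon A \to (s \cdot t)\colon B)            &&\text{(application)}\\
  &t\colon A \to \mathord{!}t\colon(t\colon A)                         &&\text{(positive introspection)}\\
  &s\colon A \to (s + t)\colon A, \qquad t\colon A \to (s + t)\colon A &&\text{(sum)}
\end{align*}

Sum thus merges two proof terms so that s + t verifies every formula verified by s or by t; the question addressed above is whether application and positive introspection alone already suffice to mimic the S4 modality.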