984 results for Confidence


Relevance: 20.00%

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images. Classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability. Earlier frameworks are lacking in this regard. The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with. This allows the separation of the essential from the conventional. To determine if the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented.
For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs. We also ask if accuracy versus effort trade-offs can be controlled after training. For another example, regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner. We then ask if problem-specific organization is necessary.
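The tree-like, per-input early-exit process the abstract describes can be sketched as a confidence-thresholded cascade. This is a minimal illustration under invented stage models and an invented threshold, not the authors' framework:

```python
def cascade_classify(x, stages, threshold=0.9):
    """Run cheap classifier stages first and stop as soon as one is
    confident enough; otherwise delegate the input to the next stage.

    stages: functions mapping an input to (label, confidence), ordered
            from cheapest to most expensive.
    threshold: minimum confidence needed to stop early. Raising it buys
               accuracy at the cost of more computation per input.
    Returns (label, number_of_stages_evaluated).
    """
    effort = 0
    for stage in stages:
        effort += 1
        label, confidence = stage(x)
        if confidence >= threshold:
            break  # early exit: remaining stages are skipped for this input
    return label, effort

# Toy stages: a cheap rule and a costlier, more discriminative one.
cheap = lambda x: ("object", 0.95 if x > 10 else 0.5)
expensive = lambda x: ("object" if x > 5 else "background", 0.99)

print(cascade_classify(12, [cheap, expensive]))  # ('object', 1)
print(cascade_classify(3, [cheap, expensive]))   # ('background', 2)
```

Sweeping `threshold` traces out an accuracy-versus-effort curve without retraining the stages, which is the kind of post-training trade-off control the abstract asks about.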

Relevance: 20.00%

Abstract:

The questions one should answer in engineering computations (deterministic, probabilistic/randomized, or heuristic) are (i) how good the computed results/outputs are and (ii) how much they cost in terms of the amount of computation and storage used to obtain them. The absolutely error-free quantities, and the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their real-valued inputs, are exact, the computations we perform on a digital computer, or in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it; as a matter of hypothesis, and not as a matter of assumption, this error is not less than 0.005 per cent. By error we mean here relative error bounds: since the exact error is never known under any circumstances or in any context, the term error can only ever denote an error bound. Further, in engineering computations it is the relative error, or equivalently the relative error bound (and not the absolute error), that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent; in our modelling of natural problems, however, we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors.
We do, however, go ahead and solve such inconsistent/near-consistent problems, and we obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error bounds together with the associated confidence level, and the cost, namely the amounts of computation and storage, through complexity. It points out the limitations of error-free computation (wherever it is possible at all, i.e., where the number of arithmetic operations is finite and known a priori) as well as of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
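The distinction between the exact error (unknowable) and a relative error bound can be made concrete with a small sketch. The reading and the 0.005 per cent figure follow the talk's hypothesis; the function itself is illustrative, not from the talk:

```python
def relative_error_bound(approx, lower, upper):
    """Relative error bound of `approx` when the exact value is only known
    to lie in [lower, upper] (the exact error itself is never known)."""
    worst = max(abs(approx - lower), abs(approx - upper))
    scale = min(abs(lower), abs(upper))  # pessimistic magnitude of the exact value
    return worst / scale

# An instrument reading of 100.0 with the hypothesised minimum instrument
# error of 0.005 per cent puts the exact value in [99.995, 100.005]:
reading = 100.0
bound = relative_error_bound(reading, 99.995, 100.005)
print(f"{bound:.6%}")  # about 0.005%
```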

Relevance: 20.00%

Abstract:

This paper shows the extraordinary capacity of yield spreads to anticipate consumption growth, as proxied by the Economic Sentiment Indicator compiled by the European Commission to predict turning points in business cycles. This new evidence complements the well-known results on the usefulness of the slope of the term structure of interest rates for predicting real economic conditions, and recessions in particular, by using a direct measure of expectations. A linear combination of European yield spreads explains a surprising 93.7% of the variability of the Economic Sentiment Indicator. Yield spreads thus seem to be a key determinant of consumer confidence in Europe.
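The "linear combination of yield spreads" is, in effect, a least-squares regression of the sentiment indicator on the spreads. A minimal sketch on synthetic data (the series, coefficients, and spread choices are invented, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly data: three hypothetical yield spreads and a sentiment
# indicator generated as a noisy linear combination of them.
n = 120
spreads = rng.normal(size=(n, 3))          # e.g. 10y-3m, 10y-2y, 5y-1y
true_beta = np.array([1.5, -0.8, 0.4])
sentiment = spreads @ true_beta + 0.1 * rng.normal(size=n)

# Fit sentiment on the spreads (with an intercept) by ordinary least squares.
X = np.column_stack([np.ones(n), spreads])
beta, *_ = np.linalg.lstsq(X, sentiment, rcond=None)

# R^2: share of the indicator's variability explained by the spreads.
resid = sentiment - X @ beta
r2 = 1 - resid.var() / sentiment.var()
print(f"R^2 = {r2:.3f}")
```

For this low-noise toy series the R^2 is close to 1; the paper's reported 93.7% plays the same role for the actual European data.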

Relevance: 20.00%

Abstract:

World Conference on Psychology and Sociology 2012

Relevance: 20.00%

Abstract:

Innovation is a critical factor in ensuring commercial success within the area of medical technology. Biotechnology and healthcare developments require huge investments of finance and resources, in-depth research, and clinical trials. Consequently, these developments involve a complex multidisciplinary structure, which is inherently full of risk and uncertainty. In this context, early technology assessment and 'proof of concept' are often sporadic and unstructured. Existing methodologies for managing the feasibility stage of medical device development are predominantly suited to the later phases of development and favour detail in optimisation, validation and regulatory approval. During these early phases, feasibility studies are normally conducted to establish whether a technology is potentially viable. However, it is not clear how this technology viability is currently measured. This paper aims to redress this gap through the development of a technology confidence scale, appropriate specifically to the feasibility phase of medical device design. The scale was developed from analysis of three recent innovation studies within the medical device industry.

Relevance: 20.00%

Abstract:

Obtaining accurate confidence measures for automatic speech recognition (ASR) transcriptions is an important task which stands to benefit from the use of multiple information sources. This paper investigates the application of conditional random field (CRF) models as a principled technique for combining multiple features from such sources. A novel method for combining suitably defined features is presented, allowing for confidence annotation using lattice-based features of hypotheses other than the lattice 1-best. The resulting framework is applied to different stages of a state-of-the-art large vocabulary speech recognition pipeline, and consistent improvements are shown over a sophisticated baseline system. Copyright © 2011 ISCA.
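A linear-chain CRF combines per-word features into per-word confidences through forward-backward marginals. The following sketch computes a "correct" probability for each hypothesised word over binary states; the features, weights, and transition scores are invented for illustration and are not the paper's trained model or feature set:

```python
import numpy as np

def crf_word_confidences(features, w, T):
    """Per-word confidence from a linear-chain CRF over binary states
    (0 = incorrect, 1 = correct) via the forward-backward algorithm.

    features: (n_words, n_feats) predictors per hypothesised word
              (e.g. lattice posterior, LM score).
    w: (2, n_feats) state weights; T: (2, 2) transition scores.
    Returns the marginal probability that each word is correct.
    """
    n = features.shape[0]
    emit = features @ w.T                       # (n, 2) unary scores
    # Forward pass in log space.
    alpha = np.zeros((n, 2))
    alpha[0] = emit[0]
    for t in range(1, n):
        alpha[t] = emit[t] + np.logaddexp(alpha[t - 1, 0] + T[0],
                                          alpha[t - 1, 1] + T[1])
    # Backward pass in log space.
    beta = np.zeros((n, 2))
    for t in range(n - 2, -1, -1):
        beta[t] = np.logaddexp(T[:, 0] + emit[t + 1, 0] + beta[t + 1, 0],
                               T[:, 1] + emit[t + 1, 1] + beta[t + 1, 1])
    logZ = np.logaddexp(alpha[-1, 0], alpha[-1, 1])
    marginals = np.exp(alpha + beta - logZ)     # (n, 2) state marginals
    return marginals[:, 1]                      # P(correct) per word

# Toy 3-word hypothesis with two features per word.
feats = np.array([[2.0, 0.5], [0.2, -1.0], [1.5, 0.0]])
w = np.array([[-1.0, 0.0],    # state 0 (incorrect)
              [ 1.0, 0.5]])   # state 1 (correct)
T = np.log(np.array([[0.7, 0.3], [0.3, 0.7]]))  # sticky transitions
print(crf_word_confidences(feats, w, T))
```

The sticky transition matrix lets a word's neighbours influence its confidence, which is one way a sequence model can go beyond scoring each word in isolation.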

Relevance: 20.00%

Abstract:

Psychological factors play a major role in exacerbating chronic pain. Effective self-management of pain is often hindered by inaccurate beliefs about the nature of pain, which lead to a high degree of emotional reactivity. Probabilistic models of perception state that greater confidence (certainty) in beliefs increases their influence on perception and behavior. In this study, we treat confidence as a metacognitive process dissociable from the content of belief. We hypothesized that confidence is associated with anticipatory activation of areas of the pain matrix involved in top-down modulation of pain. Healthy volunteers rated their beliefs about the emotional distress that experimental pain would cause, and separately rated their level of confidence in this belief. Confidence predicted the influence of anticipation cues on experienced pain. We measured brain activity during anticipation of pain using high-density EEG and used electromagnetic tomography to determine the neural substrates of this effect. Confidence correlated with activity in the right anterior insula, posterior midcingulate, and inferior parietal cortices during the anticipation of pain. Activity in the right anterior insula predicted a greater influence of anticipation cues on pain perception, whereas activity in the right inferior parietal cortex predicted a decreased influence of anticipatory cues. The results support probabilistic models of pain perception and suggest that confidence in beliefs is an important determinant of expectancy effects on pain perception.