919 results for Naive Bayes classifier


Relevance: 10.00%

Abstract:

Research on children's naive concepts has previously tended to focus on the domains of physics and psychology, but more recently attention has turned to conceptual development in biology as a core domain of knowledge. Because of its familiarity, illness has been a popular topic for researchers in this domain, but they have studied mainly children's understanding of its causes; other aspects of illness, such as treatment and prognosis, have received little attention. This research addresses the development of 5- to 9-year-old children's understanding of the causes of illness and their probabilities via open-ended and forced-choice interviews. The results are: 1) Most of the 5- to 7-year-old children used behavioral causes to explain illness, while the 9-year-old children primarily used biological causes; with age, more and more children selected psychological causes. 2) Preschool children did not over-generalize contagion to non-contagious illnesses: they used behavioral and biological causes to explain contagious illnesses, but chose only behavioral causes for non-contagious illnesses. 3) Most of the children used only one kind of cause to explain illness. 4) Some preschool-aged children viewed the outcomes of familiar causes of illness as probabilistic, and with age more and more children could make uncertain predictions about illness. 5) The children's understanding of the causes' probabilities appeared to be based on naive biology; 5- to 9-year-old children often made probabilistic predictions by analyzing a single cause of illness. 6) Children from higher educational backgrounds outperformed their counterparts from lower educational backgrounds in understanding illness. 7) Acquiring specific knowledge generally improved the preschoolers' understanding of the causes of illness and their probabilities.

Relevance: 10.00%

Abstract:

There is a debate in cognitive development theory over whether cognitive development is general or domain specific, and more and more researchers hold that it is domain specific. Researchers have begun to investigate preschoolers' naive theories of basic human knowledge systems, of which naive biology is a core domain, but it is disputed whether preschoolers hold a separate set of naive biological concepts. This research examined preschoolers' development of a naive theory of biology on two levels, "growth" and "aliveness", and also examined individual differences and the factors that produce them. Three studies were designed. Study 1 examined preschoolers' cognition of growth, a basic trait of living things, and whether children can distinguish living from non-living things by this trait and understand its causality. Study 2 investigated preschoolers' distinction between living and non-living things at an integrated level. Study 3 investigated how children use their domain-specific knowledge to make inferences about unfamiliar things. The results showed the following: 1. Preschoolers gradually developed a naive theory of biology at the level of growth, but their naive theory at the integrated level had not yet developed. 2. Preschoolers' naive theory of biology is not "all or none": 4- and 5-year-old children distinguished living from non-living things to some extent, used non-intentional reasons to explain the cause of growth, and gave coherent explanations. But growth had not yet become a criterion for the ontological distinction between living and non-living things for 4- and 5-year-olds, whereas most 6-year-old children could make the distinction; together these findings trace the developing process of biological cognition. 3. Preschoolers' biological inference is influenced by their domain-specific knowledge: whether they can make inferences about a new trait of living things depends on whether they have the relevant specific knowledge. In the deductive task, children used their knowledge to make inferences about unfamiliar things; 4-year-olds used concrete knowledge more often, while 6-year-olds used generalized knowledge more frequently. 4. Preschoolers' knowledge grows with age, but individuals develop at different speeds in different periods. Urban versus rural educational background affects cognitive performance, though the urban-rural difference in the knowledge used to distinguish living from non-living things diminishes over time. The preschoolers were nevertheless at the same developmental stage, because the three age groups gave similar causal explanations in both quantity and quality. 5. There are intra-individual differences in preschoolers' naive biological cognition: they perform differently on different tasks and domains, and their development is sequential, with growth understood earlier than the integrated concept "alive". These intra-individual differences decrease with age.

Relevance: 10.00%

Abstract:

In misconceptions research, previous work focused mainly on the effect of naive concepts on the learning of scientific concepts. In this study, conceptual errors in Newtonian mechanics were compared between high-performance and low-performance students from the viewpoint of declarative and procedural knowledge, and the effects of self-explanation strategies and reflective learning on changing those errors were explored. The experiments indicated: 1. There was a significant difference in the number of conceptual errors of declarative and procedural knowledge between high-performance and low-performance students. Low-performance students made more conceptual errors of procedural knowledge than of declarative knowledge, while for high-performance students there was no distinct difference between the two kinds of errors. 2. In the distribution of conceptual errors, most errors of declarative knowledge concerned the understanding of the concepts of friction and acceleration, while most errors of procedural knowledge concentrated on judging vector direction and on conceptual understanding. 3. Compared with high-performance students, low-performance students' representation of conceptual declarative knowledge is less complex, more concrete, and context bound. 4. A comparative analysis of problem-solving strategies showed that high-performance students preferred an analytic strategy, solving problems from physical concepts and principles, whereas low-performance students preferred a context strategy, solving problems from the literal meaning of the problem statement, subjective and groundless presumptions, and wrong concepts and principles. 5. Self-explanation strategies helped students correct their conceptual errors effectively. Reflective learning helped to some degree, but no distinct effect was observed.

Relevance: 10.00%

Abstract:

Most reinforcement learning methods operate on propositional representations of the world state. Such representations are often intractably large and generalize poorly. Deictic representations are believed to be a viable alternative: they promise generalization while allowing the use of existing reinforcement-learning methods. Yet few experiments on learning with deictic representations have been reported in the literature. In this paper we explore the effectiveness of two forms of deictic representation and a naive propositional representation in a simple blocks-world domain. We find, empirically, that the deictic representations actually worsen performance. We conclude with a discussion of possible causes of these results and strategies for more effective learning in domains with objects.
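To make the contrast concrete, here is a minimal sketch, not code from the paper: two hypothetical encodings of the same blocks-world state, where the propositional one enumerates every on(a, b) relation and the deictic one describes only the block under an attentional focus marker. The specific predicates are illustrative.

```python
# Illustrative sketch: propositional vs. deictic blocks-world encodings.

def propositional_state(on):
    """Full propositional encoding: one on(a, b) proposition per block/place
    pair. `on` maps each block to what it rests on ('table' or a block).
    The representation grows quadratically with the number of blocks."""
    blocks = sorted(on)
    places = blocks + ["table"]
    return tuple((a, b, on[a] == b) for a in blocks for b in places if a != b)

def deictic_state(on, focus):
    """Deictic encoding: only properties relative to the agent's focus marker.
    Its size is constant in the number of blocks, which is the hoped-for
    source of generalization."""
    above = [a for a, b in on.items() if b == focus]
    return ("focus-on-table", on[focus] == "table"), ("focus-clear", not above)

state = {"A": "B", "B": "table", "C": "table"}
print(len(propositional_state(state)))       # 9 propositions: grows as O(n^2)
print(deictic_state(state, focus="A"))       # constant-size description
```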

Relevance: 10.00%

Abstract:

We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, the algorithm classifies each image derivative as being caused by shading or by a change in the surface's reflectance. Generalized Belief Propagation is then used to propagate information from areas where the correct classification is clear to areas where it is ambiguous. We show results on real images.
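As an illustration of the color cue alone, the sketch below (not the paper's implementation) labels horizontal image derivatives as reflectance changes when chromaticity changes across them, on the reasoning that shading scales all three color channels equally. The threshold is arbitrary, and the trained gray-scale classifier and the Generalized Belief Propagation stage are omitted.

```python
# Minimal sketch of the color cue for shading-vs-reflectance labeling.
import numpy as np

def classify_derivatives(img, chroma_thresh=0.02):
    """img: H x W x 3 float array in [0, 1]. Returns one boolean per
    horizontal derivative: True where the change looks like reflectance."""
    intensity = img.sum(axis=2, keepdims=True) + 1e-8
    chroma = img / intensity                        # normalized color
    dchroma = np.abs(np.diff(chroma, axis=1)).sum(axis=2)
    return dchroma > chroma_thresh                  # chromaticity changed

img = np.random.rand(4, 5, 3)
print(classify_derivatives(img).shape)              # (4, 4), one per derivative
```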

Relevance: 10.00%

Abstract:

Binary image classification is a problem that has received much attention in recent years. In this paper we evaluate a selection of popular techniques in an effort to find a feature set/classifier combination which generalizes well to full-resolution image data. We then apply that system to images at one-half through one-sixteenth resolution and consider the corresponding error rates. In addition, we observe how generalization performance depends on the number of training images, and lastly compare the system's best error rates to those of a human performing an identical classification task on the same set of test images.
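The resolution sweep can be pictured with a toy pipeline. The sketch below is illustrative only: synthetic images stand in for the paper's data, average pooling stands in for its downsampling, and a logistic regression stands in for the evaluated feature set/classifier combinations.

```python
# Illustrative sketch: measure test error as images are downsampled from full
# to one-sixteenth resolution. Data and classifier are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def downsample(images, factor):
    """Average-pool square images by an integer factor."""
    n, h, w = images.shape
    return images.reshape(n, h // factor, factor, w // factor, factor).mean(axis=(2, 4))

rng = np.random.default_rng(0)
X = rng.random((200, 32, 32))
# Synthetic labels: is the top half brighter than the bottom half?
y = (X[:, :16].mean(axis=(1, 2)) > X[:, 16:].mean(axis=(1, 2))).astype(int)

for factor in (1, 2, 4, 8, 16):
    Xs = downsample(X, factor).reshape(len(X), -1)
    clf = LogisticRegression(max_iter=1000).fit(Xs[:100], y[:100])
    print(f"1/{factor} resolution: error = {1 - clf.score(Xs[100:], y[100:]):.3f}")
```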

Relevance: 10.00%

Abstract:

We introduce and explore an approach to estimating the statistical significance of classification accuracy, which is particularly useful in scientific applications of machine learning where the high dimensionality of the data and the small number of training examples render most standard convergence bounds too loose to yield a meaningful guarantee of the classifier's generalization ability. Instead, we estimate the statistical significance of the observed classification accuracy, i.e., the likelihood of observing such accuracy by chance due to spurious correlations of the high-dimensional data patterns with the class labels in the given training set. We adopt permutation testing, a non-parametric technique previously developed in classical statistics for hypothesis testing in the generative setting (i.e., comparing two probability distributions). We demonstrate the method on real examples from neuroimaging studies and DNA microarray analysis, and suggest a theoretical analysis of the procedure that relates the asymptotic behavior of the test to the existing convergence bounds.
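The core procedure is compact enough to sketch. In the code below, `fit_and_score` is a hypothetical helper that retrains the classifier and returns its held-out accuracy; shuffling the labels destroys any genuine association with the data, so the shuffled accuracies sample the null distribution of accuracy obtained by chance.

```python
# Minimal permutation-test sketch for classification accuracy.
import numpy as np

def permutation_pvalue(X, y, fit_and_score, n_permutations=1000, seed=0):
    """fit_and_score(X, y) is assumed to retrain the classifier and return
    held-out accuracy; it is a placeholder, not a real library call."""
    rng = np.random.default_rng(seed)
    observed = fit_and_score(X, y)
    null = [fit_and_score(X, rng.permutation(y)) for _ in range(n_permutations)]
    # Fraction of label shuffles whose accuracy matches or beats the observed
    # one; the +1 terms give the standard unbiased permutation p-value.
    return (1 + sum(a >= observed for a in null)) / (1 + n_permutations)
```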

Relevance: 10.00%

Abstract:

We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces both the computational and the sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. The features selected jointly tend to be edges and generic features typical of many natural structures, rather than specific object parts; such generic features generalize better and considerably reduce the computational cost of multi-class object detection.
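A heavily simplified sketch of the sharing idea follows; it is not the paper's algorithm. One boosting round picks the decision stump and class subset with the lowest total weighted error, where classes outside the subset fall back to their best constant prediction. The exhaustive subset search shown here is purely illustrative: because subsets grow exponentially, a practical implementation would use a greedy search over subsets.

```python
# Illustrative sketch of one round of feature sharing across classes.
from itertools import combinations
import numpy as np

def best_shared_stump(X, Y, W, thresholds):
    """X: (n, d) features; Y: (n, c) labels in {-1, +1}; W: (n, c) boosting
    weights. Classes inside the chosen subset share the stump; classes
    outside it fall back to their best constant (+1 or -1) prediction."""
    c = Y.shape[1]
    const_err = np.minimum((W * (Y == -1)).sum(axis=0),   # error if always +1
                           (W * (Y == +1)).sum(axis=0))   # error if always -1
    best_score, best = np.inf, None
    for f in range(X.shape[1]):
        for t in thresholds:
            pred = np.where(X[:, f] > t, 1, -1)           # shared stump output
            err = ((pred[:, None] != Y) * W).sum(axis=0)  # per-class error
            for k in range(1, c + 1):
                for subset in combinations(range(c), k):
                    inside = list(subset)
                    outside = [j for j in range(c) if j not in subset]
                    score = err[inside].sum() + const_err[outside].sum()
                    if score < best_score:
                        best_score, best = score, (f, t, subset)
    return best   # (feature, threshold, classes that share it)
```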

Relevance: 10.00%

Abstract:

JCR impact factor 3.050 (2013), Q2, ranked 44/125 in Cardiac & Cardiovascular Systems.

Relevance: 10.00%

Abstract:

In the first part of this paper we reviewed the fingerprint classification literature from two different perspectives: feature extraction and classifier learning. Aiming to answer the question of which of the reviewed methods would perform better in a real implementation, we ended up in a discussion that showed the difficulty of answering it: no previous comparison exists in the literature, and comparisons among papers are made with different experimental frameworks. Moreover, published methods are difficult to implement because of the lack of detail in their descriptions and parameters, and because no source code is shared. For this reason, in this paper we carry out a deep experimental study following the proposed double perspective. We have carefully implemented some of the most relevant feature extraction methods according to the explanations found in the corresponding papers, and we have tested their performance with different classifiers, including the specific proposals made by the original authors. Our aim is to develop an objective experimental study in a common framework, which has not been done before and which can serve as a baseline for future work on the topic. In this way we test not only the quality of the methods but also their reusability by other researchers, and we can indicate which proposals should be considered for future developments. Furthermore, we show that combining different feature extraction models in an ensemble can lead to superior performance, significantly improving on the results obtained by the individual models, as sketched below.
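The ensemble combination can be sketched as follows; the feature extractors are hypothetical placeholders for the implemented fingerprint descriptors, and majority voting is just one possible combination rule.

```python
# Illustrative sketch: one classifier per feature extraction model, combined
# by majority vote. Extractors are placeholders, not the paper's descriptors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_ensemble(images, labels, extractors):
    """Train one base classifier per feature extraction model."""
    members = []
    for extract in extractors:
        feats = np.array([extract(img) for img in images])
        members.append((extract, KNeighborsClassifier().fit(feats, labels)))
    return members

def predict_ensemble(members, img):
    """Majority vote over the base classifiers' predictions."""
    votes = [clf.predict([extract(img)])[0] for extract, clf in members]
    return max(set(votes), key=votes.count)
```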

Relevance: 10.00%

Abstract:

The paper consists of a series of suggestions and historical references on the basis of which it becomes possible to think about and practice "spectator pedagogy" in the performing arts. Contemporary performance practices can claim a new kind of political relevance by focusing on the way the spectator's corporeal experience changes during and through the theatrical situation. The naive body produced by a performance is also the most susceptible to thoroughgoing political and ecological change. This is the author's first outline on this topic.

Relevance: 10.00%

Abstract:

BACKGROUND: In the current climate of high-throughput computational biology, the inference of a protein's function from related measurements, such as protein-protein interaction relations, has become a canonical task. Most existing technologies pursue this task as a classification problem, on a term-by-term basis, for each term in a database such as the Gene Ontology (GO) database, a popular rigorous vocabulary for biological functions. However, ontology structures are essentially hierarchies, with certain top-to-bottom annotation rules which protein function predictions should in principle follow. Currently, the most common approach to imposing these hierarchical constraints on network-based classifiers is to apply transitive closure to their predictions. RESULTS: We propose a probabilistic framework to integrate information in relational data, in the form of a protein-protein interaction network, and a hierarchically structured database of terms, in the form of the GO database, for the purpose of protein function prediction. At the heart of our framework is a factorization of local neighborhood information in the protein-protein interaction network across successive ancestral terms in the GO hierarchy. We introduce a classifier within this framework, with a computationally efficient implementation, that produces GO-term predictions that naturally obey a hierarchical 'true-path' consistency from root to leaves, without the need for further post-processing. CONCLUSION: A cross-validation study, using data from the yeast Saccharomyces cerevisiae, shows our method offers substantial improvements over both standard 'guilt-by-association' (i.e., nearest-neighbor) and more refined Markov random field methods, whether in their original form or when post-processed to artificially impose 'true-path' consistency. Further analysis indicates that these improvements are associated with increased predictive capability (i.e., increased positive predictive value), and that the increase is uniform across GO-term depth. Additional in silico validation on a collection of annotations recently added to GO confirms the advantages suggested by the cross-validation study. Taken as a whole, our results show that a hierarchical approach to network-based protein function prediction, one that exploits the ontological structure of protein annotation databases in a principled manner, can offer substantial advantages over the successive application of 'flat' network-based methods.
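The 'true-path' consistency can be seen from the structure of such a factorization. In the sketch below (illustrative, not the paper's model), each term's posterior is the product of a local conditional score and its parent's posterior, so a child can never outscore an ancestor and thresholding at any level respects the hierarchy.

```python
# Illustrative sketch: hierarchy-consistent posteriors by construction.

def hierarchy_posteriors(conditional, parents, root):
    """conditional[t] plays the role of P(t | parent of t active, evidence);
    `parents` maps each child term to its parent. Both are toy inputs."""
    posterior = {root: 1.0}
    stack = [root]
    while stack:
        node = stack.pop()
        for child, parent in parents.items():
            if parent == node:
                # Child posterior is bounded above by the parent's posterior.
                posterior[child] = conditional[child] * posterior[node]
                stack.append(child)
    return posterior

parents = {"binding": "root", "dna_binding": "binding"}
cond = {"binding": 0.8, "dna_binding": 0.9}
print(hierarchy_posteriors(cond, parents, "root"))
# {'root': 1.0, 'binding': 0.8, 'dna_binding': 0.72} -- child <= parent always
```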

Relevance: 10.00%

Abstract:

(This Technical Report revises TR-BUCS-2003-011.) The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. In this paper, we investigate a Bayesian approach for inferring at the source host the cause of a packet loss, whether congestion or wireless transmission error. Our approach is "mostly" end-to-end, since it requires only one long-term average quantity (namely, the long-term average packet loss probability over the wireless segment) that may be best obtained with help from the network (e.g., a wireless access agent). Specifically, we use maximum likelihood ratio tests to evaluate TCP as a classifier of the type of packet loss. We study the effectiveness of short-term classification of packet errors (congestion vs. wireless), given stationary prior error probabilities and distributions of packet delays conditioned on the type of packet loss (measured over a larger time scale). Using our Bayesian approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient online error classifier can be built. We introduce a simple queueing model to explain the conditional delay distributions that arise from the different kinds of packet losses over a heterogeneous wired/wireless path. We show how Hidden Markov Models (HMMs) can be used by a TCP connection to infer conditional delay distributions efficiently, and we demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect classification.
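The classification step can be sketched as a likelihood ratio test. In the code below the conditional delay distributions are assumed Gaussian with made-up parameters (the paper instead estimates such distributions, e.g., with HMMs), and `p_wireless` plays the role of the long-term prior mentioned above.

```python
# Illustrative likelihood-ratio sketch for classifying a packet loss.
from math import exp, pi, sqrt

def gaussian_pdf(x, mean, std):
    return exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * sqrt(2 * pi))

def classify_loss(delay, p_wireless,
                  congestion=(0.30, 0.05), wireless=(0.12, 0.04)):
    """Label a loss given the round-trip delay observed around it and the
    long-term prior p_wireless. The (mean, std) pairs are made-up stand-ins
    for the conditional delay distributions."""
    lr = (gaussian_pdf(delay, *congestion) * (1 - p_wireless)) / \
         (gaussian_pdf(delay, *wireless) * p_wireless)
    return "congestion" if lr > 1 else "wireless"

print(classify_loss(delay=0.28, p_wireless=0.2))  # high delay -> congestion
```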

Relevance: 10.00%

Abstract:

In gesture and sign language video sequences, hand motion tends to be rapid, and hands frequently appear in front of each other or in front of the face. Thus, hand location is often ambiguous, and naive color-based hand tracking is insufficient. To improve tracking accuracy, some methods employ a prediction-update framework, but such methods require careful initialization of model parameters, and tend to drift and lose track in extended sequences. In this paper, a temporal filtering framework for hand tracking is proposed that can initialize and reset itself without human intervention. In each frame, simple features like color and motion residue are exploited to identify multiple candidate hand locations. The temporal filter then uses the Viterbi algorithm to select among the candidates from frame to frame. The resulting tracking system can automatically identify video trajectories of unambiguous hand motion, and detect frames where tracking becomes ambiguous because of occlusions or overlaps. Experiments on video sequences of several hundred frames in duration demonstrate the system's ability to track hands robustly, to detect and handle tracking ambiguities, and to extract the trajectories of unambiguous hand motion.
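The candidate-selection step can be sketched as a standard Viterbi recursion. In the code below, each frame contributes candidate locations with detection scores (the color and motion-residue cues that produce them are not shown), and the weight trading off detection score against motion smoothness is illustrative, not the paper's parameterization.

```python
# Illustrative Viterbi sketch: pick the best trajectory through per-frame
# candidate hand locations, balancing detection score and motion cost.
import math

def viterbi_track(frames, motion_weight=0.05):
    """frames: list of [(x, y, score), ...] candidate lists, one per frame.
    Returns the (x, y) trajectory maximizing total score minus motion cost."""
    best = [(s, [(x, y)]) for x, y, s in frames[0]]
    for candidates in frames[1:]:
        best = [
            max(
                (prev_s + s - motion_weight * math.dist(path[-1], (x, y)),
                 path + [(x, y)])
                for prev_s, path in best
            )
            for x, y, s in candidates
        ]
    return max(best)[1]

frames = [[(10, 10, 0.9), (50, 40, 0.3)],
          [(12, 11, 0.8), (52, 41, 0.7)]]
print(viterbi_track(frames))  # [(10, 10), (12, 11)]: smooth, high-score path
```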

Relevance: 10.00%

Abstract:

Many real-world image analysis problems, such as face recognition and hand pose estimation, involve recognizing a large number of classes of objects or shapes. Large margin methods, such as AdaBoost and Support Vector Machines (SVMs), often provide competitive accuracy rates, but at the cost of evaluating a large number of binary classifiers, making it difficult to apply such methods when thousands or millions of classes need to be recognized. This thesis proposes a filter-and-refine framework whereby, given a test pattern, a small number of candidate classes are identified efficiently at the filter step, and computationally expensive large margin classifiers are used to evaluate these candidates at the refine step. Two different filtering methods are proposed: ClassMap and OVA-VS (One-vs.-All classification using Vector Search). ClassMap is an embedding-based method that works for both boosted classifiers and SVMs and tends to map patterns and their associated classes close to each other in a vector space. OVA-VS maps OVA classifiers and test patterns to vectors based on the weights and outputs of the weak classifiers of the boosting scheme; at runtime, finding the strongest-responding OVA classifier becomes a classical vector search problem, where well-known methods can be used to gain efficiency. In our experiments, the proposed methods achieve significant speed-ups, in some cases up to two orders of magnitude, compared to exhaustive evaluation of all OVA classifiers. This was achieved in hand pose recognition and face recognition systems where the number of classes ranges from 535 to 48,600.
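The framework itself reduces to a few lines. The sketch below is generic rather than a reproduction of ClassMap or OVA-VS: a cheap embedding ranks the classes, and only the top k candidates are passed to the expensive one-vs.-all classifiers. All function names are placeholders.

```python
# Illustrative filter-and-refine sketch for large-multiclass recognition.
import numpy as np

def filter_and_refine(x, class_embeddings, embed, ova_classifiers, k=10):
    """x: test pattern; embed(x): cheap vector embedding of the pattern;
    class_embeddings: (num_classes, dim) array of class vectors;
    ova_classifiers[c](x): margin of class c's expensive classifier."""
    # Filter step: rank all classes by distance in the embedding space.
    d = np.linalg.norm(class_embeddings - embed(x), axis=1)
    candidates = np.argsort(d)[:k]
    # Refine step: run the expensive classifiers on the survivors only.
    scores = {c: ova_classifiers[c](x) for c in candidates}
    return max(scores, key=scores.get)
```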