802 results for Learning with noise
Abstract:
We discuss a formulation for active example selection for function learning problems. This formulation is obtained by adapting Fedorov's optimal experiment design to the learning problem. We specifically show how to analytically derive example selection algorithms for certain well defined function classes. We then explore the behavior and sample complexity of such active learning algorithms. Finally, we view object detection as a special case of function learning and show how our formulation reduces to a useful heuristic to choose examples to reduce the generalization error.
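The selection principle described above, querying the input where the current estimator is least certain, can be sketched for linear regression. This is a simplified, hypothetical instance of Fedorov-style optimal experiment design, not the paper's algorithm: the learner scores each candidate x by the quadratic form x^T (X^T X)^{-1} x, the prediction variance under a least-squares fit, and queries the maximizer.

```python
import numpy as np

def predictive_variance(X, candidates):
    # X: (n, d) labeled inputs; candidates: (m, d) unlabeled pool.
    # Small ridge term keeps the inverse well defined for tiny n.
    A_inv = np.linalg.inv(X.T @ X + 1e-6 * np.eye(X.shape[1]))
    # Quadratic form x^T A_inv x for every candidate row x.
    return np.einsum('md,de,me->m', candidates, A_inv, candidates)

def features(x):
    # Affine features [1, x] for a 1-D function-learning toy problem.
    return np.stack([np.ones_like(x), x], axis=1)

pool = np.linspace(-1.0, 1.0, 201)       # candidate query points
X = features(np.array([0.0, 0.1]))       # two initially labeled points
scores = predictive_variance(X, features(pool))
next_x = pool[np.argmax(scores)]         # most informative next query
print(next_x)                            # an extreme of the interval
```

With both labeled points near the origin, the variance is largest at the edge of the candidate interval, so the active learner queries an extreme point first, which matches the intuition that informative examples lie where the fit is least constrained.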
Abstract:
A CD-ROM containing the original document is included. The work received the fourth mention in modality A at the XII Certamen de Materiales Curriculares of 2004, organized by the Consejería de Educación de la Comunidad de Madrid.
Abstract:
Offers an academic look at the significant impact digital technology has had as a tool for teaching foreign languages, a use that educational policies around the world have recognized over recent decades. Information and communication technologies, from PowerPoint to the Internet, have created new learning opportunities and introduced new elements into the cognitive process of learning these languages.
Abstract:
Examines the ways ICT can be used in the classroom to improve teaching and learning in different contexts and across subjects. The authors explain why the process of integrating ICT is not straightforward; discuss whether hardware and infrastructure are enough to guarantee full integration and a return on ICT investment; highlight the fundamental role teachers play in supporting ICT learning across the curriculum; argue that teachers need a deeper understanding of how to embed ICT in teaching and learning; and consider what kind of professional development is most effective in helping teachers use technology creatively and productively. Case studies illustrate the main issues and develop a set of theoretical ideas that can be used in the classroom.
Abstract:
Abstract taken from the publication.
Abstract:
This research paper reports the findings from an international survey of fieldwork practitioners on their use of technology to enhance fieldwork teaching and learning. It was found that there was high information technology usage before and after time in the field, but some practitioners were also using portable devices such as smartphones and global positioning system (GPS) receivers whilst out in the field. The main pedagogic reasons cited for the use of technology were the need for efficient data processing and to develop students' technological skills. The influencing factors and barriers to the use of technology, as well as the importance of emerging technologies, are discussed.
Abstract:
Biochemical computing is an emerging field of unconventional computing that attempts to process information with biomolecules and biological objects using digital logic. In this work we survey filtering in general and in biochemical computing, and summarize the experimental realization of an AND logic gate with a sigmoid response in one of the inputs. The logic gate is realized with electrode-immobilized glucose-6-phosphate dehydrogenase enzyme that catalyzes a reaction corresponding to the Boolean AND function. A kinetic model is also developed and used to evaluate the extent to which the performance of the experimentally realized logic gate is close to optimal.
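As a purely illustrative sketch (not the paper's fitted enzyme kinetics), a two-input AND gate with a sigmoid response in one input can be modeled with a Hill-type term in input A and a saturating Michaelis-Menten term in input B; the output is high only when both inputs are high. All parameter values here are hypothetical.

```python
def gate_output(a, b, K=0.5, n=4, Km=0.5):
    """Toy biochemical AND gate: sigmoid (Hill) in a, hyperbolic in b."""
    if a == 0.0 and K == 0.0:
        return 0.0
    sigmoid_part = a**n / (K**n + a**n)   # sigmoid response in input A
    mm_part = b / (Km + b)                # Michaelis-Menten response in B
    return sigmoid_part * mm_part

# Boolean interpretation: concentrations near 0 -> logic 0, near 1 -> logic 1.
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(a, b, round(gate_output(a, b), 3))
```

Only the (1, 1) input pair produces a high output; the sigmoid term suppresses intermediate values of input A, which is the filtering behavior the abstract refers to.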
Abstract:
Concept drift is a problem of increasing importance in machine learning and data mining. Data sets under analysis are no longer only static databases, but also data streams in which concepts and data distributions may not be stable over time. However, most learning algorithms produced so far are based on the assumption that data comes from a fixed distribution, so they are not suitable for handling concept drift. Moreover, some concept drift applications require fast response, which means an algorithm must always be (re)trained with the latest available data. But the process of labeling data is usually expensive and/or time consuming compared to unlabeled data acquisition, so only a small fraction of the incoming data may be effectively labeled. Semi-supervised learning methods may help in this scenario, as they use both labeled and unlabeled data in the training process. However, most of them are also based on the assumption that the data is static. Therefore, semi-supervised learning with concept drift is still an open challenge in machine learning. Recently, a particle competition and cooperation approach was used to realize graph-based semi-supervised learning from static data. In this paper, we extend that approach to handle data streams and concept drift. The result is a passive algorithm using a single classifier, which naturally adapts to concept changes without any explicit drift detection mechanism. Its built-in mechanisms provide a natural way of learning from new data, gradually forgetting older knowledge as older labeled data items become less influential on the classification of newer data items. Computer simulations are presented, showing the effectiveness of the proposed method.
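The gradual-forgetting behavior described above can be illustrated with a much simpler stand-in than the particle competition and cooperation model: a 1-nearest-neighbor classifier over a bounded window of recent labeled items, so older knowledge is evicted passively as new data arrives, with no explicit drift detector. This is a hypothetical sketch of the idea, not the proposed algorithm.

```python
from collections import deque

class WindowedNN:
    """1-NN over a fixed-size memory; oldest items are forgotten first."""
    def __init__(self, window=50):
        self.memory = deque(maxlen=window)  # (x, y) pairs, oldest evicted

    def learn(self, x, y):
        self.memory.append((x, y))

    def predict(self, x):
        if not self.memory:
            return None
        nearest = min(self.memory, key=lambda item: abs(item[0] - x))
        return nearest[1]

clf = WindowedNN(window=5)
# Concept 1: positive x -> class 1.
for x in (-2, -1, 1, 2):
    clf.learn(x, int(x > 0))
before_drift = clf.predict(1.5)
# Concept drift: labels flip; the small window forgets the old concept.
for x in (-2, -1, 1, 2, 3):
    clf.learn(x, int(x < 0))
after_drift = clf.predict(1.5)
print(before_drift, after_drift)
```

The same query point is classified as 1 under the first concept and as 0 after the drift, purely because the bounded memory has discarded the older labeled items, mirroring the passive, detection-free adaptation the abstract describes.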
Abstract:
Concept drift, which refers to learning problems that are non-stationary over time, is of increasing importance in machine learning and data mining. Many concept drift applications require fast response, which means an algorithm must always be (re)trained with the latest available data. But the process of data labeling is usually expensive and/or time consuming compared to the acquisition of unlabeled data, so usually only a small fraction of the incoming data may be effectively labeled. Semi-supervised learning methods may help in this scenario, as they use both labeled and unlabeled data in the training process. However, most of them are based on the assumption that the data is static. Therefore, semi-supervised learning with concept drift is still an open and challenging task in machine learning. Recently, a particle competition and cooperation approach was developed to realize graph-based semi-supervised learning from static data. We have extended that approach to handle data streams and concept drift. The result is a passive, single-classifier algorithm that naturally adapts to concept changes without any explicit drift detection mechanism. It has built-in mechanisms that provide a natural way of learning from new data, gradually "forgetting" older knowledge as older data items are no longer useful for the classification of newer data items. The proposed algorithm is applied to the KDD Cup 1999 network intrusion data, showing its effectiveness.
Abstract:
In many application domains data can be naturally represented as graphs. When applying analytical solutions to a given problem is unfeasible, machine learning techniques can be a viable way to solve it. Classical machine learning techniques are defined for data represented in vectorial form. Recently some of them have been extended to deal directly with structured data. Among these techniques, kernel methods have shown promising results from both the computational-complexity and the predictive-performance points of view. Kernel methods avoid an explicit mapping into vectorial form by relying on kernel functions, which, informally, are functions that compute a similarity measure between two entities. However, defining good kernels for graphs is challenging because of the difficulty of finding a good tradeoff between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some source. There are three main contributions in this thesis. The first contribution is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs). We analyzed two kernels from this family, achieving state-of-the-art results from both the computational and the classification point of view on real-world datasets. The second contribution consists in making the application of learning algorithms to streams of graphs feasible. Moreover, we defined a principled way of managing memory. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting, the secondary structure is thought to carry relevant information. However, existing methods that consider the secondary structure have prohibitively high computational complexity. We propose applying kernel methods to this domain, obtaining state-of-the-art results.
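For intuition about what a graph kernel is, here is a toy example, a vertex-label histogram kernel, far simpler than the DAG-based kernels studied in the thesis: the similarity of two labeled graphs is the dot product of their label-count vectors, so the caller never handles an explicit feature vector.

```python
from collections import Counter

def label_histogram_kernel(labels_g1, labels_g2):
    """Kernel value = dot product of the two vertex-label histograms."""
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(h1[label] * h2[label] for label in h1)

g1 = ['C', 'C', 'O', 'H']   # vertex labels of graph 1
g2 = ['C', 'O', 'O', 'H']   # vertex labels of graph 2
print(label_histogram_kernel(g1, g2))  # 2*1 + 1*2 + 1*1 = 5
```

This kernel ignores edges entirely, which is exactly the expressiveness-versus-cost tradeoff the abstract mentions: richer kernels (such as the DAG-based ones) capture structure at higher computational cost.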
Abstract:
Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatiotemporal activity patterns in a robust way. Here we consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback–Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with spike-timing dependent plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation while otherwise depression emerges. Furthermore, the learning rule for synapses that target visible neurons can be matched to the recently proposed voltage-triplet rule. The learning rule for synapses that target hidden neurons is modulated by a global factor, which shares properties with astrocytes and gives rise to testable predictions.
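The sign structure described above, potentiation when a presynaptic spike precedes a postsynaptic one and depression otherwise, matches the classic exponential STDP window. The sketch below is a toy model of that window with hypothetical parameters, not the learning rule derived in the paper.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # pre before post: potentiation
    return -a_minus * math.exp(dt / tau)       # post before (or with) pre: depression

print(stdp_dw(10.0) > 0)   # pre precedes post -> positive weight change
print(stdp_dw(-10.0) < 0)  # reversed order -> negative weight change
```

The exponential decay means closely paired spikes change the synapse most, and the slight asymmetry between `a_plus` and `a_minus` (a common modeling choice, assumed here) keeps uncorrelated activity from growing weights without bound.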
Abstract:
A fundamental capacity of the human brain is to learn relations (contingencies) between environmental stimuli and the consequences of their occurrence. Some contingencies are probabilistic; that is, they predict an event in some situations but not in all. Animal studies suggest that damage to limbic structures or the prefrontal cortex may disturb probabilistic learning. The authors studied the learning of probabilistic contingencies in amnesic patients with limbic lesions, patients with prefrontal cortex damage, and healthy controls. Across 120 trials, participants learned contingent relations between spatial sequences and a button press. Amnesic patients had learning comparable to that of control subjects but failed to indicate what they had learned. Across the last 60 trials, amnesic patients and control subjects learned to avoid a noncontingent choice better than frontal patients. These results indicate that probabilistic learning does not depend on the brain structures supporting declarative memory.