919 results for Analytic Reproducing Kernel
Abstract:
In this paper we study the relevance of multiple kernel learning (MKL) for the automatic selection of time series inputs. Recently, MKL has gained great attention in the machine learning community due to its flexibility in modelling complex patterns and performing feature selection. In general, MKL constructs the kernel as a weighted linear combination of basis kernels, exploiting different sources of information. SimpleMKL, an efficient algorithm that wraps a Support Vector Regression model to optimize the MKL weights, is used for the analysis. In this setting, MKL performs feature selection by discarding inputs/kernels with low or zero weights. The proposed approach is tested on simulated linear and nonlinear time series (autoregressive, Hénon, and Lorenz series).
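As a hedged illustration of the weighted-combination idea only (not the SimpleMKL optimizer itself, which learns the weights inside the SVR training loop), here is a minimal NumPy sketch with hypothetical lag inputs and purely illustrative weights:

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.standard_normal(203)                  # stand-in for an AR/Henon/Lorenz series
X = np.stack([series[k:k + 200] for k in range(3)], axis=1)  # three candidate lag inputs

def rbf_gram(col, gamma=1.0):
    # Gram matrix of one RBF basis kernel built from a single input (one lag).
    d = col[:, None] - col[None, :]
    return np.exp(-gamma * d ** 2)

# One basis kernel per candidate input, combined with weights d_m >= 0, sum(d_m) = 1.
basis = [rbf_gram(X[:, m]) for m in range(X.shape[1])]
d = np.array([0.7, 0.3, 0.0])   # illustrative; SimpleMKL learns these inside an SVR wrapper
K = sum(w * G for w, G in zip(d, basis))

# Feature selection: inputs whose weight is (near) zero are discarded.
selected = [m for m, w in enumerate(d) if w > 1e-6]
print("selected lags:", selected, "| combined Gram shape:", K.shape)
```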
Abstract:
This dissertation explores the support for Internet Protocol version six (IPv6) in the Linux kernel, together with a detailed analysis of the implementation status of the different aspects on which the protocol is based. The study covers experimentation with the general operation of the stack, the identification of its inconsistencies with respect to the relevant RFCs, and the laboratory simulation of scenarios that reproduce use cases for each of the facilities analysed. The aim of this dissertation is not to explain how the new IPv6 protocol works, but rather to focus on the exploration of IPv6 in the Linux kernel. It is not a document for IPv6 novices; nevertheless, it opens with an initial part covering the essentials of the protocol: its evolution up to approval and its specification. Building on this study, IPv6 support in the Linux kernel is explored through a detailed analysis of how the different aspects on which the protocol is based are implemented, together with IPv6 conformance tests against the RFCs.
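As a small, generic sketch of the kind of stack probing involved (an assumption-laden stand-in, not one of the dissertation's laboratory scenarios), the following checks whether the running kernel accepts an IPv6 socket:

```python
import socket

def ipv6_available() -> bool:
    # Probe the host's IPv6 stack by creating (and closing) an AF_INET6 socket.
    if not socket.has_ipv6:          # Python itself built without IPv6 support
        return False
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        s.close()
        return True
    except OSError:                  # kernel built without IPv6, or module not loaded
        return False

print("IPv6 stack available:", ipv6_available())
```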
Abstract:
Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods for processing large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (soil pollution by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are discussed as well.
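As a hedged sketch of how a kernel method can map a continuous field from scattered monitoring stations (synthetic data; the paper's datasets and exact models are not reproduced here), using scikit-learn's RBF-kernel SVR:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical monitoring network: (x, y) station coordinates and measured levels.
rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 10.0, size=(100, 2))
level = np.sin(coords[:, 0]) + 0.1 * coords[:, 1] + 0.05 * rng.standard_normal(100)

# An RBF-kernel SVR interpolates the field from the scattered measurements.
model = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.01).fit(coords, level)

# Predict on a regular grid to produce a continuous map of the variable.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
field = model.predict(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(field.shape)   # (50, 50) grid of interpolated values
```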
Abstract:
For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected L1 error is within a constant factor of the optimal L1 error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem for variable bandwidth kernel estimates, where the bandwidths are allowed to depend upon the location. We show in particular that for positive kernels on the real line, for any data-based bandwidth, there exists a density for which the ratio of expected L1 error over optimal L1 error tends to infinity. Thus, the problem of tuning the variable bandwidth in an optimal manner is "too hard". Moreover, from the class of counterexamples exhibited in the paper, it appears that placing conditions on the densities (monotonicity, convexity, smoothness) does not help.
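For concreteness, a minimal sketch contrasting a fixed-bandwidth Gaussian KDE with a location-dependent ("balloon") variant of the kind studied here; the bandwidth function h(x) is purely illustrative:

```python
import numpy as np

def kde_fixed(x_eval, data, h):
    # Standard Gaussian kernel density estimate with one global bandwidth h.
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def kde_balloon(x_eval, data, h_of_x):
    # Variable-bandwidth ("balloon") estimate: the bandwidth depends on the
    # evaluation point, i.e., the location-dependent setting of the paper.
    h = h_of_x(x_eval)
    u = (x_eval[:, None] - data[None, :]) / h[:, None]
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
data = rng.standard_normal(500)
xs = np.linspace(-4, 4, 200)
f_fixed = kde_fixed(xs, data, h=0.3)
f_var = kde_balloon(xs, data, h_of_x=lambda x: 0.2 + 0.1 * np.abs(x))  # illustrative h(x)
```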
Abstract:
In the fixed design regression model, additional weights are considered for the Nadaraya–Watson and Gasser–Müller kernel estimators. We study their asymptotic behavior and the relationships between the new and classical estimators. For a simple family of weights, and considering the IMSE as global loss criterion, we show some possible theoretical advantages. An empirical study illustrates the performance of the weighted estimators in finite samples.
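A minimal sketch of a Nadaraya–Watson estimate with optional per-observation weights; the specific weight family analyzed in the paper is not given in the abstract, so `w` below is a placeholder:

```python
import numpy as np

def nadaraya_watson(x_eval, x, y, h, w=None):
    # Nadaraya-Watson estimate with optional extra per-observation weights w.
    if w is None:
        w = np.ones_like(y)
    k = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)   # Gaussian kernel
    return (k * (w * y)[None, :]).sum(axis=1) / (k * w[None, :]).sum(axis=1)

# Fixed design: equispaced x, noisy observations of a smooth regression function.
x = np.linspace(0.0, 1.0, 100)
rng = np.random.default_rng(3)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(100)
fit = nadaraya_watson(np.linspace(0, 1, 50), x, y, h=0.05)
```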
Abstract:
Recent research has highlighted the notion that people can make judgments and choices by means of two systems that are labeled here tacit (or intuitive) and deliberate (or analytic). Whereas most decisions typically involve both systems, this chapter examines the conditions under which each system is liable to be more effective. This aims to illuminate the age-old issue of whether and when people should trust intuition or analysis. To do this, a framework is presented to understand how the tacit and deliberate systems work in tandem. Distinctions are also made between the types of information typically used by both systems, as well as the characteristics of environments that facilitate or hinder accurate learning by the tacit system. Next, several experiments that have contrasted intuitive and analytic modes on the same tasks are reviewed. Together, the theoretical framework and experimental evidence lead to specifying the trade-off that characterizes their relative effectiveness. Tacit system responses can be subject to biases. In making deliberate system responses, however, people might not be aware of the correct rule to deal with the task they are facing and/or make errors in executing it. Whether tacit or deliberate responses are more valid in particular circumstances requires assessing this trade-off. In this, the probability of making errors in deliberate thought is postulated to be a function of the analytical complexity of the task as perceived by the person. Thus the trade-off is one of bias (in implicit responses) versus analytical complexity (when tasks are handled in deliberate mode). Finally, it is noted that whereas much attention has been paid in the past to helping people make decisions in deliberate mode, efforts should also be directed toward improving the ability to make decisions in tacit mode, since the effectiveness of decisions clearly depends on both. This therefore represents an important frontier for research.
Abstract:
Let I be an ideal in a local Cohen-Macaulay ring (A, m). Assume I to be generically a complete intersection of positive height. We compute the depth of the Rees algebra and the form ring of I when the analytic deviation of I equals one and its reduction number is also at most one. The formulas we obtain coincide with the already known formulas for almost complete intersection ideals.
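For context, the standard objects mentioned here can be written as follows (textbook definitions, not notation taken from the paper):

```latex
% Rees algebra and form ring (associated graded ring) of an ideal I of (A, m):
\mathcal{R}(I) = \bigoplus_{n \ge 0} I^n t^n \subseteq A[t],
\qquad
\operatorname{gr}_I(A) = \bigoplus_{n \ge 0} I^n / I^{n+1}.
% Analytic deviation: analytic spread minus height,
\operatorname{ad}(I) = \ell(I) - \operatorname{ht}(I),
\qquad
\ell(I) = \dim \bigl( \operatorname{gr}_I(A) \otimes_A A/\mathfrak{m} \bigr).
```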
Abstract:
Variables measured during static and dynamic pupillometry were factor-analyzed. The following factors were obtained regardless of whether the investigations were carried out in normals or in psychiatric patients: a static factor, a dynamic factor, a stimulus-specific factor, and a restitution-dependent factor. Evaluation of reliability in normals demonstrated high reliability for the static variables of pupillometry.
Abstract:
A transformed kernel estimator suitable for heavy-tailed distributions is presented. Using a transformation based on the Beta probability distribution, the choice of the bandwidth parameter is very straightforward. An application to insurance data is presented, and it is shown how to compute the Value at Risk.
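A hedged sketch of the transformed-kernel idea for VaR: map losses to (0, 1) through a parametric CDF, smooth there, and invert the back-transformed estimated CDF. The paper's Beta-based transformation is not specified in the abstract, so a fitted lognormal CDF stands in:

```python
import numpy as np
from scipy import stats

# Hypothetical heavy-tailed insurance losses (stand-in data).
rng = np.random.default_rng(4)
losses = stats.pareto(b=2.5).rvs(1000, random_state=rng)

# Map the data to (0, 1) through a parametric CDF and smooth there.
shape, loc, scale = stats.lognorm.fit(losses, floc=0)
u = stats.lognorm.cdf(losses, shape, loc=loc, scale=scale)
kde = stats.gaussian_kde(u)

def cdf_hat(x):
    # Estimated CDF on the original scale: F(x) = KDE-CDF(T(x)).
    return kde.integrate_box_1d(0.0, stats.lognorm.cdf(x, shape, loc=loc, scale=scale))

# Value at Risk at level alpha: invert the estimated CDF on a grid.
xs = np.linspace(losses.min(), losses.max(), 2000)
F = np.array([cdf_hat(x) for x in xs])
var_95 = np.interp(0.95, F, xs)
print("VaR 95%:", var_95)
```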
Abstract:
In this work we show that in the class of assignment games with dominant diagonal (Solymosi and Raghavan, 2001), the Thompson allocation (which coincides with the tau value) is the unique core point that is maximal with respect to the Lorenz dominance relation, and moreover it coincides with the solution of Dutta and Ray (1989), also known as the egalitarian solution. Secondly, by means of a condition stronger than dominant diagonal, we introduce a new class of assignment games in which each agent obtains with his optimal partner at least twice what he obtains with any other partner. For these assignment games with 2-dominant diagonal, the Thompson allocation is the unique point of the kernel, and therefore the nucleolus.
Abstract:
The complex relationship between structural and functional connectivity, as measured by noninvasive imaging of the human brain, poses many unresolved challenges and open questions. Here, we apply analytic measures of network communication to the structural connectivity of the human brain and explore the capacity of these measures to predict resting-state functional connectivity across three independently acquired datasets. We focus on the layout of shortest paths across the network and on two communication measures, search information and path transitivity, which account for how these paths are embedded in the rest of the network. Search information is an existing measure of the information needed to access or trace shortest paths; we introduce path transitivity to measure the density of local detours along the shortest path. We find that both search information and path transitivity predict the strength of functional connectivity among both connected and unconnected node pairs. They do so at levels that match or significantly exceed those of path length measures, Euclidean distance, and computational models of neural dynamics. This capacity suggests that dynamic couplings due to interactions among neural elements in brain networks are substantially influenced by the broader network context adjacent to the shortest communication pathways.
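A minimal sketch of search information on an unweighted graph, under one common formulation (the probability that an unbiased random walker traces a shortest path, summed over degenerate shortest paths); the paper also works with weighted variants:

```python
import math
import networkx as nx

def search_information(G, s, t):
    # S(s, t) = -log2 of the probability that an unbiased random walker
    # follows a shortest path from s to t, summed over all shortest paths.
    prob = 0.0
    for path in nx.all_shortest_paths(G, s, t):
        p = 1.0
        for node in path[:-1]:          # each step before reaching t
            p *= 1.0 / G.degree(node)
        prob += p
    return -math.log2(prob)

G = nx.karate_club_graph()              # stand-in network
print(search_information(G, 0, 33))    # bits needed to trace the shortest path(s)
```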