926 results for Rough Kernels
Abstract:
AMS subject classification: Primary 34A60, Secondary 49K24.
Abstract:
We propose a family of attributed graph kernels based on mutual information measures, i.e., the Jensen-Tsallis (JT) q-differences (for q ∈ [1,2]) between probability distributions over the graphs. To this end, we first assign a probability to each vertex of the graph through a continuous-time quantum walk (CTQW). We then adopt the tree-index approach [1] to strengthen the original vertex labels, and we show how the CTQW can induce a probability distribution over these strengthened labels. We show that our JT kernel (for q = 1) overcomes the shortcoming of discarding non-isomorphic substructures arising in the R-convolution kernels. Moreover, we prove that the proposed JT kernels generalize the Jensen-Shannon graph kernel [2] (for q = 1) and the classical subtree kernel [3] (for q = 2), respectively. Experimental evaluations demonstrate the effectiveness and efficiency of the JT kernels.
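For intuition, here is a minimal sketch of the q-difference computation at the heart of these kernels, assuming equal mixture weights between two probability distributions (the paper's exact weighting over CTQW-induced distributions on strengthened labels may differ; the function names are illustrative):

    import numpy as np

    def tsallis_entropy(p, q):
        # Tsallis entropy S_q(p); reduces to Shannon entropy as q -> 1.
        p = p[p > 0]
        if np.isclose(q, 1.0):
            return -np.sum(p * np.log(p))
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    def jt_q_difference(p1, p2, q):
        # Jensen-Tsallis q-difference with equal weights; for q = 1 this
        # is exactly the Jensen-Shannon divergence, consistent with the
        # claimed generalization of the Jensen-Shannon graph kernel.
        m = 0.5 * (p1 + p2)
        return tsallis_entropy(m, q) - 0.5 * (tsallis_entropy(p1, q) + tsallis_entropy(p2, q))

A kernel value can then be formed from the q-difference, for instance as a negative exponential, as is common for divergence-based graph kernels.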
Abstract:
Acknowledgements V.B., N.K.G., and E.A. contributed to the conception and experimental design. V.B. performed the experiments. V.B., R.H., A.G., and R.M.M. carried out the analysis and interpretation of the data. V.B., R.H., A.G., and E.A. wrote the manuscript. V.B. and R.H. contributed equally to this work. V.B. acknowledges funding by SPP 1420 of the German Science Foundation (DFG). E.A., N.K.G., and R.H. acknowledge funding from the European Research Council under the European Union/ERC Advanced Grant “Switch2Stick,” Agreement No. 340929.
Abstract:
Concept evaluation in the early phase of product development plays a crucial role in new product development, as it determines the direction of the subsequent design activities. However, the evaluation information at this stage comes mainly from experts' judgments, which are subjective and imprecise. Managing this subjectivity to reduce evaluation bias is a major challenge in design concept evaluation. This paper proposes a comprehensive evaluation method that combines information entropy theory with rough numbers. Rough numbers are first used to aggregate individual judgments and priorities and to handle vagueness in a group decision-making environment. A rough-number-based information entropy method is then proposed to determine the relative weights of the evaluation criteria, and composite performance values based on rough numbers are calculated to rank the candidate design concepts. Results from a practical case study on the concept evaluation of an industrial robot design show that the integrated evaluation model can effectively strengthen objectivity throughout the decision-making process.
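As an illustrative sketch of the two computational ingredients (hypothetical helper names, not the paper's exact formulation): a rough number turns a set of expert ratings into a lower/upper limit interval, and entropy over normalized criterion scores yields objective criterion weights.

    import numpy as np

    def rough_number(ratings, x):
        # Rough boundary interval of rating x: the lower limit is the mean
        # of all ratings <= x, the upper limit the mean of all ratings >= x.
        r = np.asarray(ratings, dtype=float)
        return r[r <= x].mean(), r[r >= x].mean()

    def entropy_weights(scores):
        # Entropy-based objective weights over criteria (columns).
        # scores: alternatives x criteria matrix of positive values.
        p = scores / scores.sum(axis=0)                  # normalize per criterion
        e = -np.sum(p * np.log(p), axis=0) / np.log(scores.shape[0])
        d = 1.0 - e                                      # divergence degree
        return d / d.sum()

    # Example: three experts rate one criterion as 3, 4, 5;
    # rough_number([3, 4, 5], 4) -> (3.5, 4.5), an interval whose width
    # reflects the disagreement (vagueness) among the experts.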
Abstract:
Multi-output Gaussian processes provide a convenient framework for multi-task problems. An illustrative and motivating example of a multi-task problem is multi-region electrophysiological time-series data, where experimentalists are interested in both power and phase coherence between channels. Recently, the spectral mixture (SM) kernel was proposed to model the spectral density of a single task in a Gaussian process framework. This work develops a novel covariance kernel for multiple outputs, called the cross-spectral mixture (CSM) kernel. This new, flexible kernel represents both the power and phase relationship between multiple observation channels. The expressive capabilities of the CSM kernel are demonstrated through implementation of 1) a Bayesian hidden Markov model, where the emission distribution is a multi-output Gaussian process with a CSM covariance kernel, and 2) a Gaussian process factor analysis model, where factor scores represent the utilization of cross-spectral neural circuits. Results are presented for measured multi-region electrophysiological data.
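A minimal sketch of the cross-spectral idea, assuming the common phasor parameterization of such kernels (per-channel amplitudes a and phases phi for each of Q spectral components; all names are illustrative, not the paper's notation): each cross-covariance is a mixture of Gaussian-damped cosines whose phase offset encodes the relative phase between channels.

    import numpy as np

    def csm_cross_covariance(tau, a_i, a_j, phi_i, phi_j, mu, v):
        # Cross-covariance k_ij(tau) between channels i and j: for each
        # spectral component q, an amplitude product times a Gaussian
        # envelope times a cosine at mean frequency mu[q], shifted by the
        # phase difference phi_i[q] - phi_j[q].
        k = 0.0
        for q in range(len(mu)):
            envelope = np.exp(-2.0 * np.pi**2 * tau**2 * v[q])
            phase = 2.0 * np.pi * tau * mu[q] + (phi_i[q] - phi_j[q])
            k += a_i[q] * a_j[q] * envelope * np.cos(phase)
        return k

    # When i == j the phase offset vanishes and the block reduces to an
    # ordinary spectral mixture (SM) kernel for that single channel.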
Abstract:
Field-programmable gate arrays are ideal hosts to custom accelerators for signal, image, and data processing but demand manual register transfer level design if high performance and low cost are desired. High-level synthesis reduces this design burden but requires manual design of complex on-chip and off-chip memory architectures, a major limitation in applications such as video processing. This paper presents an approach to resolve this shortcoming. A constructive process is described that can derive such accelerators, including on- and off-chip memory storage, from a C description such that a user-defined throughput constraint is met. By employing a novel statement-oriented approach, dataflow intermediate models are derived and used to support simple approaches for on-/off-chip buffer partitioning, derivation of custom on-chip memory hierarchies, and architecture transformation to ensure user-defined throughput constraints are met with minimum cost. When applied to accelerators for full search motion estimation, matrix multiplication, Sobel edge detection, and fast Fourier transform, it is shown how real-time performance up to an order of magnitude beyond that of existing commercial HLS tools is enabled whilst including all requisite memory infrastructure. Further, optimizations are presented that reduce the on-chip buffer capacity and physical resource cost by up to 96% and 75%, respectively, whilst maintaining real-time performance.
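The flavor of constraint arithmetic such a flow must perform can be illustrated with a back-of-the-envelope sketch (the numbers and names are hypothetical, not the paper's actual cost model): given a throughput target and the bandwidth of one on-chip memory bank, the required degree of buffer partitioning follows directly.

    import math

    def banks_needed(reads_per_output, outputs_per_cycle, ports_per_bank):
        # Parallel banks needed for a buffer to sustain the target
        # throughput: total reads per cycle divided by the reads one
        # bank can serve per cycle.
        reads_per_cycle = reads_per_output * outputs_per_cycle
        return math.ceil(reads_per_cycle / ports_per_bank)

    # Example: a 3x3 Sobel window needs 9 reads per output pixel; at
    # 1 pixel/cycle with dual-ported memories, ceil(9 / 2) = 5 banks,
    # which is exactly the pressure that derived line buffers and data
    # reuse chains are meant to relieve.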
Abstract:
The spirit behind the creation of the task force is one of good government. It rests on the basic premise that taxpayers demand the best possible service for their tax dollars. Combine this demand for efficiency with Iowa's aging roadway system and a projected increase in the state's vehicle miles traveled, and the need to examine cost savings becomes apparent. Beyond the rationale for good and efficient government, however, is a major concern about potential future reductions in Federal highway funds. Iowa is likely entering a period in which it will need an expanded transportation system while having, at best, a static capacity for maintenance and construction.
Abstract:
Scale not given.
Abstract:
Shows White House and Treasury buildings and grounds in the condition of 1808 or later.
Abstract:
Master's thesis—Universidade de Brasília, Department of Administration, Graduate Program in Administration, 2016.
Abstract:
In this paper, we develop a new family of graph kernels where the graph structure is probed by means of a discrete-time quantum walk. Given a pair of graphs, we let a quantum walk evolve on each graph and compute a density matrix with each walk. With the density matrices for the pair of graphs to hand, the kernel between the graphs is defined as the negative exponential of the quantum Jensen–Shannon divergence between their density matrices. In order to cope with large graph structures, we propose to construct a sparser version of the original graphs using the simplification method introduced in Qiu and Hancock (2007). To this end, we compute the minimum spanning tree over the commute time matrix of a graph. This spanning tree representation minimizes the number of edges of the original graph while preserving most of its structural information. The kernel between two graphs is then computed on their respective minimum spanning trees. We evaluate the performance of the proposed kernels on several standard graph datasets and we demonstrate their effectiveness and efficiency.
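A compact sketch of this pipeline, with two simplifications flagged up front: the graphs are assumed to have the same number of vertices, and the Laplacian-based density matrix rho = L / tr(L) stands in for the DTQW-evolved one, so only the commute-time MST construction and the QJSD kernel shape are illustrated (NumPy/SciPy assumed).

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    def commute_time_matrix(A):
        # CT(u, v) = vol(G) * (L+_uu + L+_vv - 2 L+_uv), with L+ the
        # pseudoinverse of the graph Laplacian.
        L = np.diag(A.sum(axis=1)) - A
        Lp = np.linalg.pinv(L)
        d = np.diag(Lp)
        return A.sum() * (d[:, None] + d[None, :] - 2.0 * Lp)

    def density_matrix(A):
        # Simplified stand-in for the walk-induced density matrix.
        L = np.diag(A.sum(axis=1)) - A
        return L / np.trace(L)

    def vn_entropy(rho):
        # Von Neumann entropy from the eigenvalues, with 0 log 0 = 0.
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return -np.sum(w * np.log(w))

    def qjsd_kernel(A1, A2):
        # k(G1, G2) = exp(-QJSD(rho1, rho2)), computed on the minimum
        # spanning trees of the commute time matrices.
        m1 = minimum_spanning_tree(commute_time_matrix(A1)).toarray()
        m2 = minimum_spanning_tree(commute_time_matrix(A2)).toarray()
        r1 = density_matrix(m1 + m1.T)
        r2 = density_matrix(m2 + m2.T)
        qjsd = vn_entropy(0.5 * (r1 + r2)) - 0.5 * (vn_entropy(r1) + vn_entropy(r2))
        return np.exp(-qjsd)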
Abstract:
In this work we study a Hammerstein generalized integral equation u(t) = ∫_{-∞}^{+∞} k(t,s) f(s, u(s), u′(s), …, u^{(m)}(s)) ds, where the kernel function k: ℝ² → ℝ belongs to W^{m,∞}(ℝ²), m ∈ ℕ, and f: ℝ^{m+2} → ℝ is an L¹-Carathéodory function. To the best of our knowledge, this paper is the first to consider discontinuous nonlinearities with dependence on derivatives, without monotonicity or asymptotic assumptions, on the whole real line. Our method is applied to a fourth-order nonlinear boundary value problem which models moderately large deflections of infinite nonlinear beams resting on elastic foundations under localized external loads.