218 results for Sobolev Embeddings
Abstract:
The aim of this paper is to present a new class of smoothness testing strategies for hp-adaptive refinement based on continuous Sobolev embeddings. In addition to deriving a modified form of the 1d smoothness indicators introduced in [26], we extend and apply them to a higher-dimensional framework. Numerical experiments in the context of the hp-adaptive FEM for a linear elliptic PDE are also presented.
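For orientation, here is a minimal sketch of the embedding inequality on which such 1d tests rest; this illustrates the general idea only, not the specific indicator derived in the paper or in [26]:

```latex
% Illustration only: the embedding inequality underlying such tests, not the
% paper's exact indicator. On an interval I, the continuous embedding
% H^1(I) \hookrightarrow L^\infty(I) gives a computable constant C_I with
\[
  \|u\|_{L^\infty(I)} \;\le\; C_I\, \|u\|_{H^1(I)} .
\]
% An indicator of embedding type monitors, element by element, how sharply
% the discrete solution u_{hp} saturates this bound,
\[
  \eta_K \;=\; \frac{\|u_{hp}\|_{L^\infty(K)}}{C_K\, \|u_{hp}\|_{H^1(K)}}
  \;\in\; (0,1],
\]
% and thresholds \eta_K to decide between h-refinement and p-enrichment on K.
```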
Abstract:
We prove the approximate controllability of the semilinear heat equation in R^N, when the nonlinear term is globally Lipschitz and depends both on the state u and its spatial gradient ∇u. The approximate controllability is viewed as the limit of a sequence of optimal control problems. In order to avoid the difficulties related to the lack of compactness of the Sobolev embeddings, we work with the similarity variables and use weighted Sobolev spaces.
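For readers unfamiliar with the device, here is the classical form of the similarity variables for the heat equation on R^N; the paper's normalization may differ, so this is only a representative sketch:

```latex
% Classical similarity (self-similar) variables for the heat equation on R^N:
\[
  s = \log(1+t), \qquad y = \frac{x}{\sqrt{1+t}}, \qquad
  v(y,s) = (1+t)^{N/2}\, u(x,t).
\]
% In these variables the heat operator gains the confining drift term
% \tfrac{1}{2}\, y \cdot \nabla v, and the natural functional setting is the
% weighted space L^2(K) with Gaussian-type weight
\[
  K(y) = \exp\!\left( \tfrac{|y|^2}{4} \right),
\]
% whose associated weighted Sobolev embeddings are compact; this is what
% restores the compactness that fails for Sobolev embeddings on all of R^N.
```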
Abstract:
In this paper we study the following p(x)-Laplacian problem: -div(a(x)|∇u|^{p(x)-2}∇u) + b(x)|u|^{p(x)-2}u = f(x, u), x ∈ Ω, with u = 0 on ∂Ω, where 1 < p_1 ≤ p(x) ≤ p_2 < n and Ω ⊂ R^n is a bounded domain. Applying the mountain pass theorem, we obtain the existence of solutions in W_0^{1,p(x)} in the superlinear and sublinear cases. © 2004 Elsevier Inc. All rights reserved.
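For readability, the same boundary value problem typeset in LaTeX, with content exactly as stated in the abstract:

```latex
\[
  \begin{cases}
    -\operatorname{div}\!\big(a(x)\,|\nabla u|^{p(x)-2}\,\nabla u\big)
      + b(x)\,|u|^{p(x)-2}\,u = f(x,u), & x \in \Omega,\\[2pt]
    u = 0 & \text{on } \partial\Omega,
  \end{cases}
\]
% with 1 < p_1 \le p(x) \le p_2 < n and \Omega \subset \mathbb{R}^n a bounded
% domain; solutions are sought in the variable-exponent Sobolev space
% W_0^{1,p(x)}(\Omega).
```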
Abstract:
This study investigates the use of unsupervised features derived from word embedding approaches and novel sequence representation approaches for improving clinical information extraction systems. Our results corroborate previous findings that indicate that the use of word embeddings significantly improve the effectiveness of concept extraction models; however, we further determine the influence that the corpora used to generate such features have. We also demonstrate the promise of sequence-based unsupervised features for further improving concept extraction.
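As a rough illustration of how unsupervised embedding features enter a token-level concept extraction pipeline, here is a minimal, self-contained Python sketch; the vector table, dimensions, and feature set are hypothetical stand-ins, not the system described in the study:

```python
# Sketch: augmenting a token-level concept extractor with dense
# word-embedding features alongside simple surface cues.
import numpy as np

# Pre-trained embeddings, e.g. loaded from a corpus-specific model; the
# abstract notes that WHICH corpus these come from matters.
vectors: dict[str, np.ndarray] = {
    "patient": np.random.rand(50),
    "denies":  np.random.rand(50),
    "fever":   np.random.rand(50),
}
DIM = 50

def token_features(tokens: list[str], i: int) -> np.ndarray:
    """Concatenate surface cues with the dense embedding of token i."""
    tok = tokens[i]
    emb = vectors.get(tok.lower(), np.zeros(DIM))  # OOV -> zero vector
    shape = np.array([tok.istitle(), tok.isupper(), tok.isdigit()], float)
    return np.concatenate([emb, shape])

sent = ["patient", "denies", "fever"]
X = np.stack([token_features(sent, i) for i in range(len(sent))])
print(X.shape)  # (3, 53): one feature row per token, fed to a CRF/NN tagger
```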
Abstract:
Recent advances in neural language models have contributed new methods for learning distributed vector representations of words (also called word embeddings). Two such methods are the continuous bag-of-words model and the skip-gram model. These methods have been shown to produce embeddings that capture higher-order relationships between words and that are highly effective in natural language processing tasks involving word similarity and word analogy. Despite these promising results, there has been little analysis of the use of these word embeddings for retrieval. Motivated by these observations, in this paper we set out to determine how such word embeddings can be used within a retrieval model and what the benefit might be. To this end, we use neural word embeddings within the well-known translation language model for information retrieval. This language model captures implicit semantic relations between the words in queries and those in relevant documents, thus producing more accurate estimations of document relevance. The word embeddings used to estimate the neural language models produce translations that differ from those of previous translation language model approaches, and these differences deliver improvements in retrieval effectiveness. The models are robust to the choices made in building the word embeddings; indeed, our results show that the embeddings need not even be produced from the same corpus used for retrieval.
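To make the mechanism concrete, here is a toy Python sketch of a translation language model whose translation probabilities come from embedding similarity. The normalization (truncated cosine, renormalized over the vocabulary) and the smoothing scheme are assumptions for illustration, not necessarily the estimator used in the paper:

```python
import math
import numpy as np

# Toy vectors stand in for trained word embeddings.
emb = {
    "car":  np.array([0.9, 0.1, 0.0]),
    "auto": np.array([0.8, 0.2, 0.1]),
    "bank": np.array([0.0, 0.1, 0.9]),
}

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def p_translate(w, u, vocab):
    """p_t(w|u): embedding similarity, normalized over the vocabulary."""
    sims = {x: max(cos(emb[x], emb[u]), 0.0) for x in vocab}
    z = sum(sims.values()) or 1.0
    return sims[w] / z

def score(query, doc, vocab, mu=0.5):
    """log p(q|d): translation LM with uniform background smoothing."""
    tf = {u: doc.count(u) / len(doc) for u in vocab}
    logp = 0.0
    for w in query:
        p = sum(p_translate(w, u, vocab) * tf[u] for u in vocab)
        logp += math.log((1 - mu) * p + mu * (1.0 / len(vocab)))
    return logp

vocab = list(emb)
# "car" never occurs in the document, but its embedding neighbor "auto" does,
# so the translation model still assigns the document non-trivial probability.
print(score(["car"], ["auto", "auto", "bank"], vocab))
```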
Abstract:
Let W^{m,p} denote the Sobolev space of functions on R^n whose distributional derivatives of order up to m lie in L^p(R^n), for 1 ≤ p ≤ ∞. When 1 < p < ∞, it is known that the multipliers on W^{m,p} are the same as those on L^p. This result is true for p = 1 only if n = 1. In fact, we prove that the integrable distributions of order ≤ 1 whose first-order derivatives are also integrable of order ≤ 1 belong to the class of multipliers on W^{m,1}, and that there are such distributions which are not bounded measures. These distributions are also multipliers on L^p for 1 < p < ∞. Moreover, they form exactly the multiplier space of a certain Segal algebra. We also prove that the multipliers on W^{m,1} are necessarily integrable distributions of order ≤ 1 or ≤ 2 according as m is odd or even. We obtain the multipliers from L^1(R^n) into W^{m,p}, 1 ≤ p ≤ ∞, and the multiplier space of W^{m,1} is realised as the dual of a certain space of continuous functions on R^n which vanish at infinity.
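For reference, the standard definitions behind the statement, assuming the usual convention that a multiplier here means a bounded translation-invariant (convolution) operator:

```latex
% The Sobolev space in question:
\[
  W^{m,p}(\mathbb{R}^n)
    = \{\, f \in L^p(\mathbb{R}^n) : D^{\alpha} f \in L^p(\mathbb{R}^n)
        \text{ for all } |\alpha| \le m \,\},
  \qquad 1 \le p \le \infty ,
\]
% and a multiplier on W^{m,p} is a distribution T acting by convolution with
\[
  \|T * f\|_{W^{m,p}} \;\le\; C\, \|f\|_{W^{m,p}}
  \quad \text{for all } f \in \mathcal{S}(\mathbb{R}^n).
\]
```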
Abstract:
This thesis consists of three articles on Orlicz-Sobolev capacities. Capacity is a set function which gives information about the size of sets. It is a useful concept in the study of partial differential equations, in generalizations of exponential-type inequalities, in Lebesgue point theory, and in other topics related to weakly differentiable functions, such as functions belonging to a Sobolev or Orlicz-Sobolev space. Throughout the thesis it is assumed that the defining function of the Orlicz-Sobolev space, the Young function, satisfies certain growth conditions. In the first article, the null sets of two different versions of Orlicz-Sobolev capacity are studied, and sufficient conditions are given under which the two versions have the same null sets. The importance of having information about null sets lies in the fact that sets of capacity zero play a role in the Orlicz-Sobolev setting similar to that played by sets of measure zero in the Lebesgue and Orlicz space settings. The second article continues the work of the first: it is shown that if a Young function satisfies certain conditions, then the two versions of Orlicz-Sobolev capacity have the same null sets for its complementary Young function. In the third article, the metric properties of Orlicz-Sobolev capacities are studied. It is usually difficult or impossible to calculate the capacity of a set, and in applications it is often useful to have estimates for the Orlicz-Sobolev capacities of balls. Such estimates are obtained in that article when the Young function satisfies some growth conditions.
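For concreteness, one standard variational definition of an Orlicz-Sobolev capacity; the thesis's two versions may differ in detail, so this is only a representative form:

```latex
% For a Young function \Phi and a set E \subset \mathbb{R}^n, a common
% variational Orlicz--Sobolev capacity is
\[
  C_{\Phi}(E) \;=\; \inf \Big\{ \|u\|_{W^{1,\Phi}(\mathbb{R}^n)}
    \;:\; u \in W^{1,\Phi}(\mathbb{R}^n),\; u \ge 1
    \text{ on a neighbourhood of } E \Big\},
\]
% where W^{1,\Phi} is the Orlicz--Sobolev space of functions u with
% \int \Phi(|u|)\,dx < \infty and \int \Phi(|\nabla u|)\,dx < \infty.
% Sets with C_{\Phi}(E) = 0 play the role that null sets play in L^p theory.
```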
Abstract:
The images of Hermite and Laguerre-Sobolev spaces under the Hermite and special Hermite semigroups (respectively) are characterized. These are used to characterize the image of the Schwartz class of rapidly decreasing functions f on R^n and C^n under these semigroups. The image of the space of tempered distributions is also considered, and a Paley-Wiener theorem for the windowed (short-time) Fourier transform is proved.
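For reference, the windowed (short-time) Fourier transform referred to in the abstract, in one standard normalization (the paper's conventions may differ):

```latex
\[
  V_g f(x,\xi) \;=\; \int_{\mathbb{R}^n} f(t)\, \overline{g(t-x)}\,
    e^{-2\pi i\, t\cdot\xi}\, dt ,
\]
% where g is a fixed window function; a Paley--Wiener theorem for V_g
% characterizes the image of a given function class under this transform.
```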
Abstract:
The aim of this paper is to obtain certain characterizations for the image of a Sobolev space on the Heisenberg group under the heat kernel transform. We give three types of characterizations for the image of a Sobolev space of positive order H^m(H^n), m ∈ N^n, under the heat kernel transform on H^n, using direct sums and direct integrals of Bergman spaces and certain unitary representations of H^n which can be realized on the Hilbert space of Hilbert-Schmidt operators on L^2(R^n). We also show that the image of the Sobolev space of negative order H^{-s}(H^n), s > 0, is a direct sum of two weighted Bergman spaces. Finally, we try to obtain some pointwise estimates for the functions in the image of the Schwartz class on H^n under the heat kernel transform. (C) 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
Abstract:
BoostMap is a recently proposed method for efficient approximate nearest neighbor retrieval in arbitrary non-Euclidean spaces with computationally expensive and possibly non-metric distance measures. Database and query objects are embedded into a Euclidean space, in which similarities can be rapidly measured using a weighted Manhattan distance. The key idea is formulating embedding construction as a machine learning task, where AdaBoost is used to combine simple, 1D embeddings into a multidimensional embedding that preserves a large amount of the proximity structure of the original space. This paper demonstrates that, using the machine learning formulation of BoostMap, we can optimize embeddings for indexing and classification, in ways that are not possible with existing alternatives for constructive embeddings, and without additional costs in retrieval time. First, we show how to construct embeddings that are query-sensitive, in the sense that they yield a different distance measure for different queries, so as to improve nearest neighbor retrieval accuracy for each query. Second, we show how to optimize embeddings for nearest neighbor classification tasks, by tuning them to approximate a parameter space distance measure, instead of the original feature-based distance measure.
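A minimal Python sketch of the construction described above: 1D embeddings of the form F_r(x) = d(x, r) are combined into a multidimensional embedding, compared with a weighted Manhattan distance. The reference objects and weights are fixed by hand here; in BoostMap proper, AdaBoost selects and weights them, and all names and the toy distance below are illustrative:

```python
import numpy as np

def d(x, y):
    # Placeholder for an expensive, possibly non-metric distance measure.
    return float(np.abs(x - y).sum())

references = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 2.0])]
weights = np.array([0.5, 0.3, 0.2])  # in BoostMap these come from AdaBoost

def embed(x):
    """Multidimensional embedding built from 1D reference-object embeddings."""
    return np.array([d(x, r) for r in references])

def approx_dist(ex, ey):
    """Weighted Manhattan (L1) distance in the embedded Euclidean space."""
    return float(weights @ np.abs(ex - ey))

db = [np.array([0.1, 0.2]), np.array([0.9, 1.1]), np.array([0.2, 1.8])]
db_emb = [embed(x) for x in db]      # embed the database offline
q = embed(np.array([1.0, 0.9]))      # embedding a query costs only a few d() calls
nn = min(range(len(db)), key=lambda i: approx_dist(q, db_emb[i]))
print("approximate nearest neighbor index:", nn)
```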