984 results for compressed sensing theory (CS)
Abstract:
Field-based soil moisture measurements are cumbersome. Remote sensing techniques are therefore needed, because they allow field- and landscape-scale mapping of soil moisture, depth-averaged through the root zone of the existing vegetation. The objective of this study was to evaluate the accuracy of an empirical relationship for calculating soil moisture from remote sensing data over the irrigated soils of the Apodi Plateau, in the Brazilian semiarid region. The relationship had previously been tested for irrigated soils in Mexico, Egypt, and Pakistan, with promising results. Here it was evaluated with experimental data collected in a 5 ha irrigated cotton field. The energy balance and the evaporative fraction (Λ) were measured by the Bowen ratio method, and soil moisture (θ) data were collected with a PR2 Profile Probe (Delta-T Devices Ltd). The relationship was first tested with the experimentally collected Λ and θ values and then applied using Λ values obtained from the Surface Energy Balance Algorithm for Land (SEBAL) and three Landsat 5 TM images. Measured and estimated θ values were closely correlated (p < 0.05, R² = 0.84), and there were no significant differences according to Student's t-test (p < 0.01). The statistical analyses showed that the empirical relationship can be applied to estimate the root-zone soil moisture of irrigated soils, that is, when the evaporative fraction is greater than 0.45.
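As an illustration of the statistical evaluation described (a correlation and a paired Student t-test between measured and estimated θ), here is a minimal sketch on synthetic data; all numbers are hypothetical stand-ins for the field measurements and the SEBAL-derived estimates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical measured root-zone soil moisture (cm3/cm3)
theta_measured = rng.uniform(0.15, 0.35, size=30)
# Hypothetical remote-sensing estimate with small noise, standing in for
# theta derived from the evaporative fraction (Lambda)
theta_estimated = theta_measured + rng.normal(0.0, 0.01, size=30)

# Correlation between measured and estimated values (R^2)
slope, intercept, r, p_corr, se = stats.linregress(theta_measured, theta_estimated)
r_squared = r**2

# Paired Student t-test: do the two series differ significantly?
t_stat, p_ttest = stats.ttest_rel(theta_measured, theta_estimated)

print(f"R^2 = {r_squared:.2f}, paired t-test p = {p_ttest:.3f}")
```

With synthetic data this close, R² comes out high and the paired test finds no systematic difference, mirroring the kind of result the study reports.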
Abstract:
Much research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.
Abstract:
A mathematical model that describes the behavior of low-resolution Fresnel lenses encoded in any low-resolution device (e.g., a spatial light modulator) is developed. The effects of low-resolution codification, such as the appearance of new secondary lenses, are studied for a general case. General expressions for the phase of these lenses are developed, showing that each lens behaves as if it were encoded through all pixels of the low-resolution device. Simple expressions for the light distribution in the focal plane and its dependence on the encoded focal length are developed and commented on in detail. For a given codification device, an optimum focal length is found for best lens performance. An optimization method for encoding a single lens with a short focal length is proposed.
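A minimal numerical sketch of the situation the model describes: an ideal quadratic Fresnel phase sampled by a pixelated device, whose focal-plane response can be inspected via an FFT. All parameters (wavelength, pixel pitch, focal length) are illustrative and not taken from the paper:

```python
import numpy as np

# 1-D sketch of a Fresnel lens encoded on a pixelated device.
wavelength = 633e-9          # illustrative HeNe wavelength (m)
focal = 0.5                  # encoded focal length (m)
pixel = 32e-6                # pixel pitch of the low-resolution device (m)
n_pixels = 256

# Pixel-centre coordinates across the device aperture
x = (np.arange(n_pixels) - n_pixels / 2) * pixel

# Ideal quadratic Fresnel phase, wrapped to [0, 2*pi)
phase_ideal = -np.pi * x**2 / (wavelength * focal)
phase_encoded = np.mod(phase_ideal, 2 * np.pi)

# Each pixel holds a single constant phase value: the encoded lens is a
# sampled version of the ideal phase profile.
field = np.exp(1j * phase_encoded)

# Far-field intensity via FFT: the pixelated (sampled) codification is what
# gives rise to extra diffraction orders, i.e. the secondary lenses.
intensity = np.abs(np.fft.fftshift(np.fft.fft(field)))**2
print("peak position:", int(np.argmax(intensity)))
```

Sweeping `focal` in such a simulation is one way to look for the optimum encoded focal length for a given device.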
Abstract:
A mathematical model describing the behavior of low-resolution Fresnel encoded lenses (LRFELs) encoded in any low-resolution device (e.g., a spatial light modulator) has recently been developed. From this model, an LRFEL with a short focal length was optimized by imposing maximum light intensity on the optical axis. With this model, analytical expressions for the light-amplitude distribution, the diffraction efficiency, and the frequency response of the optimized LRFELs are derived.
Abstract:
We study the contribution to vacuum decay in field theory due to the interaction between the long- and short-wavelength modes of the field. The field model considered consists of a scalar field of mass M with a cubic term in the potential. The dynamics of the long-wavelength modes becomes diffusive as a result of this interaction. The diffusive behavior is described by the reduced Wigner function that characterizes the state of the long-wavelength modes. This function is obtained from the full Wigner function by integrating out the degrees of freedom of the short-wavelength modes. The dynamical equation for the reduced Wigner function becomes a kind of Fokker-Planck equation, which is solved with suitable boundary conditions enforcing an initial metastable vacuum state trapped in the potential well. As a result, a finite activation rate is found, even at zero temperature, for the formation of true vacuum bubbles of size M⁻¹. This effect makes a substantial contribution to the total decay rate.
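Schematically, the "kind of Fokker-Planck equation" referred to can be written as follows; this is a generic sketch, not the paper's equation, with the coefficient D standing in for the diffusion induced by the short-wavelength modes:

$$
\frac{\partial W_r}{\partial t} \;=\; \{H, W_r\}_{\mathrm{PB}} \;+\; D\,\frac{\partial^2 W_r}{\partial \pi^2},
$$

where $W_r(\phi,\pi;t)$ is the reduced Wigner function of the long-wavelength modes, $\{\cdot,\cdot\}_{\mathrm{PB}}$ is the Poisson bracket generating the classical drift in the potential, and the diffusion term encodes the noise from the integrated-out short-wavelength modes. Solving such an equation with an initial distribution trapped in the metastable well yields a finite escape (activation) rate even at zero temperature.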
Abstract:
A covariant formalism is developed for describing perturbations on vacuum domain walls and strings. The treatment applies to arbitrary domain walls in (N+1)-dimensional flat spacetime, including the case of bubbles of a true vacuum nucleating in a false vacuum. Straight strings and planar walls in de Sitter space, as well as closed strings and walls nucleating during inflation, are also considered. Perturbations are represented by a scalar field defined on the unperturbed wall or string world sheet. In a number of interesting cases, this field has a tachyonic mass and a nonminimal coupling to the world-sheet curvature.
Abstract:
A systematic time-dependent perturbation scheme for classical canonical systems is developed based on a Wick's theorem for thermal averages of time-ordered products. The occurrence of the derivatives with respect to the canonical variables noted by Martin, Siggia, and Rose implies that two types of Green's functions have to be considered, the propagator and the response function. The diagrams resulting from Wick's theorem are "double graphs" analogous to those introduced by Dyson and also by Kawasaki, in which the response-function lines form a "tree structure" completed by propagator lines. The implication of a fluctuation-dissipation theorem on the self-energies is analyzed and compared with recent results by Deker and Haake.
Abstract:
The class of Schoenberg transformations, embedding Euclidean distances into higher dimensional Euclidean spaces, is presented, and derived from theorems on positive definite and conditionally negative definite matrices. Original results on the arc lengths, angles and curvature of the transformations are proposed, and visualized on artificial data sets by classical multidimensional scaling. A distance-based discriminant algorithm and a robust multidimensional centroid estimate illustrate the theory, closely connected to the Gaussian kernels of Machine Learning.
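A small sketch of the core property: applying a Schoenberg-type transformation componentwise to squared Euclidean distances yields distances that still embed in a (generally higher-dimensional) Euclidean space, which can be checked with the classical MDS positive semi-definiteness criterion. The particular transformation and parameter below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Points in R^2 and their squared Euclidean distances
X = rng.normal(size=(20, 2))
sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)

def schoenberg(D, a=1.0):
    # One classical Schoenberg transformation applied componentwise to
    # squared distances: (1 - exp(-a*D)) / a. The parameter a is illustrative.
    return (1.0 - np.exp(-a * D)) / a

def is_euclidean(D, tol=1e-9):
    # Classical MDS test: the double-centred matrix -J D J / 2 must be
    # positive semi-definite for D to embed in some Euclidean space.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D @ J
    return np.linalg.eigvalsh(B).min() > -tol

D2 = schoenberg(sq, a=0.5)
print(is_euclidean(sq), is_euclidean(D2))
```

The same centred matrix B also yields the classical multidimensional scaling coordinates used to visualize the transformed configurations.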
Abstract:
We study the process of vacuum decay in quantum field theory, focusing on the stochastic aspects of the interaction between long- and short-wavelength modes. This interaction results in a diffusive behavior of the reduced Wigner function describing the state of the long-wavelength modes, and thereby leads to a finite activation rate even at zero temperature. This effect can make a substantial contribution to the total decay rate.
Abstract:
We consider vacuum solutions in M theory of the form of a five-dimensional Kaluza-Klein black hole × T⁶. In a certain limit, these include the five-dimensional neutral rotating black hole (× T⁶). From a type-IIA standpoint, these solutions carry D0 and D6 charges. We show that there is a simple D-brane description which precisely reproduces the Hawking-Bekenstein entropy in the extremal limit, even though supersymmetry is completely broken.
Abstract:
Résumé: Following recent technological advances, digital image archives have grown qualitatively and quantitatively at an unprecedented rate. Despite the enormous possibilities they offer, these advances raise new questions about how to process the masses of acquired data. This question is at the heart of this Thesis: problems of processing digital information at very high spatial and/or spectral resolution are addressed with statistical learning approaches, namely kernel methods. The Thesis studies image classification problems, i.e., the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is on the efficiency of the algorithms, as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to remain close to the concrete problems of satellite-image users without losing sight of the interest of the proposed methods for the machine learning community from which they stem. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model: in this way, only the variables relevant to the problem are used by the classifier.
The scarcity of labeled information, and uncertainty about its relevance to the problem, motivate the next two models, based respectively on active learning and semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model considers the more theoretical question of structure among the outputs: integrating this source of information, never before considered in remote sensing, opens new research challenges.
Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.
Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are now available to users. However, even though these advances open more and more possibilities for the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with data-driven approaches relying on kernel methods. In particular, the problem of image classification, i.e., the categorization of an image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The emphasis is on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that users would not adopt.
The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the proposed models have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the individual features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
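As an illustration of the pool-based active learning idea described (not the thesis's exact algorithm), here is a minimal uncertainty-sampling loop on synthetic "spectral" data; the classifier, data, and query rule are all stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for pixel spectra: two Gaussian classes in 4 "bands"
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)), rng.normal(1.5, 1.0, (200, 4))])
y = np.array([0] * 200 + [1] * 200)

# Small initial labeled set (5 per class); the rest is the unlabeled pool
labeled = list(rng.choice(200, size=5, replace=False)) \
        + list(200 + rng.choice(200, size=5, replace=False))
pool = [i for i in range(len(X)) if i not in labeled]

clf = LogisticRegression()
for _ in range(5):                      # five user-interaction rounds
    clf.fit(X[labeled], y[labeled])
    # Uncertainty sampling: query the pool sample closest to the boundary
    margin = np.abs(clf.decision_function(X[pool]))
    query = pool.pop(int(np.argmin(margin)))
    labeled.append(query)               # the "user" provides the label

print("labeled set size:", len(labeled))
```

Each round grows the training set with the most ambiguous sample, which is the machine-queries-user interaction the abstract refers to.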
Abstract:
The self-intermediate dynamic structure factor Fs(k,t) of liquid lithium near the melting temperature is calculated by molecular dynamics. The results are compared with the predictions of several theoretical approaches, paying special attention to the Lovesey model and the Wahnström and Sjögren mode-coupling theory. To this end, the results for the second memory function of Fs(k,t) predicted by both models are compared with those calculated from the simulations.
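A minimal sketch of how Fs(k,t) is computed from particle trajectories, here on a toy free-particle "trajectory" with arbitrary units rather than an actual liquid-lithium MD run:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy trajectory: N non-interacting particles with Gaussian velocities,
# standing in for a molecular-dynamics run (units are arbitrary).
n_part, n_steps, dt = 500, 100, 0.01
vel = rng.normal(size=(n_part, 3))
pos = np.cumsum(np.broadcast_to(vel * dt, (n_steps, n_part, 3)), axis=0)

def self_isf(pos, k):
    # Self-intermediate scattering function
    #   Fs(k, t) = < exp(i k . (r_j(t) - r_j(0))) >_j
    # averaged over particles, with k taken along the x axis.
    disp = pos - pos[0]                       # r_j(t) - r_j(0)
    return np.real(np.exp(1j * k * disp[..., 0]).mean(axis=1))

fs = self_isf(pos, k=2.0)
print(fs[0], fs[-1])
```

Fs(k,t) starts at 1 and decays with time; comparing such curves (and their memory functions) against model predictions is the analysis the abstract describes.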