846 results for Kernel Functions
Abstract:
Modeling of concentration-response functions has become extremely popular in ecotoxicology during the last decade, since a fitted model describes the full response pattern of a given substance. However, reliable modeling is demanding in terms of data, which conflicts with the current trend in ecotoxicology of reducing, for cost and ethical reasons, the amount of data produced during an experiment. It is therefore crucial to choose experimental designs in a cost-effective manner. In this paper, we propose to use the theory of locally D-optimal designs to determine the set of concentrations to be tested so that the parameters of the concentration-response function can be estimated with high precision. We illustrate this approach by determining the locally D-optimal designs for estimating the toxicity of the herbicide dinoseb to daphnids and algae. The results show that the number of concentrations to be tested is often equal to the number of parameters and is often related to their meaning, i.e. the optimal concentrations lie close to the parameter values. Furthermore, the results show that the locally D-optimal design often has the minimal number of support points and is not very sensitive to small changes in the nominal values of the parameters. In order to reduce the experimental cost and the use of test organisms, especially in long-term studies, reliable nominal values may therefore be fixed based on prior knowledge and literature research instead of on preliminary experiments.
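As a rough illustration of the locally D-optimal design idea used above (this is a sketch, not the authors' code: the three-parameter log-logistic model, the nominal parameter values, and the candidate concentration grid are all assumptions made here for illustration), one can search for the design that maximizes the determinant of the Fisher information matrix:

# Sketch: locally D-optimal concentrations for an assumed three-parameter
# log-logistic concentration-response model
#   f(x; top, EC50, slope) = top / (1 + (x / EC50)**slope).
# Nominal parameter values and the candidate grid are illustrative only.
import itertools
import numpy as np

theta0 = np.array([1.0, 2.0, 3.0])   # nominal values of (top, EC50, slope)

def response(x, theta):
    top, ec50, slope = theta
    return top / (1.0 + (x / ec50) ** slope)

def gradient(x, theta, eps=1e-6):
    """Numerical gradient of the response with respect to the parameters."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        dt = np.zeros_like(theta)
        dt[i] = eps
        g[i] = (response(x, theta + dt) - response(x, theta - dt)) / (2 * eps)
    return g

# Precompute gradients at the nominal parameters ("locally" optimal design).
candidates = np.geomspace(0.05, 20.0, 40)
grads = {x: gradient(x, theta0) for x in candidates}

def d_criterion(design):
    """Determinant of the Fisher information matrix for an equal-weight design."""
    M = sum(np.outer(grads[x], grads[x]) for x in design)
    return np.linalg.det(M)

# Exhaustive search over three-point designs (number of support points equal
# to the number of parameters, as found in the paper).
best = max(itertools.combinations(candidates, 3), key=d_criterion)
print("locally D-optimal concentrations:", np.round(best, 3))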
Abstract:
We study steady-state correlation functions of nonlinear stochastic processes driven by external colored noise. We present a methodology that provides explicit expressions of correlation functions approximating simultaneously short- and long-time regimes. The non-Markov nature is reduced to an effective Markovian formulation, and the nonlinearities are treated systematically by means of double expansions in high and low frequencies. We also derive some exact expressions for the coefficients of these expansions for arbitrary noise by means of a generalization of projection-operator techniques.
Abstract:
The intensity correlation functions C(t) for the colored-gain-noise model of dye lasers are analyzed and compared with those for the loss-noise model. For correlation times τ larger than the deterministic relaxation time td, we show with the use of the adiabatic approximation that C(t) values coincide for both models. For small correlation times we use a method that provides explicit expressions of non-Markovian correlation functions, approximating simultaneously short- and long-time behaviors. Comparison with numerical simulations shows excellent agreement for both short- and long-time regimes. It is found that, when the correlation time of the noise increases, differences between the gain- and loss-noise models tend to disappear. The decay of C(t) for both models can be described by a time scale that approaches the deterministic relaxation time. However, in contrast with the loss-noise model, a secondary time scale remains at large times for the gain-noise model, which could allow one to distinguish between the two models.
Abstract:
Pedotransfer functions (PTFs) were developed to estimate the parameters (α, n, θr, and θs) of the van Genuchten (1980) model used to describe soil water retention curves. The data came from various sources, mainly from studies conducted by universities in Northeast Brazil, by the Brazilian Agricultural Research Corporation (Embrapa), and by the corporation for the development of the São Francisco and Parnaíba river basins (Codevasf), totaling 786 retention curves, which were divided into two data sets: 85 % for the development of the PTFs and 15 %, considered independent data, for testing and validation. Aside from general PTFs developed for all soils together, specific PTFs were developed for the soil classes Ultisols, Oxisols, Entisols, and Alfisols by multiple regression techniques, using a stepwise procedure (forward and backward) to select the best predictors. Two types of PTFs were developed: the first included all predictors (bulk density and the proportions of sand, silt, clay, and organic matter), and the second only the proportions of sand, silt, and clay. The adequacy of the PTFs was evaluated based on the correlation coefficient (R) and the Willmott index (d). To evaluate the PTFs for moisture content at specific pressure heads, we used the root mean square error (RMSE). The prediction of the retention curve by the PTFs was relatively poor, except for the residual water content. The inclusion of organic matter as a PTF predictor improved the prediction of the van Genuchten parameter α. The performance of the soil-class-specific PTFs was not better than that of the general PTFs. Except for the water content of saturated soil estimated from particle size distribution, the tested models for predicting water content at specific pressure heads proved satisfactory. Predictions of water content at pressure heads more negative than -0.6 m, using a PTF based only on particle size distribution, are only slightly inferior to those obtained with PTFs that include bulk density and organic matter content.
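The three statistics named in this abstract (correlation coefficient R, Willmott's index of agreement d, and the RMSE) can be computed as in the short sketch below; the formula for d follows Willmott (1981), and the predicted/observed arrays are invented placeholders:

# Sketch: evaluation statistics used for the PTFs above, computed for
# invented predicted (P) and observed (O) water contents.
import numpy as np

def evaluate(P, O):
    P, O = np.asarray(P, float), np.asarray(O, float)
    r = np.corrcoef(P, O)[0, 1]              # correlation coefficient R
    rmse = np.sqrt(np.mean((P - O) ** 2))    # root mean square error
    Obar = O.mean()
    # Willmott (1981) index of agreement d
    d = 1.0 - np.sum((P - O) ** 2) / np.sum((np.abs(P - Obar) + np.abs(O - Obar)) ** 2)
    return r, d, rmse

P = [0.31, 0.27, 0.22, 0.18, 0.12]   # predicted theta (m3 m-3), placeholder values
O = [0.33, 0.25, 0.24, 0.16, 0.13]   # observed theta, placeholder values
print("R = %.3f, d = %.3f, RMSE = %.4f" % evaluate(P, O))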
Abstract:
Studies on water retention and availability are scarce for subtropical or humid temperate climate regions of the southern hemisphere. The aims of this study were to evaluate the relations of the soil physical, chemical, and mineralogical properties with water retention and availability for the generation and validation of continuous point pedotransfer functions (PTFs) for soils of the State of Santa Catarina (SC) in the South of Brazil. Horizons of 44 profiles were sampled in areas under different cover crops and regions of SC, to determine: field capacity (FC, 10 kPa), permanent wilting point (PWP, 1,500 kPa), available water content (AW, by difference), saturated hydraulic conductivity, bulk density, aggregate stability, particle size distribution (seven classes), organic matter content, and particle density. Chemical and mineralogical properties were obtained from the literature. Spearman's rank correlation analysis and path analysis were used in the statistical analyses. The point PTFs for estimation of FC, PWP and AW were generated for the soil surface and subsurface through multiple regression analysis, followed by robust regression analysis, using two sets of predictive variables. Soils with finer texture and/or greater organic matter content retain more moisture, and organic matter is the property that mainly controls the water availability to plants in soil surface horizons. Path analysis was useful in understanding the relationships between soil properties for FC, PWP and AW. The predictive power of the generated PTFs to estimate FC and PWP was good for all horizons, while AW was best estimated by more complex models with better prediction for the surface horizons of soils in Santa Catarina.
Abstract:
An algorithm for computing correlation filters based on synthetic discriminant functions that can be displayed on current spatial light modulators is presented. The procedure is nondivergent, computationally feasible, and capable of producing multiple solutions, thus overcoming some of the pitfalls of previous methods.
Abstract:
Over the past three decades, pedotransfer functions (PTFs) have been widely used by soil scientists to estimate soil properties in temperate regions in response to the lack of soil data for these regions. Several authors have indicated that little effort has been dedicated to the prediction of soil properties in the humid tropics, where the need for soil property information is of even greater priority. The aim of this paper is to provide an up-to-date repository of past and recently published articles, as well as papers from proceedings of events, dealing with water-retention PTFs for soils of the humid tropics. Of the 35 publications found in the literature on PTFs for the prediction of water retention of soils of the humid tropics, 91 % of the PTFs are based on an empirical approach and only 9 % on a semi-physical approach. Of the empirical PTFs, 97 % are continuous and 3 % (one) is a class PTF; 97 % are based on multiple linear and nth-order polynomial regression techniques, and 3 % (one) is based on the k-Nearest Neighbor approach; 84 % of the continuous PTFs are point-based and 16 % are parameter-based; 97 % of the continuous PTFs are equation-based and 3 % (one) is based on pattern recognition. Additionally, it was found that 26 % of the tropical water-retention PTFs were developed for soils in Brazil, 26 % for soils in India, 11 % for soils in other countries of the Americas, and 11 % for soils in other countries in Africa.
Abstract:
Knowledge of the soil water retention curve (SWRC) is essential for understanding and modeling hydraulic processes in the soil. However, direct determination of the SWRC is time consuming and costly. In addition, it requires a large number of samples, due to the high spatial and temporal variability of soil hydraulic properties. An alternative is the use of models, called pedotransfer functions (PTFs), which estimate the SWRC from easy-to-measure properties. The aim of this paper was to test the accuracy of 16 point or parametric PTFs reported in the literature on different soils from the south and southeast of the State of Pará, Brazil. The PTFs tested were proposed by Pidgeon (1972), Lal (1979), Aina & Periaswamy (1985), Arruda et al. (1987), Dijkerman (1988), Vereecken et al. (1989), Batjes (1996), van den Berg et al. (1997), Tomasella et al. (2000), Hodnett & Tomasella (2002), Oliveira et al. (2002), and Barros (2010). We used a database that includes soil texture (sand, silt, and clay), bulk density, soil organic carbon, soil pH, cation exchange capacity, and the SWRC. Most of the PTFs tested did not show good performance in estimating the SWRC. The parametric PTFs, however, performed better than the point PTFs in assessing the SWRC in the tested region. Among the parametric PTFs, those proposed by Tomasella et al. (2000) achieved the best accuracy in estimating the empirical parameters of the van Genuchten (1980) model, especially when tested in the top soil layer.
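For reference, the van Genuchten (1980) model whose parameters the parametric PTFs estimate has the form below; the sketch uses the common restriction m = 1 - 1/n, and the parameter values in the example are arbitrary, not taken from any of the PTFs listed above:

# Sketch: van Genuchten (1980) soil water retention curve,
#   theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha * |h|)**n)**m,  m = 1 - 1/n.
# The parameter values below are arbitrary and only illustrate the curve shape.
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

h = np.array([0.1, 0.6, 1.0, 3.3, 15.0, 150.0])    # pressure head magnitudes (m)
theta = van_genuchten(h, theta_r=0.08, theta_s=0.45, alpha=1.5, n=1.6)  # alpha in 1/m
print(np.round(theta, 3))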
Abstract:
A systematic time-dependent perturbation scheme for classical canonical systems is developed based on a Wick's theorem for thermal averages of time-ordered products. The occurrence of the derivatives with respect to the canonical variables noted by Martin, Siggia, and Rose implies that two types of Green's functions have to be considered, the propagator and the response function. The diagrams resulting from Wick's theorem are "double graphs" analogous to those introduced by Dyson and also by Kawasaki, in which the response-function lines form a "tree structure" completed by propagator lines. The implication of a fluctuation-dissipation theorem on the self-energies is analyzed and compared with recent results by Deker and Haake.
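For orientation, the two Green's functions distinguished above and the classical fluctuation-dissipation theorem that constrains them can be written, in one common convention for a classical variable q(t) perturbed by an external field h(t) (the notation is assumed here, not taken from the paper), as

\[
C(t,t') = \langle q(t)\, q(t') \rangle \quad \text{(correlation function / propagator)}, \qquad
R(t,t') = \left. \frac{\delta \langle q(t) \rangle}{\delta h(t')} \right|_{h=0} \quad \text{(response function)},
\]
and, in thermal equilibrium at inverse temperature \(\beta\),
\[
R(t-t') = -\beta\, \theta(t-t')\, \frac{\partial}{\partial t} C(t-t').
\]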
Abstract:
We propose a criterion for the validity of semiclassical gravity (SCG) which is based on the stability of the solutions of SCG with respect to quantum metric fluctuations. We pay special attention to the two-point quantum correlation functions for the metric perturbations, which contain both intrinsic and induced fluctuations. These fluctuations can be described by the Einstein-Langevin equation obtained in the framework of stochastic gravity. Specifically, the Einstein-Langevin equation yields stochastic correlation functions for the metric perturbations which agree, to leading order in the large N limit, with the quantum correlation functions of the theory of gravity interacting with N matter fields. The homogeneous solutions of the Einstein-Langevin equation are equivalent to the solutions of the perturbed semiclassical equation, which describe the evolution of the expectation value of the quantum metric perturbations. The information on the intrinsic fluctuations, which are connected to the initial fluctuations of the metric perturbations, can also be retrieved entirely from the homogeneous solutions. However, the induced metric fluctuations proportional to the noise kernel can only be obtained from the Einstein-Langevin equation (the inhomogeneous term). These equations exhibit runaway solutions with exponential instabilities. A detailed discussion about different methods to deal with these instabilities is given. We illustrate our criterion by showing explicitly that flat space is stable and a description based on SCG is a valid approximation in that case.
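For orientation, the Einstein-Langevin equation of stochastic gravity referred to above can be written schematically as follows (a standard textbook form; signs, factors, and the omission of a cosmological constant follow one common convention and are not taken from this paper):

\[
G_{ab}[g+h](x) = 8\pi G \left( \langle \hat{T}_{ab}[g+h](x) \rangle + \xi_{ab}(x) \right),
\]
where the stochastic source has zero mean and correlations given by the noise kernel,
\[
\langle \xi_{ab}(x)\, \xi_{cd}(y) \rangle_s = N_{abcd}(x,y) = \tfrac{1}{2} \left\langle \{ \hat{t}_{ab}(x),\, \hat{t}_{cd}(y) \} \right\rangle,
\qquad \hat{t}_{ab} \equiv \hat{T}_{ab} - \langle \hat{T}_{ab} \rangle .
\]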
Abstract:
Taking into account the nature of the hydrological processes involved in the in situ measurement of field capacity (FC), this study proposes a variation of the definition of FC aimed not only at minimizing the inadequacies of its determination, but also at maintaining its original, practical meaning. Analysis of FC data for 22 Brazilian soils and of additional FC data from the literature, all measured according to the proposed definition, which is based on a 48-h drainage time after infiltration by shallow ponding, indicates a weak dependency on the amount of infiltrated water, antecedent moisture level, soil morphology, and the level of the groundwater table, but a strong dependency on basic soil properties. This dependence on basic soil properties allowed the FC of the 22 soil profiles to be determined by pedotransfer functions (PTFs) using the input variables usually adopted in the prediction of soil water retention. Among the input variables, the soil moisture content θ(6 kPa) had the greatest impact. Indeed, a linear PTF based only on this variable resulted in FC estimates with a root mean squared residue of less than 0.04 m³ m⁻³ for most soils individually. Such a PTF proved to be a better FC predictor than the traditional method of using the moisture content at an arbitrary suction. Our FC data were compatible with an equivalent and broader USA database found in the literature, mainly for medium-texture soil samples. One reason for the differences between the FCs of the two data sets for fine-textured soils is their different drainage times. A standardized procedure for the in situ determination of FC is therefore recommended.
Abstract:
In this paper, we develop a data-driven methodology to characterize the likelihood of orographic precipitation enhancement using sequences of weather radar images and a digital elevation model (DEM). Geographical locations whose topographic characteristics favor repeatable and persistent orographic precipitation, such as stationary cells, upslope rainfall enhancement, and repeated convective initiation, are detected by analyzing the spatial distribution of a set of precipitation cells extracted from the radar imagery. Topographic features such as terrain convexity and gradients computed from the DEM at multiple spatial scales, as well as velocity fields estimated from the sequences of weather radar images, are used as explanatory factors to describe the occurrence of localized precipitation enhancement. The latter is represented as a binary process by defining a threshold on the number of cell occurrences at particular locations. Both two-class and one-class support vector machine classifiers are tested to separate the presumed orographic cells from the non-orographic ones in the space of contributing topographic and flow features. Site-based validation is carried out to estimate realistic generalization skills of the obtained spatial prediction models. Owing to the high class separability, the decision function of the classifiers can be interpreted as a likelihood or susceptibility of orographic precipitation enhancement. The developed approach can serve as a basis for refining radar-based quantitative precipitation estimates and short-term forecasts, or for generating stochastic precipitation ensembles conditioned on the local topography.
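As a schematic of the classification step described above (the features, data, and hyperparameters below are placeholders invented for illustration, not the authors' configuration), a two-class and a one-class support vector machine can be fitted to topographic and flow features and their decision functions read as susceptibility scores:

# Sketch: two-class vs one-class SVM on placeholder topographic/flow features.
# Columns might stand for multi-scale convexity, terrain gradient, and flow speed;
# the data and hyperparameters are invented for illustration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # feature vectors at candidate locations
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = presumed orographic cell (toy label)

X_std = StandardScaler().fit_transform(X)

# Two-class SVM trained on orographic vs non-orographic cells.
two_class = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_std, y)

# One-class SVM trained only on the presumed orographic cells.
one_class = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_std[y == 1])

# The (uncalibrated) decision functions can be read as susceptibility scores.
print("two-class scores:", two_class.decision_function(X_std[:3]))
print("one-class scores:", one_class.decision_function(X_std[:3]))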
Abstract:
Under field conditions in the Amazon forest, soil bulk density is difficult to measure. Rigorous methodological criteria must be applied to obtain reliable inventories of C stocks and soil nutrients, making this process expensive and sometimes unfeasible. This study aimed to generate models to estimate soil bulk density from parameters that can be easily and reliably measured in the field and that are available in many soil-related inventories. Stepwise regression models to predict bulk density were developed using data on soil C content, clay content, and pH in water from 140 permanent plots in terra firme (upland) forests near Manaus, Amazonas State, Brazil. The model results were interpreted according to the coefficient of determination (R²) and the Akaike information criterion (AIC) and were validated with a dataset consisting of 125 plots different from those used to generate the models. The model with the best performance in estimating soil bulk density under the conditions of this study included clay content and pH in water as independent variables and had R² = 0.73 and AIC = -250.29. The performance of this model in predicting soil bulk density was compared with that of models from the literature. The results showed that the locally calibrated equation was the most accurate for estimating soil bulk density for upland forests in the Manaus region.
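As an illustration of this kind of stepwise model comparison (the data are randomly generated here and the fitted coefficients do not reproduce the locally calibrated equation), candidate bulk-density models with clay content and pH in water as predictors can be compared by R² and AIC, for instance with statsmodels:

# Sketch: fitting candidate bulk-density models and comparing them by AIC.
# The data frame is randomly generated; it does not reproduce the plot data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "clay": rng.uniform(10, 80, 140),        # clay content (%)
    "pH":   rng.uniform(3.5, 5.5, 140),      # pH in water
})
df["bulk_density"] = 1.4 - 0.005 * df["clay"] + 0.05 * df["pH"] + rng.normal(0, 0.05, 140)

m1 = smf.ols("bulk_density ~ clay", data=df).fit()
m2 = smf.ols("bulk_density ~ clay + pH", data=df).fit()

for name, m in [("clay only", m1), ("clay + pH", m2)]:
    print(f"{name}: R2 = {m.rsquared:.2f}, AIC = {m.aic:.1f}")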
Abstract:
Following recent technological advances, digital image archives have grown in quality and quantity at an unprecedented rate. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the masses of acquired data. This question is at the core of this Thesis: the problems of processing digital information at very high spatial and/or spectral resolution are addressed here with statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, i.e. the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is placed on the efficiency of the algorithms, as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to stay close to the concrete problems of satellite image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model: in this way, only the variables important for solving the problem are used by the classifier. The scarcity of labeled information and the uncertainty about its relevance to the problem are at the root of the next two models, based respectively on active learning and semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses the unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model proposed considers the more theoretical question of the structure among the outputs: the integration of this source of information, never considered in remote sensing until now, opens new research challenges.

Advanced kernel methods for remote sensing image classification
Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009

Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities for the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The emphasis is put on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that users would not adopt. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions, based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
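The first of the two strategies mentioned for scarce labeled data, iterative construction of the training set through interaction with the user, is commonly implemented as uncertainty (margin) sampling around an SVM decision boundary; the sketch below shows such a generic loop on synthetic data and is not code from the thesis:

# Sketch: a generic SVM-based active-learning loop (margin/uncertainty sampling)
# on synthetic data; an illustration of the strategy, not the thesis implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = list(range(10))                 # small initial training set
pool = [i for i in range(len(y)) if i not in labeled]

for _ in range(20):                       # 20 user-interaction rounds
    clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
    # Pick the pool sample closest to the decision boundary...
    margins = np.abs(clf.decision_function(X[pool]))
    query = pool.pop(int(np.argmin(margins)))
    # ...and ask the "user" (here, the known label) to annotate it.
    labeled.append(query)

print("final accuracy on the remaining pool:", clf.score(X[pool], y[pool]))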