914 results for Probability Density Function


Relevance: 100.00%

Abstract:

Accurate assessment of the fate of salts, nutrients, and pollutants in natural, heterogeneous soils requires a proper quantification of both spatial and temporal solute spreading during solute movement. The number of experiments with multisampler devices that measure solute leaching as a function of space and time is increasing. The breakthrough curve (BTC) can characterize the temporal aspect of solute leaching, and recently the spatial solute distribution curve (SSDC) was introduced to describe the spatial solute distribution. We combined and extended both concepts to develop a tool for the comprehensive analysis of the full spatio-temporal behavior of solute leaching. The sampling locations are ranked in order of descending amount of total leaching (defined as the cumulative leaching from an individual compartment at the end of the experiment), thus collapsing both spatial axes of the sampling plane into one. The leaching process can then be described by a curved surface that is a function of the single spatial coordinate and time. This leaching surface is scaled to integrate to unity and termed S; it can efficiently represent data from multisampler solute transport experiments or simulation results from multidimensional solute transport models. The mathematical relationships between the scaled leaching surface S, the BTC, and the SSDC are established. Any desired characteristic of the leaching process can be derived from S. The analysis was applied to a chloride leaching experiment on a lysimeter with 300 drainage compartments of 25 cm² each. The sandy soil monolith in the lysimeter exhibited fingered flow in the water-repellent top layer. The observed S demonstrated the absence of a sharp separation between fingers and dry areas, owing to diverging flow in the wettable soil below the fingers. Times-to-peak, maximum solute fluxes, and total leaching varied more in high-leaching than in low-leaching compartments. This suggests a stochastic–convective transport process in the high-flow streamtubes, while convection–dispersion is predominant in the low-flow areas. S can be viewed as a bivariate probability density function. Its marginal distributions are the BTC of all sampling locations combined, and the SSDC of cumulative solute leaching at the end of the experiment. The observed S cannot be represented by assuming complete independence between its marginal distributions, indicating that S contains information about the leaching process that cannot be derived from the combination of the BTC and the SSDC.
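As a purely illustrative aside (not taken from the study), the relationship between a discretized bivariate density, its two marginals, and the independence assumption discussed above can be sketched numerically. The array `S`, its grid size, and the random data below are hypothetical stand-ins for a scaled leaching surface.

```python
import numpy as np

# Hypothetical discretization of a scaled leaching surface S(x, t):
# rows = ranked spatial coordinate, columns = time. S sums to 1.
rng = np.random.default_rng(0)
S = rng.gamma(shape=2.0, scale=1.0, size=(30, 50))
S /= S.sum()

# Marginal over time -> analogue of the SSDC (cumulative leaching per location).
ssdc = S.sum(axis=1)
# Marginal over space -> analogue of the BTC of all locations combined.
btc = S.sum(axis=0)

# Under complete independence, S would factor into the outer product of its marginals.
S_independent = np.outer(ssdc, btc)

# A simple measure of how far S is from the independent reconstruction.
discrepancy = np.abs(S - S_independent).sum()
print(f"L1 distance between S and outer(SSDC, BTC): {discrepancy:.4f}")
```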

Relevance: 100.00%

Abstract:

Recently, a patchwork-based audio watermarking scheme was proposed in [1], which embeds watermarks by modifying the means of absolute-valued discrete cosine transform (DCT) coefficients corresponding to suitable fragments. This audio watermarking scheme is more robust to common attacks than existing counterparts. In this paper, we present a detailed analysis of this audio watermarking scheme. We first derive the probability density function (pdf) of a random variable corresponding to the mean of an absolute-valued DCT fragment. Then, based on the obtained pdf, we show how the watermarking parameters affect the performance of the scheme. The analysis result provides a guideline for the selection of watermarking parameters. The effectiveness of our analysis is verified by simulations using a large number of real-world audio segments.
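The paper derives this pdf analytically; the sketch below is only a loose numerical illustration of the quantity involved. It estimates the distribution of the mean of absolute-valued DCT coefficients over synthetic fragments and compares it with the Gaussian shape suggested by the central limit theorem. The fragment length, the zero-mean Gaussian coefficient model, and the sample count are assumptions, not values from [1].

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)

FRAGMENT_LEN = 1024   # hypothetical fragment length
N_FRAGMENTS = 20000   # Monte Carlo sample size

# Synthetic "audio" fragments modelled as zero-mean Gaussian noise.
fragments = rng.normal(0.0, 1.0, size=(N_FRAGMENTS, FRAGMENT_LEN))

# Mean of the absolute-valued DCT coefficients of each fragment.
coeffs = dct(fragments, type=2, norm="ortho", axis=1)
means = np.abs(coeffs).mean(axis=1)

# By the central limit theorem, the mean of |coefficients| is approximately
# Gaussian for long fragments; compare empirical and theoretical moments.
print(f"empirical mean = {means.mean():.4f}, empirical std = {means.std():.5f}")
# For a standard normal coefficient: E|X| = sqrt(2/pi), Var|X| = 1 - 2/pi.
print(f"CLT approximation: mean = {np.sqrt(2/np.pi):.4f}, "
      f"std = {np.sqrt((1 - 2/np.pi)/FRAGMENT_LEN):.5f}")
```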

Relevance: 100.00%

Abstract:

Developing a watermarking method that is robust to cropping attack is a challenging task in image watermarking. Moment-based watermarking schemes show good robustness to common signal-processing attacks and some geometric attacks, but are sensitive to cropping attack. In this paper, we modify the moment-based approach to deal with cropping attack. First, we find the probability density function (pdf) of the pixel value distribution from the original image. Second, we reshape and normalize the pdf of the pixel value distribution (PPVD) to form a two-dimensional image. Then, the moment invariants are calculated from the PPVD image. Since the PPVD is insensitive to cropping, the proposed method is robust to cropping attack. In addition, it has high robustness against other common attacks. Experimental results demonstrate the effectiveness of the proposed method.
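As a rough sketch of the pipeline described (histogram of pixel values, reshaped and normalized into a small 2-D "PPVD image", then moment invariants), the code below computes the first two Hu-style invariants from such an image. The 16×16 reshape, the synthetic input image, and the choice of invariants are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moment invariants of a non-negative 2-D array."""
    img = img / img.sum()
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    xc, yc = (x * img).sum(), (y * img).sum()                 # centroid
    mu = lambda p, q: (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
    mu00 = mu(0, 0)
    eta = lambda p, q: mu(p, q) / mu00 ** (1 + (p + q) / 2)   # scale-normalized moments
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# Hypothetical grayscale image.
rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(512, 512))

# Step 1: pdf of the pixel value distribution (256-bin normalized histogram).
pdf, _ = np.histogram(image, bins=256, range=(0, 256), density=True)

# Step 2: reshape the pdf into a small 2-D "PPVD image" (16 x 16 here).
ppvd = pdf.reshape(16, 16)

# Step 3: moment invariants of the PPVD image carry the cropping-insensitive features.
print(hu_first_two(ppvd))
```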

Relevance: 100.00%

Abstract:

This work shows how options on the IDI (Índice de Taxa Média de Depósitos Interfinanceiros de Um Dia, the one-day interbank deposit rate index) can be used to extract the probability density function (PDF) of the next steps of the Monetary Policy Committee (COPOM). Since the COPOM decision is discrete in nature, the PDF can be estimated using Ordinary Least Squares (OLS). This technique makes it possible to impose restrictions on the estimated probabilities. The probabilities computed from IDI options are then compared with the probabilities obtained from DI futures and with probabilities derived from surveys.
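The idea of estimating a discrete probability distribution by least squares with restrictions can be sketched as a small constrained optimization: given a matrix of option payoffs under each possible COPOM move and observed option prices, solve for probabilities that are non-negative and sum to one. The payoff matrix, price vector, and decision grid below are invented for illustration and are not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: 3 IDI options, 4 possible COPOM decisions
# (e.g. -0.50, -0.25, 0.00, +0.25 percentage points).
payoffs = np.array([   # payoff of each option under each decision (rows: options)
    [1.00, 0.60, 0.20, 0.00],
    [0.40, 0.80, 0.50, 0.10],
    [0.00, 0.20, 0.70, 1.00],
])
prices = np.array([0.45, 0.52, 0.48])   # observed (discounted) option prices

def objective(p):
    """Least-squares error between model-implied prices and observed prices."""
    return np.sum((payoffs @ p - prices) ** 2)

constraints = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)  # probabilities sum to 1
bounds = [(0.0, 1.0)] * payoffs.shape[1]                          # each probability in [0, 1]

result = minimize(objective, x0=np.full(4, 0.25), bounds=bounds,
                  constraints=constraints, method="SLSQP")
print("estimated decision probabilities:", np.round(result.x, 3))
```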

Relevance: 100.00%

Abstract:

In this work, we propose a two-stage algorithm for real-time fault detection and identification in industrial plants. Our proposal is based on the analysis of selected features using recursive density estimation and a new evolving classifier algorithm. More specifically, the proposed approach for the detection stage is based on the concept of density in the data space, which is not the same as a probability density function but is a very useful measure for abnormality/outlier detection. This density can be expressed by a Cauchy function and can be calculated recursively, which makes it efficient in terms of memory and computational power and, therefore, suitable for on-line applications. The identification/diagnosis stage is based on a self-developing (evolving) fuzzy rule-based classifier system proposed in this work, called AutoClass. An important property of AutoClass is that it can start learning "from scratch": neither the fuzzy rules nor the number of classes need to be pre-specified (the number may grow, with new class labels being added by the on-line learning process), in a fully unsupervised manner. In the event that an initial rule base exists, AutoClass can evolve/develop it further based on newly arrived faulty-state data. In order to validate our proposal, we present experimental results from a level-control didactic process, where control and error signals are used as features for the fault detection and identification systems. The approach is generic, however, and the number of features can be large thanks to the computationally lean methodology, since covariance or more complex calculations, as well as storage of old data, are not required. The obtained results are significantly better than those of the traditional approaches used for comparison.
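A minimal sketch of a recursive density estimate of the kind described, assuming the commonly used update equations in which the density of a new sample is a Cauchy-type function of its distance to the running mean, with both the mean and the mean squared norm updated recursively (so no past samples need to be stored). The data stream, feature dimension, and abnormality threshold are hypothetical.

```python
import numpy as np

class RecursiveDensityEstimator:
    """Recursive (Cauchy-type) data-space density, updated sample by sample."""

    def __init__(self, dim):
        self.k = 0
        self.mean = np.zeros(dim)   # running mean of the samples
        self.mean_sq = 0.0          # running mean of squared norms

    def update(self, x):
        self.k += 1
        w = 1.0 / self.k
        self.mean = (1 - w) * self.mean + w * x
        self.mean_sq = (1 - w) * self.mean_sq + w * float(x @ x)
        # Cauchy-type density: close to 1 for typical samples, small for outliers.
        scatter = max(self.mean_sq - float(self.mean @ self.mean), 0.0)
        return 1.0 / (1.0 + float((x - self.mean) @ (x - self.mean)) + scatter)

# Hypothetical stream of 2-D features (e.g. control and error signals).
rng = np.random.default_rng(3)
rde = RecursiveDensityEstimator(dim=2)
for i in range(200):
    x = rng.normal(0, 1, 2) if i != 150 else np.array([8.0, -8.0])  # inject one "fault"
    d = rde.update(x)
    if d < 0.05:  # hypothetical abnormality threshold
        print(f"sample {i}: density {d:.3f} -> possible fault")
```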

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

We propose new circuits for the implementation of radial basis functions (RBFs) such as Gaussian and Gaussian-like functions. These RBFs are obtained by subtracting the output currents of two differential pairs in a folded-cascode configuration. We also propose a multidimensional version based on the unidimensional circuits. SPICE simulation results indicate good functionality. These circuits are intended to be applied in the implementation of radial basis function networks. One possible application of such networks is transducer signal conditioning in onboard telemetry systems of aircraft and spacecraft. Copyright 2008 ACM.
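Numerically, the principle behind such circuits can be sketched as follows: a differential pair has a sigmoid-like transfer characteristic (tanh-shaped for a bipolar pair), so subtracting the output currents of two pairs whose inputs are offset from each other yields a Gaussian-like bump that can serve as an RBF. The gains, offsets, and comparison below are arbitrary illustration values, not the circuit's parameters.

```python
import numpy as np

def diff_pair_bump(v, center, offset=0.5, gain=4.0):
    """Gaussian-like bump built from the difference of two tanh 'differential pairs'."""
    return np.tanh(gain * (v - center + offset)) - np.tanh(gain * (v - center - offset))

def gaussian_rbf(v, center, width=0.5):
    """Reference Gaussian RBF for comparison."""
    return np.exp(-((v - center) ** 2) / (2 * width ** 2))

v = np.linspace(-3, 3, 601)
bump = diff_pair_bump(v, center=0.0)
bump /= bump.max()                      # normalize the peak to 1 for comparison
gauss = gaussian_rbf(v, center=0.0)

print(f"max |difference-of-tanh bump - Gaussian| = {np.max(np.abs(bump - gauss)):.3f}")
```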

Relevance: 100.00%

Abstract:

The evolution of the velocity of the particles relative to the circular orbits of Earth satellites that they cross suggests a range of possible impact velocities as a function of the altitude of the satellite. A study based on those results shows that the maximum relative velocities occur at the semi-latus rectum, independent of the initial semi-major axis of the particle. When both the solar radiation pressure and the oblateness of the Earth are considered, the orbit precesses and the eccentricity of the particle varies as a function of its orbital region and its size. This is important information, because the damage caused to a spacecraft depends on the impact velocity.
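As a back-of-the-envelope illustration of the geometry involved (not the study's model, which also includes solar radiation pressure and Earth oblateness), the sketch below computes the speed of a particle on an eccentric orbit relative to a coplanar, prograde circular satellite orbit at the same radius, as a function of true anomaly. The orbital elements are hypothetical.

```python
import numpy as np

MU = 398600.4418          # Earth's gravitational parameter [km^3/s^2]
a, e = 26000.0, 0.65      # hypothetical particle semi-major axis [km] and eccentricity

p = a * (1 - e**2)        # semi-latus rectum
h = np.sqrt(MU * p)       # specific angular momentum

nu = np.radians(np.linspace(0.0, 180.0, 1801))   # true anomaly
r = p / (1 + e * np.cos(nu))                      # orbital radius of the particle
v_radial = (MU / h) * e * np.sin(nu)              # radial velocity component
v_transverse = h / r                              # transverse velocity component

# Coplanar, prograde circular satellite at the same radius r.
v_circular = np.sqrt(MU / r)

v_rel = np.sqrt(v_radial**2 + (v_transverse - v_circular)**2)
i_max = np.argmax(v_rel)
print(f"max relative speed {v_rel[i_max]:.3f} km/s at true anomaly "
      f"{np.degrees(nu[i_max]):.1f} deg (r = {r[i_max]:.0f} km, p = {p:.0f} km)")
```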

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

In this paper, we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles, and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set. © 2010 Elsevier B.V. All rights reserved.
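The specific distribution proposed in the paper is not reproduced here; purely to illustrate the quantities mentioned (pdf, reliability, failure rate) and a direct numerical maximum-likelihood fit, the sketch below uses a two-parameter Weibull model with increasing failure rate as a stand-in. All data are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Simulated lifetimes from a two-parameter model with increasing failure rate (shape > 1).
rng = np.random.default_rng(4)
t = weibull_min.rvs(c=2.2, scale=10.0, size=300, random_state=rng)

def neg_log_likelihood(theta):
    shape, scale = np.exp(theta)   # optimize on the log scale to keep parameters positive
    return -np.sum(weibull_min.logpdf(t, c=shape, scale=scale))

fit = minimize(neg_log_likelihood, x0=np.log([1.0, 5.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
print(f"MLE: shape = {shape_hat:.2f}, scale = {scale_hat:.2f}")

# Reliability R(t) = 1 - F(t) and failure (hazard) rate h(t) = f(t) / R(t).
grid = np.linspace(1.0, 25.0, 5)
R = weibull_min.sf(grid, c=shape_hat, scale=scale_hat)
hazard = weibull_min.pdf(grid, c=shape_hat, scale=scale_hat) / R
print("reliability:", np.round(R, 3))
print("failure rate (increasing since shape > 1):", np.round(hazard, 3))
```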

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

The traditional parameter identification methodology in structural modal analysis uses measured signals of the input force and of the response motion of the structure under controlled laboratory conditions. However, when the modal parameters of machine structures must be obtained during operation, the conditions for controlling and measuring the excitation make traditional modal analysis infeasible. In this case, the modal test is performed using only the system's response data. Operational Modal Analysis (OMA) is a modal extraction method in which no artificial excitation needs to be applied to the system; the operational excitation itself acts as the input, and only the system response is measured. The classical OMA technique NExT assumes that the operational excitation of the system is white noise. This technique considers that the correlation functions obtained from structures can be treated as impulse response functions, so that traditional time-domain modal identification methods can be employed. However, if the operational excitation contains prominent harmonic components, these may be mistaken for natural modes of the system. This work demonstrates that, through the probability density function of the narrow band containing the peak of a mode, it is possible to identify that mode as natural or operational (arising from the operational excitation of the structure). A modification of the Least Squares Complex Exponential (LSCE) modal identification method is also presented, which takes into account harmonic signals of known frequencies present in the operational excitation in a test using the NExT technique. To validate these methods, a theoretical model with analytically known modal parameters is used and, as an experimental case study, a system consisting of a simply supported beam carrying an electric motor with mass unbalance.
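A rough sketch of the narrow-band pdf idea described above, under common assumptions rather than the exact procedure of the work: the response is band-pass filtered around a spectral peak; a peak caused by a harmonic excitation component yields a bimodal (arcsine-like) amplitude pdf with strongly negative excess kurtosis, whereas the narrow-band random response associated with a natural mode is approximately Gaussian. Filter design, frequencies, and signal model are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import kurtosis

FS = 1000.0                      # hypothetical sampling rate [Hz]
t = np.arange(0, 20, 1 / FS)
rng = np.random.default_rng(5)

def narrowband_pdf_indicator(x, f_center, bw=4.0):
    """Band-pass around f_center and return the excess kurtosis of the filtered signal."""
    b, a = butter(4, [(f_center - bw) / (FS / 2), (f_center + bw) / (FS / 2)], btype="band")
    y = filtfilt(b, a, x)
    return kurtosis(y)           # ~0 for Gaussian (random response), ~ -1.5 for a pure sine

# Synthetic "response": a harmonic component at 30 Hz plus broadband random noise
# standing in for the random response of the structure's natural modes.
signal = np.sin(2 * np.pi * 30.0 * t) + 0.8 * rng.normal(size=t.size)

print(f"peak at 30 Hz : excess kurtosis = {narrowband_pdf_indicator(signal, 30.0):+.2f} "
      "(bimodal pdf -> likely harmonic)")
print(f"peak at 80 Hz : excess kurtosis = {narrowband_pdf_indicator(signal, 80.0):+.2f} "
      "(near-Gaussian pdf -> random/natural-mode response)")
```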

Relevance: 100.00%

Abstract:

One of the needs of precision agriculture is to assess the quality of soil attribute maps. In this regard, the present work aimed to evaluate the performance of two geostatistical methods, ordinary kriging and sequential Gaussian simulation, in the spatial prediction of the mean crystal diameter of goethite, using 121 points sampled on a 1 ha grid with a regular 10 m by 10 m spacing. After the textural analysis and the determination of the iron oxide concentrations, the values of the mean crystal diameter of goethite were calculated and analyzed by descriptive statistics and geostatistics; ordinary kriging and sequential Gaussian simulation were then applied. From the results, it was evaluated which method most faithfully reproduced the statistics, the conditional cumulative distribution function, and the epsilon statistic (εy) of the sample. The E-type estimates were similar to those of ordinary kriging, owing to the minimization of the variance. However, at specific locations kriging fails to represent the degree of crystallinity of the goethite, and the E-type map indicated that sequential Gaussian simulation should be used instead of kriging maps. The E-type maps should be preferred because they perform better in the modeling.
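To make the estimation step concrete, here is a minimal ordinary kriging sketch at a single unsampled location, with a simple exponential variogram and made-up coordinates and values; the study's variogram model, grid, and goethite data are not reproduced. Sequential Gaussian simulation builds on the same kriging system, adding random draws from the conditional distributions along a random path.

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, practical_range=30.0):
    """Exponential variogram model gamma(h)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / practical_range))

# Hypothetical sample locations (x, y in metres) and attribute values.
pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [20, 10]], dtype=float)
vals = np.array([18.2, 19.1, 17.6, 20.3, 21.0])
target = np.array([5.0, 5.0])            # unsampled location to estimate

n = len(pts)
# Ordinary kriging system: [Gamma 1; 1 0] [lambda; mu] = [gamma0; 1]
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
A = np.ones((n + 1, n + 1))
A[:n, :n] = exp_variogram(dists)
A[n, n] = 0.0
b = np.ones(n + 1)
b[:n] = exp_variogram(np.linalg.norm(pts - target, axis=1))

sol = np.linalg.solve(A, b)
weights, lagrange = sol[:n], sol[n]      # kriging weights (sum to 1) and multiplier
estimate = weights @ vals
variance = weights @ b[:n] + lagrange    # ordinary kriging variance
print(f"OK estimate at {target}: {estimate:.2f}, kriging variance: {variance:.3f}")
```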

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)