51 results for Probability distribution functions


Relevance: 30.00%

Abstract:

Most of the common techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we introduce three novel techniques for tackling such problems, and investigate their performance using synthetic data. We then apply these techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.
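
As a concrete illustration of density estimation on a periodic domain, the following Python sketch computes a von Mises kernel density estimate for angular data such as wind directions. This is a generic circular-statistics technique shown for orientation only, not one of the three methods introduced in the paper; the concentration parameter kappa and the toy data are illustrative assumptions.

import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind, order zero

def vonmises_kde(theta_samples, kappa, grid_points=360):
    """Kernel density estimate on the circle using a von Mises kernel.

    theta_samples : angles in radians (e.g. wind directions)
    kappa         : concentration parameter, acting as an inverse bandwidth
    """
    grid = np.linspace(-np.pi, np.pi, grid_points, endpoint=False)
    diffs = grid[None, :] - np.asarray(theta_samples)[:, None]
    kernels = np.exp(kappa * np.cos(diffs)) / (2 * np.pi * i0(kappa))
    return grid, kernels.mean(axis=0)        # average of per-sample von Mises kernels

# toy bimodal wind-direction data
rng = np.random.default_rng(0)
samples = np.concatenate([rng.vonmises(0.5, 8, size=200),
                          rng.vonmises(-2.0, 5, size=100)])
grid, density = vonmises_kde(samples, kappa=10)
print(density.sum() * (2 * np.pi / len(grid)))   # close to 1: the estimate integrates to one over the circle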

Relevance: 30.00%

Abstract:

This thesis is about the study of relationships between experimental dynamical systems. The basic approach is to fit radial basis function maps between time delay embeddings of manifolds. We have shown that under certain conditions these maps are generically diffeomorphisms, and can be analysed to determine whether or not the manifolds in question are diffeomorphically related to each other. If not, a study of the distribution of errors may provide information about the lack of equivalence between the two. The method has applications wherever two or more sensors are used to measure a single system, or where a single sensor can respond on more than one time scale: their respective time series can be tested to determine whether or not they are coupled, and to what degree. One application which we have explored is the determination of a minimum embedding dimension for dynamical system reconstruction. In this special case the diffeomorphism in question is closely related to the predictor for the time series itself.

Linear transformations of delay-embedded manifolds can also be shown to have nonlinear inverses under the right conditions, and we have used radial basis functions to approximate these inverse maps in a variety of contexts. This method is particularly useful when the linear transformation corresponds to the delay embedding of a finite impulse response filtered time series. One application of fitting an inverse to this linear map is the detection of periodic orbits in chaotic attractors, using suitably tuned filters. This method has also been used to separate signals with known bandwidths from deterministic noise, by tuning a filter to stop the signal and then recovering the chaos with the nonlinear inverse. The method may have applications to the cancellation of noise generated by mechanical or electrical systems.

In the course of this research a sophisticated piece of software has been developed. The program allows the construction of a hierarchy of delay embeddings from scalar and multi-valued time series. The embedded objects can be analysed graphically, and radial basis function maps can be fitted between them asynchronously, in parallel, on a multi-processor machine. In addition to a graphical user interface, the program can be driven by a batch-mode command language, incorporating the concept of parallel and sequential instruction groups and enabling complex sequences of experiments to be performed in parallel in a resource-efficient manner.
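
The core construction described above (delay-embedding two time series from the same system and fitting a radial basis function map between the embedded manifolds) can be sketched with standard tools. The Python snippet below is a minimal illustration using scipy's RBFInterpolator, not the thesis software; the embedding dimension, lag, kernel, and toy signals are assumptions made for the example. In the approach described above, it is the distribution of the residuals of such a map that carries the information about whether the two embeddings are related.

import numpy as np
from scipy.interpolate import RBFInterpolator

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series x into R^dim with lag tau."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# two "sensor" readings of the same underlying dynamics (toy signals)
t = np.linspace(0, 60, 3000)
s1 = np.sin(t) + 0.3 * np.sin(3.1 * t)
s2 = np.cos(t) ** 3

X = delay_embed(s1, dim=3, tau=10)
Y = delay_embed(s2, dim=3, tau=10)

# fit a radial basis function map from one embedded manifold to the other
rbf = RBFInterpolator(X[::5], Y[::5], kernel="thin_plate_spline", smoothing=1e-3)
residuals = np.linalg.norm(rbf(X) - Y, axis=1)
print("mean residual:", residuals.mean())   # small residuals are consistent with the manifolds being related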

Relevance: 30.00%

Abstract:

Based on a corpus of English, German, and Polish spoken academic discourse, this article analyzes the distribution and function of humor in academic research presentations. The corpus is the result of a European research cooperation project and comprises 300,000 tokens of spoken academic language, focusing on the genres of research presentation, student presentation, and oral examination. The article investigates differences between the German and English research cultures as expressed in the genre of specialist research presentations, and the role of humor as a pragmatic device in their respective contexts. The data are analyzed according to the paradigms of corpus-assisted discourse studies (CADS). The findings show that humor is used in research presentations as an expression of discourse reflexivity. They also reveal a considerable difference in the quantitative distribution of humor in research presentations depending on the educational, linguistic, and cultural background of the presenters, thus confirming the notion of different research cultures. Such research cultures nurture distinct attitudes to the genres of academic language: whereas researchers in one of the cultures identified conform to the constraints and structures of the genre, those working in another attempt to subvert them, for example through the application of humor. © 2012 Elsevier B.V.

Relevance: 30.00%

Abstract:

The factors determining the size of individual β-amyloid (Aβ) deposits and their size frequency distribution in tissue from Alzheimer's disease (AD) patients have not been established. In 23/25 cortical tissues from 10 AD patients, the frequency of Aβ deposits declined exponentially with increasing size. In a random sample of 400 Aβ deposits, 88% were closely associated with one or more neuronal cell bodies. The frequency distribution of Aβ deposits associated with 0, 1, 2, ..., n neuronal cell bodies deviated significantly from a Poisson distribution, suggesting a degree of clustering of the neuronal cell bodies. In addition, the frequency of Aβ deposits declined exponentially as the number of associated neuronal cell bodies increased. Aβ deposit area was positively correlated with the frequency of associated neuronal cell bodies, the degree of correlation being greater for pyramidal cells than for smaller neurons. These data suggest that: (1) the number of closely adjacent neuronal cell bodies that simultaneously secrete Aβ is an important factor determining the size of an Aβ deposit, and (2) the exponential decline in the frequency of larger Aβ deposits reflects the low probability that larger numbers of adjacent neurons will secrete Aβ simultaneously to form a deposit. © 1995.
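
The comparison with a Poisson distribution mentioned above can be illustrated with a chi-square goodness-of-fit test. The Python sketch below uses invented counts (not the study's data); it simply asks whether the observed frequencies of deposits associated with 0, 1, 2, ... neuronal cell bodies are compatible with a Poisson model, a small p-value indicating departure from randomness, i.e. clustering.

import numpy as np
from scipy import stats

# illustrative counts of deposits associated with 0, 1, 2, ... neuronal cell bodies (not the study's data)
observed = np.array([140, 160, 60, 25, 10, 5])
k = np.arange(len(observed))
n_total = observed.sum()
mean_k = (k * observed).sum() / n_total                    # Poisson rate estimated from the sample

# expected Poisson counts, with the upper tail folded into the last category
expected = stats.poisson.pmf(k, mean_k) * n_total
expected[-1] += stats.poisson.sf(k[-1], mean_k) * n_total

# one extra degree of freedom removed because the Poisson mean was estimated from the data
chi2, p = stats.chisquare(observed, expected, ddof=1)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")             # small p: departure from Poisson, i.e. clustering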

Relevance: 30.00%

Abstract:

The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contain some uncertainty, often due to incomplete knowledge; meaningful processing of these data requires the uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular, we adopt a Bayesian perspective and focus on the case where the inputs and outputs are naturally treated as random variables.

This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions.

The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics, such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
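
The soft-typed, dictionary-referenced design described above can be sketched as follows. The Python snippet builds a generic distribution element whose meaning is given by a URI pointing at a dictionary definition, in the spirit of UncertML; the element names, namespace, and dictionary URLs are placeholders for illustration, not the normative UncertML or GML schema.

import xml.etree.ElementTree as ET

NS = "http://example.org/uncertml-sketch"   # placeholder namespace, not the real schema location

def gaussian_marginal(mean, variance):
    """Encode a marginal Gaussian as a generic, soft-typed distribution element.

    The element carries no hard-coded 'Gaussian' type; its semantics come from
    the dictionary URI in the 'definition' attribute (URLs here are hypothetical).
    """
    dist = ET.Element(f"{{{NS}}}Distribution",
                      definition="http://dictionary.example.org/distributions/gaussian")
    for name, value in (("mean", mean), ("variance", variance)):
        param = ET.SubElement(dist, f"{{{NS}}}parameter",
                              definition=f"http://dictionary.example.org/parameters/{name}")
        param.text = str(value)
    return dist

# e.g. the uncertainty attached to one interpolated value
print(ET.tostring(gaussian_marginal(mean=12.4, variance=2.25), encoding="unicode"))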

Relevance: 30.00%

Abstract:

Many populations consist of two classes only, e.g., alive or dead, present or absent, clean or dirty, infected or non-infected, and it is the proportion or percentage of observations that fall into one of these classes that is of interest to an investigator. An observation that falls into one of the two classes is considered a ‘success’ (S), and ‘p’ is defined as the proportion of observations falling into that class. If a random sample of size ‘n’ is obtained from a population, the probability of obtaining 0, 1, 2, 3, etc., successes is then given by the binomial distribution. The binomial distribution can be used as the basis of a number of statistical tests but is most useful when comparing two proportions. This statnote describes two such scenarios in which the binomial distribution is used to compare: (1) two proportions when the samples are independent and (2) two proportions when the samples are paired.
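
A minimal Python sketch of the two scenarios, with invented counts (the statnote's own worked examples are not reproduced here): an exact test for two independent proportions, and an exact McNemar-style test for paired proportions based on the discordant pairs.

from scipy import stats

# (1) independent samples: successes out of totals in two separate groups (illustrative counts)
success = [18, 30]
total = [50, 60]
table = [[success[0], total[0] - success[0]],
         [success[1], total[1] - success[1]]]
odds_ratio, p_independent = stats.fisher_exact(table)
print(f"independent proportions: p = {p_independent:.4f}")

# (2) paired samples: the same units assessed under two conditions; only discordant pairs are informative.
# Under the null hypothesis each discordant pair favours either condition with probability 0.5,
# so the comparison reduces to a binomial test on the discordant counts (exact McNemar test).
b = 12   # pairs that were a success under condition A only (illustrative)
c = 5    # pairs that were a success under condition B only (illustrative)
result = stats.binomtest(b, n=b + c, p=0.5)
print(f"paired proportions:      p = {result.pvalue:.4f}")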