917 results for non-Gaussian process


Relevance:

90.00%

Abstract:

Transthyretin (TTR) is a tetrameric, beta-sheet-rich transporter protein directly involved in human amyloid diseases. Several classes of small molecules bind to TTR and delay its amyloid fibril formation, making them promising drug candidates for treating TTR amyloidoses. In the present study, we characterized the interactions of the synthetic triiodo-L-thyronine analogs and thyroid hormone nuclear receptor TRbeta-selective agonists GC-1 and GC-24 with wild-type and V30M variant human TTR. To this end, we conducted in vitro TTR acid-mediated aggregation and isothermal titration calorimetry experiments and determined the TTR:GC-1 and TTR:GC-24 crystal structures. Our data indicate that both GC-1 and GC-24 bind TTR in a non-cooperative manner and are good inhibitors of TTR aggregation, with dissociation constants for both hormone binding sites (HBS) in the low micromolar range. Analysis of the crystal structures of the TTRwt:GC-1(24) complexes and their comparison with the X-ray structure of TTRwt bound to its natural ligand thyroxine (T4) suggests, at the molecular level, the basis for the cooperative binding displayed by T4 and the non-cooperative binding of both GC-1 and GC-24.
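As textbook background for the cooperativity statement above (not taken from the paper itself): for a protein with two ligand-binding sites, non-cooperative binding pins the ratio of the stepwise macroscopic dissociation constants at a purely statistical value,

\[
K_{d1} = \frac{[T][L]}{[TL]}, \qquad K_{d2} = \frac{[TL][L]}{[TL_2]}, \qquad
\text{independent identical sites: } K_{d1} = \frac{k}{2},\; K_{d2} = 2k \;\Rightarrow\; \frac{K_{d2}}{K_{d1}} = 4,
\]

where \(k\) is the intrinsic single-site dissociation constant. Deviations of \(K_{d2}/K_{d1}\) from 4 signal cooperative interactions (of either sign) between the two hormone binding sites; ITC titrations of the kind described resolve \(K_{d1}\) and \(K_{d2}\) directly.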

Relevance:

90.00%

Abstract:

In this work we study a connection between a non-Gaussian statistics, the Kaniadakis statistics, and complex networks. We show that the degree distribution P(k) of a scale-free network can be calculated by maximizing the information entropy in the context of non-Gaussian statistics. As an example, a numerical analysis based on the preferential attachment growth model is discussed, and the numerical behavior of the Kaniadakis and Tsallis degree distributions is compared. We also analyze the diffusive epidemic process (DEP) on a one-dimensional regular lattice. The model is composed of A (healthy) and B (sick) species that diffuse independently on the lattice with diffusion rates DA and DB, subject to the probabilistic dynamical rules A + B → 2B and B → A. This model belongs to the category of non-equilibrium systems with an absorbing state and a phase transition between active and inactive states. We investigate the critical behavior of the DEP using an auto-adaptive algorithm to find critical points: the method of automatic searching for critical points (MASCP). We compare our results with the literature and find that the MASCP successfully finds the critical exponents 1/ν and 1/zν in all the cases DA = DB, DA < DB, and DA > DB. The simulations show that the DEP has the same critical exponents as expected from field-theoretical arguments. Moreover, we find that, contrary to a renormalization-group prediction, the system does not show a discontinuous phase transition in the regime DA > DB.
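To make the comparison of the two degree distributions concrete, here is a minimal Python sketch using the standard κ-exponential and q-exponential forms; the parameter values and the degree scale are illustrative choices of mine, not those of the study:

```python
import numpy as np
import matplotlib.pyplot as plt

def kaniadakis_exp(x, kappa):
    # kappa-exponential: exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k); -> exp(x) as kappa -> 0
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def tsallis_exp(x, q):
    # q-exponential: e_q(x) = [1 + (1 - q) x]_+^(1/(1-q)); -> exp(x) as q -> 1
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

k = np.arange(1, 1000)
p_kappa = kaniadakis_exp(-k / 50.0, kappa=0.7)  # illustrative kappa and scale
p_q = tsallis_exp(-k / 50.0, q=1.7)             # illustrative q and scale
p_kappa /= p_kappa.sum()
p_q /= p_q.sum()

# Both families develop power-law tails, the hallmark of scale-free networks.
plt.loglog(k, p_kappa, label="Kaniadakis, kappa = 0.7")
plt.loglog(k, p_q, label="Tsallis, q = 1.7")
plt.xlabel("degree k"); plt.ylabel("P(k)"); plt.legend()
plt.show()
```

Both curves fall as power laws at large k (with exponents set by 1/κ and 1/(q−1), respectively), which is why either family can be fitted to preferential-attachment degree data.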

Relevance:

90.00%

Abstract:

Recent astronomical observations indicate that the universe has null spatial curvature, is accelerating, and that its matter-energy content is composed of roughly 30% matter (baryons + dark matter) and 70% dark energy, a relativistic component with negative pressure. However, in order to build more realistic models it is necessary to consider the evolution of small density perturbations to explain the richness of observed structures on the scale of galaxies and clusters of galaxies. The structure formation process was first described by Press and Schechter (PS) in 1974, by means of the galaxy cluster mass function. The PS formalism assumes a Gaussian distribution for the primordial density perturbation field. Besides a serious normalization problem, such an approach does not explain the recent cluster X-ray data, and it is also in disagreement with the most up-to-date computational simulations. In this thesis, we discuss several applications of the nonextensive q-statistics (non-Gaussian), proposed in 1988 by C. Tsallis, with special emphasis on the cosmological process of large-scale structure formation. Initially, we investigate the statistics of the primordial fluctuation field of the density contrast, since the most recent data from the Wilkinson Microwave Anisotropy Probe (WMAP) indicate a deviation from Gaussianity. We assume that such deviations may be described by the nonextensive statistics, because it reduces to the Gaussian distribution in the limit of the free parameter q = 1, thereby allowing a direct comparison with the standard theory. We study its application to a galaxy cluster catalog based on the ROSAT All-Sky Survey (HIFLUGCS). We conclude that the standard Gaussian model applied to HIFLUGCS does not agree with the most recent data independently obtained by WMAP; using the nonextensive statistics, we obtain values much more closely aligned with the WMAP results. We also demonstrate that the Burr distribution corrects the normalization problem. The cluster mass function formalism was also investigated in the presence of dark energy; in this case, constraints on several cosmic parameters were obtained. The nonextensive statistics was further applied to two distinct problems: (i) the plasma probe and (ii) the description of Bremsstrahlung radiation (the primary radiation from X-ray clusters), a problem of considerable interest in astrophysics. In another line of development, using supernova data and the gas mass fraction from galaxy clusters, we discuss a redshift variation of the equation-of-state parameter, considering two distinct expansions. An interesting aspect of this work is that the results do not need a prior on the mass parameter, as usually occurs in analyses involving only supernova data. Finally, we obtain a new estimate of the Hubble parameter through a joint analysis involving the Sunyaev-Zeldovich effect (SZE), X-ray data from galaxy clusters and baryon acoustic oscillations. We show that the degeneracy of the observational data with respect to the mass parameter is broken when the signature of the baryon acoustic oscillations, as given by the Sloan Digital Sky Survey (SDSS) catalog, is considered. Our analysis, based on SZE/X-ray data for a sample of 25 galaxy clusters with triaxial morphology, yields a Hubble parameter in good agreement with independent studies from the Hubble Space Telescope project and the recent WMAP estimates.
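For reference, one standard form of the nonextensive (Tsallis) distribution for the density contrast δ, written so that the Gaussian limit at q = 1 is explicit (the thesis may use a different normalization convention):

\[
P_q(\delta) \;\propto\; \left[\,1 - (1-q)\,\frac{\delta^2}{2\sigma^2}\,\right]^{\frac{1}{1-q}}
\;\xrightarrow[\;q \to 1\;]{}\; \exp\!\left(-\frac{\delta^2}{2\sigma^2}\right),
\]

so the single parameter q interpolates continuously between the standard Gaussian PS ingredient and heavier-tailed perturbation statistics.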

Relevance:

90.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

90.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

90.00%

Abstract:

The effect of event background fluctuations on charged particle jet reconstruction in Pb-Pb collisions at √s_NN = 2.76 TeV has been measured with the ALICE experiment. The main sources of non-statistical fluctuations are characterized based purely on experimental data with an unbiased method, as well as by using single high-p_T particles and simulated jets embedded into real Pb-Pb events and reconstructed with the anti-k_T jet finder. The influence of a low transverse momentum cut-off on particles used in the jet reconstruction is quantified by varying the minimum track p_T between 0.15 GeV/c and 2 GeV/c. For embedded jets reconstructed from charged particles with p_T > 0.15 GeV/c, the uncertainty in the reconstructed jet transverse momentum due to the heavy-ion background is measured to be 11.3 GeV/c (standard deviation) for the 10% most central Pb-Pb collisions, slightly larger than the value of 11.0 GeV/c measured using the unbiased method. For a higher particle transverse momentum threshold of 2 GeV/c, which generates a stronger bias towards hard fragmentation in the jet finding process, the standard deviation of the fluctuations in the reconstructed jet transverse momentum is reduced to 4.8-5.0 GeV/c for the 10% most central events. A non-Gaussian tail of the momentum uncertainty is observed and its impact on the reconstructed jet spectrum is evaluated for varying particle momentum thresholds, by folding the measured fluctuations with steeply falling spectra.
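A minimal sketch of the folding procedure mentioned in the last sentence, with a toy power-law spectrum and a Gaussian-plus-tail fluctuation model; only the 11.3 GeV/c width is taken from the measurement above, while the spectral shape and tail parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "true" charged-jet spectrum: steeply falling power law dN/dpt ~ pt^-5
# above 5 GeV/c, sampled by inverse-CDF (illustrative only).
n_jets = 1_000_000
pt_true = 5.0 * (1.0 - rng.random(n_jets)) ** (-1.0 / 4.0)

# Background fluctuation delta-pt: Gaussian core of width 11.3 GeV/c plus a
# small right-side exponential tail, mimicking the non-Gaussian shape.
core = rng.normal(0.0, 11.3, n_jets)
tail = rng.exponential(8.0, n_jets) * (rng.random(n_jets) < 0.05)
pt_rec = pt_true + core + tail

# "Folded" spectrum vs. true spectrum: upward feeding from the steep fall-off
# and the tail dominates the distortion at high pt.
bins = np.linspace(0.0, 150.0, 31)
true_spec, _ = np.histogram(pt_true, bins=bins)
rec_spec, _ = np.histogram(pt_rec, bins=bins)
```

Because the spectrum falls so steeply, even a rare right-side tail in the fluctuations feeds low-p_T jets upward and can dominate the reconstructed yield at high p_T, which is why the tail matters more than the Gaussian width alone.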

Relevance:

90.00%

Abstract:

Insect cuticular hydrocarbons, including relatively non-volatile chemicals, play important roles in cuticle protection and chemical communication. The conventional procedures for extracting cuticular compounds from insects either require toxic solvents or rely on non-destructive techniques, such as SPME fibers, that do not allow storage of samples for subsequent analyses. In this study, we describe and test a non-lethal process for extracting cuticular hydrocarbons with styrene-divinylbenzene copolymers, and illustrate the method with two species of bees and one species of beetle. The results demonstrate that these compounds can be efficiently trapped by Chromosorb® (SUPELCO) and that this method can be used as an alternative to existing methods.

Relevance:

90.00%

Abstract:

The aim of this Thesis is to investigate the possibility that the observations related to the epoch of reionization can probe not only the evolution of the IGM state, but also the cosmological background in which this process occurs. In fact, the history of the IGM ionization is affected by the evolution of the sources of ionizing photons which, under the assumption of a structure formation paradigm determined by the hierarchical growth of the matter fluctuations, turns out to be strongly dependent on the characteristics of the background universe. For the purpose of our investigation, we have analysed the reionization history in innovative cosmological frameworks, still in agreement with the recent observational tests related to the SNIa and CMB probes, comparing our results with the reionization scenario predicted by the commonly used LCDM cosmology. In particular, in this Thesis we have considered two different alternative universes. The first one is a flat universe dominated at late epochs by a dynamic dark energy component, characterized by an equation of state evolving in time. The second cosmological framework we have assumed is a LCDM characterized by a primordial overdensity field having a non-Gaussian probability distribution. The reionization scenario has been investigated, in this Thesis, through semi-analytic approaches based on the hierarchical growth of the matter fluctuations and on suitable assumptions concerning the ionization and the recombination of the IGM. We make predictions for the evolution and the distribution of the HII regions, and for the global features of reionization, which can be constrained by future observations. Finally, we briefly discuss the possible future prospects of this Thesis work.
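A standard ingredient of semi-analytic reionization modeling of the kind described (quoted here as textbook background, not from the thesis itself) is the evolution equation for the volume filling factor Q_HII of ionized hydrogen regions:

\[
\frac{dQ_{\rm HII}}{dt} \;=\; \frac{\dot n_{\rm ion}}{\langle n_{\rm H}\rangle} \;-\; \frac{Q_{\rm HII}}{\bar t_{\rm rec}},
\]

where \(\dot n_{\rm ion}\) is the comoving emission rate of ionizing photons (set by the abundance of collapsed sources, hence by the growth of matter fluctuations in the assumed cosmology), \(\langle n_{\rm H}\rangle\) the mean comoving hydrogen density, and \(\bar t_{\rm rec}\) an effective recombination time. The cosmological dependence enters chiefly through \(\dot n_{\rm ion}\), which is why alternative backgrounds shift the predicted reionization history.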

Relevance:

90.00%

Abstract:

The formation of a market price for an asset can be understood as a superposition of the individual actions of the market participants, which cumulatively generate supply and demand. This is comparable to the emergence of macroscopic properties in statistical physics, which are produced by microscopic interactions between the system components involved. The distribution of price changes in financial markets differs markedly from a Gaussian distribution. This leads to empirical peculiarities of the price process, which include, besides the scaling behavior, non-trivial correlation functions and temporally clustered volatility. The present work focuses on the analysis of financial market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations of a time series is developed. With this methodology, significant evidence is found that typical behavioral patterns of financial market participants manifest themselves on short time scales, i.e. that the reaction to a given price history is not purely random; rather, similar price histories evoke similar reactions. Starting from the investigation of complex correlations in financial market time series, the question is addressed of which properties change at the transition from a positive to a negative trend. An empirical quantification by means of rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and specifically in physical systems. Over nine orders of magnitude in time, these properties are also independent of the analyzed market: trends that persist only for seconds show the same characteristics as trends on time scales of months. This opens the possibility of learning more about financial market bubbles and their collapses, since trends on small time scales occur much more frequently. In addition, a Monte Carlo based simulation of the financial market is analyzed and extended in order to reproduce the empirical properties and to gain insight into their causes, which are to be sought partly in the financial market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive procedures, a substantial reduction of computing time is achieved by parallelization on a graphics-card architecture. To demonstrate the wide range of application areas of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant runtime advantages. Partial results of this work are published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
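To make the idea of pattern-conditioned correlations concrete, here is a minimal Python sketch on synthetic data, using a simplification of my own (sign-encoded return windows); the thesis's actual estimator may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
prices = 1000.0 + np.cumsum(rng.normal(0.0, 1.0, 10_000))  # placeholder price series
returns = np.diff(prices)

# Encode each window of w consecutive return signs as a pattern, then
# collect the return that immediately follows each occurrence of a pattern.
w = 4
signs = (returns > 0).astype(int)
followers = {}
for t in range(w, len(returns)):
    pattern = tuple(signs[t - w:t])
    followers.setdefault(pattern, []).append(returns[t])

# Under a purely random reaction, every conditional mean should be ~0;
# systematic deviations indicate that similar histories evoke similar reactions.
for pattern, vals in sorted(followers.items()):
    print(pattern, f"mean next return = {np.mean(vals):+.4f}  (n = {len(vals)})")
```

On the random-walk placeholder above the conditional means fluctuate around zero; applied to real tick data, persistent non-zero values for specific patterns would be the signature of the short-time-scale behavioral regularities described.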

Relevance:

90.00%

Abstract:

Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), arising when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region, in Italy, is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Rain gauge direct measurements at sparsely distributed locations and hourly cumulated radar grids are provided by the ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar in the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for rainfall positive amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to R software. Communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Rank Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). Calibration is reached and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
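In schematic form (notation mine, not necessarily the thesis's), the two-part semicontinuous model combines a probit occurrence stage with a Gamma intensity stage:

\[
\Pr\{Y(s) > 0\} \;=\; \Phi\big(\alpha_0 + \alpha_1 \log R(s) + w_1(s)\big),
\]
\[
Y(s) \,\big|\, Y(s) > 0 \;\sim\; \mathrm{Gamma}\big(\nu,\ \nu/\mu(s)\big), \qquad
\log \mu(s) \;=\; \beta_0 + \beta_1 \log R(s) + w_2(s),
\]

where \(Y(s)\) is hourly rainfall at gauge location \(s\), \(R(s)\) is the radar value associated with that location (in the stochastic-weighting specification, a latent-process-driven average over nearby radar pixels), and \(w_1, w_2\) are spatially correlated Gaussian effects capturing residual structure in the two stages.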

Relevance:

90.00%

Abstract:

In this paper, the NPMLE in the one-dimensional line segment problem is defined and studied, where line segments on the real line through two non-overlapping intervals are observed. The self-consistency equations for the NPMLE are defined and a quick algorithm for solving them is provided. Sup-norm weak convergence to a Gaussian process and efficiency of the NPMLE are proved. The problem has a strong geological application in the study of the lifespan of species.

Relevance:

90.00%

Abstract:

Software metrics offer us the promise of distilling useful information from vast amounts of software in order to track development progress, to gain insights into the nature of the software, and to identify potential problems. Unfortunately, however, many software metrics exhibit highly skewed, non-Gaussian distributions. As a consequence, usual ways of interpreting these metrics --- for example, in terms of "average" values --- can be highly misleading. Many metrics, it turns out, are distributed like wealth --- with high concentrations of values in selected locations. We propose to analyze software metrics using the Gini coefficient, a higher-order statistic widely used in economics to study the distribution of wealth. Our approach allows us not only to observe changes in software systems efficiently, but also to assess project risks and monitor the development process itself. We apply the Gini coefficient to numerous metrics over a range of software projects, and we show that many metrics not only display remarkably high Gini values, but that these values are remarkably consistent as a project evolves over time.
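A minimal sketch of the proposed analysis, assuming the standard rank-based formula for the Gini coefficient; the metric values used here are hypothetical:

```python
import numpy as np

def gini(values):
    """Gini coefficient of a 1-D array of non-negative metric values.

    0 means all values are equal; values near 1 mean the total is
    concentrated in a few entities (classes, methods, files, ...).
    """
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0.0:
        return 0.0
    # Rank-based formula: G = 2 * sum_i(i * x_i) / (n * sum_i x_i) - (n + 1) / n
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

# Hypothetical metric: lines of code per class in one project snapshot.
loc_per_class = [12, 8, 30, 250, 9, 14, 1200, 40, 22, 17]
print(f"Gini = {gini(loc_per_class):.2f}")  # high value -> "wealth" concentrated in few classes
```

Tracking this single number per metric per release is what makes the approach cheap: a jump in the Gini value flags a shift in concentration (e.g. a growing god class) without inspecting the full distribution.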

Relevance:

90.00%

Abstract:

In hostile environments such as scientific facilities where ionising radiation is a dominant hazard, reducing human interventions by increasing robotic operations is desirable. CERN, the European Organization for Nuclear Research, has around 50 km of underground scientific facilities, where wireless mobile robots could help in the operation of the accelerator complex, e.g. in conducting remote inspections and radiation surveys in different areas. The main challenges to be considered here are not only that the robots should be able to go over long distances and operate for relatively long periods, but also the underground tunnel environment, the possible presence of electromagnetic fields, radiation effects, and the fact that the robots shall in no way interrupt the operation of the accelerators. Having a reliable and robust wireless communication system is essential for successful execution of such robotic missions and to avoid situations of manual recovery of the robots in the event that a robot runs out of energy or loses its communication link. The goal of this thesis is to provide means to reduce the risk of mission failure and maximise the mission capabilities of wireless mobile robots with finite energy storage capacity working in a radiation environment with non-line-of-sight (NLOS) communications, by employing enhanced wireless communication methods. Towards this goal, the following research objectives are addressed in this thesis: predict the communication range before and during robotic missions; and optimise and enhance the wireless communication qualities of mobile robots by using robot mobility and employing a multi-robot network. This thesis provides introductory information on the infrastructures where mobile robots will need to operate, the tasks to be carried out by mobile robots and the problems encountered in these environments.
The reporting of research work carried out to improve wireless communication comprises an introduction to the relevant radio signal propagation theory and technology, followed by an explanation of the research in the following stages: an analysis of the wireless communication requirements of mobile robots for different tasks in a selection of CERN facilities; predictions of energy and communication autonomies (in terms of distance and time) to reduce the risk of energy- and communication-related failures during missions; autonomous navigation of a mobile robot to find zone(s) of maximum radio signal strength to improve the communication coverage area; and autonomous navigation of one or more mobile robots acting as mobile wireless relay (repeater) points in order to provide a tethered wireless connection to a teleoperated mobile robot carrying out inspection or radiation monitoring activities in a challenging radio environment. The specific contributions of this thesis are outlined below. The first set of contributions comprises novel methods for predicting the energy autonomy and communication range(s) before and after deployment of the mobile robots in the intended environments. This is important in order to provide situational awareness and avoid mission failures. The energy consumption is predicted by using power consumption models of the different components in a mobile robot. This energy prediction model paves the way for choosing energy-efficient wireless communication strategies. The communication range prediction is performed using radio signal propagation models and applies radio signal strength (RSS) filtering and estimation techniques with the help of Kalman filters and Gaussian process models. The second set of contributions comprises methods to optimise the wireless communication qualities by using novel spatial sampling based techniques that are robust to sensing and radio field noises and provide redundancy features. Central finite difference (CFD) methods are employed to determine the 2-D RSS gradients, and robot mobility is used to optimise the communication quality and the network throughput. This method is also validated with a case study application involving haptic teleoperation of wireless mobile robots, where an operator at a remote location can smoothly navigate a mobile robot in an environment with weak wireless signals. The third contribution is a robust stochastic position optimisation algorithm for multiple autonomous relay robots which are used for wireless tethering of radio signals and thereby enhance the wireless communication qualities. All the proposed methods and algorithms are verified and validated using simulations and field experiments with a variety of mobile robots available at CERN. In summary, this thesis offers novel methods and demonstrates their use to predict energy autonomy and wireless communication range, optimise robot position to improve communication quality, and enhance the communication range and wireless network qualities of mobile robots for use in applications in hostile environments such as scientific facilities emitting ionising radiation. In simpler terms, a set of tools is developed in this thesis for improving, easing and making safer robotic missions in hostile environments.
This thesis validates, both in theory and in experiments, that mobile robots can improve wireless communication quality by exploiting robot mobility to dynamically optimise their positions and maintain connectivity even when the (radio signal) environment possesses non-line-of-sight characteristics. The methods developed in this thesis are well suited for easy integration in mobile robots and can be applied directly at the application layer of the wireless network. The results of the proposed methods have outperformed other comparable state-of-the-art methods.
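A minimal sketch of the central-finite-difference RSS gradient idea used for position optimisation; in practice the RSS samples would first be filtered (e.g. with a Kalman filter) before differencing, and all names, the path-loss field and the parameters here are illustrative assumptions rather than the thesis's implementation:

```python
import numpy as np

def rss_gradient_cfd(rss_sample, pos, h=0.5):
    """Estimate the 2-D gradient of radio signal strength (RSS) at `pos`
    using central finite differences over four probe points.

    rss_sample(x, y) -> RSS in dBm (in a real robot: drive, measure, filter).
    h is the spatial step in metres (hypothetical choice).
    """
    x, y = pos
    dx = (rss_sample(x + h, y) - rss_sample(x - h, y)) / (2.0 * h)
    dy = (rss_sample(x, y + h) - rss_sample(x, y - h)) / (2.0 * h)
    return np.array([dx, dy])

def seek_better_link(rss_sample, pos, step=1.0, iters=20, tol=1e-3):
    """Take up to `iters` fixed-size uphill steps in RSS, stopping early
    if the estimated gradient (nearly) vanishes."""
    pos = np.asarray(pos, dtype=float)
    for _ in range(iters):
        g = rss_gradient_cfd(rss_sample, pos)
        if np.linalg.norm(g) < tol:
            break
        pos = pos + step * g / np.linalg.norm(g)
    return pos

# Toy RSS field: log-distance path loss around an access point at the origin.
ap = np.array([0.0, 0.0])
rss = lambda x, y: -40.0 - 20.0 * np.log10(max(np.hypot(x - ap[0], y - ap[1]), 0.1))
print(seek_better_link(rss, pos=(30.0, 25.0)))  # moves towards stronger signal
```

The four-point probing is what gives the CFD approach its noise robustness and redundancy: each axis uses two symmetric samples, so isolated RSS outliers perturb the gradient estimate less than one-sided differences would.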

Relevance:

90.00%

Abstract:

Spatial characterization of non-Gaussian attributes in earth sciences and engineering commonly requires the estimation of their conditional distribution. The indicator and probability kriging approaches of current nonparametric geostatistics provide approximations for estimating conditional distributions. They do not, however, provide results similar to those in the cumbersome implementation of simultaneous cokriging of indicators. This paper presents a new formulation termed successive cokriging of indicators that avoids the classic simultaneous solution and related computational problems, while obtaining equivalent results to the impractical simultaneous solution of cokriging of indicators. A successive minimization of the estimation variance of probability estimates is performed, as additional data are successively included into the estimation process. In addition, the approach leads to an efficient nonparametric simulation algorithm for non-Gaussian random functions based on residual probabilities.
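As background (standard indicator-kriging notation, not this paper's successive scheme itself), the conditional distribution is estimated from indicator transforms of the data:

\[
i(u_\alpha; z_c) \;=\;
\begin{cases}
1, & Z(u_\alpha) \le z_c,\\
0, & \text{otherwise,}
\end{cases}
\qquad
\hat F(u; z_c \mid n) \;=\; \sum_{\alpha=1}^{n} \lambda_\alpha(z_c)\, i(u_\alpha; z_c),
\]

where the weights \(\lambda_\alpha(z_c)\) solve a (co)kriging system for each cutoff \(z_c\). The successive formulation described above updates these probability estimates, and minimises the estimation variance, as each datum is included in turn, rather than solving the full simultaneous cokriging system at once.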

Relevance:

90.00%

Abstract:

The Bayesian analysis of neural networks is difficult because the prior over functions has a complex form, leading to implementations that either make approximations or use Monte Carlo integration techniques. In this paper I investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis to be carried out exactly using matrix operations. The method has been tested on two challenging problems and has produced excellent results.
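As a concrete illustration of exact Bayesian prediction via matrix operations (a generic GP regression sketch under a squared-exponential kernel, not code from the paper):

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    sq = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_predict(x_train, y_train, x_test, noise=0.1):
    """Exact GP predictive mean and variance using only matrix operations."""
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    # Solve via the Cholesky factor of K rather than forming K^-1 explicitly.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))  # K^-1 y
    mean = Ks.T @ alpha                                        # Ks^T K^-1 y
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)                  # pointwise variance
    return mean, var

x = np.linspace(-3, 3, 20)
y = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=x.size)
mu, var = gp_predict(x, y, np.linspace(-3, 3, 100))
```

The whole posterior predictive reduces to one Cholesky factorization and a few triangular solves, which is exactly the "exact analysis via matrix operations" contrasted above with approximation- or Monte-Carlo-based treatments of neural network priors.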