56 results for Supervector kernel
Abstract:
Recently a new recipe for developing and deploying real-time systems has been increasingly adopted at the JET tokamak. Powered by the advent of x86 multi-core technology and the reliability of JET's well-established Real-Time Data Network (RTDN) for handling all real-time I/O, an official vanilla Linux kernel has been shown to provide real-time performance to user-space applications that must meet stringent timing constraints. In particular, a careful rearrangement of Interrupt ReQuest (IRQ) affinities, together with the kernel's CPU isolation mechanism, makes it possible to obtain either soft or hard real-time behavior depending on the synchronisation mechanism adopted. Finally, the Multithreaded Application Real-Time executor (MARTe) framework is used to build applications particularly optimised for exploiting multi-core architectures. In the past year, four new systems based on this philosophy have been installed and are now part of JET's routine operation. The focus of the present work is on the configuration and interconnection of the ingredients that enable these new systems' real-time capability, and on the impact that JET's distributed real-time architecture has on system engineering requirements such as algorithm testing and plant commissioning. Details are given about the common real-time configuration and development path of these systems, followed by a brief description of each system together with results regarding its real-time performance. A cycle-time jitter analysis of a user-space MARTe-based application synchronising over a network is also presented, the goal being to compare its deterministic performance when running on a vanilla kernel and on a Messaging Realtime Grid (MRG) Linux kernel.
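As a rough illustration of the IRQ-affinity and CPU-isolation recipe mentioned above, the following sketch pins all movable IRQs onto housekeeping cores and confines the calling thread to a core reserved at boot. The /proc and affinity interfaces are standard Linux; the CPU numbers, mask and script structure are illustrative assumptions, not JET's actual configuration.

```python
# Hedged sketch of the user-space side of the recipe described above:
# move all IRQs away from a CPU reserved at boot (e.g. with the Linux
# `isolcpus` kernel parameter), then pin the real-time thread onto it.
# CPU 3 and the housekeeping mask are illustrative; MARTe performs its
# own, more elaborate, thread-to-core placement.
import glob
import os

ISOLATED_CPU = 3            # assumed to be listed in isolcpus= at boot
HOUSEKEEPING_MASK = "7"     # hex affinity mask for CPUs 0-2

def move_irqs_off_isolated_cpu():
    """Rewrite every IRQ's smp_affinity so it avoids the isolated CPU."""
    for path in glob.glob("/proc/irq/*/smp_affinity"):
        try:
            with open(path, "w") as f:
                f.write(HOUSEKEEPING_MASK)
        except OSError:
            pass  # some IRQs (e.g. per-CPU timers) cannot be moved

def pin_self_to_isolated_cpu():
    """Confine the calling process/thread to the reserved core."""
    os.sched_setaffinity(0, {ISOLATED_CPU})

if __name__ == "__main__":
    move_irqs_off_isolated_cpu()   # requires root privileges
    pin_self_to_isolated_cpu()
```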
Abstract:
Currently there is strong interest in expanding the alternative energy sources available to aviation, and thereby reducing both the carbon footprint and the heavy dependence of many countries on fossil fuels. Many research studies are therefore being carried out on converting vegetable feedstock, or biomass, into a new energy source. However, the successful replacement of petroleum-derived fuels with biofuels requires compliance with strict requirements and suitable properties. This project studies the compatibility of materials with blends of coconut (CBK20), babassu (BBK20) and palm kernel (PBK20) biokerosene with commercial aviation kerosene Jet A-1 (K-2). The materials studied are polymeric and elastomeric materials, metals and aviation composites that form part of the aircraft fuel system. The objective of this study is to demonstrate that both the tested materials and the investigated fuels are compatible when in contact at a given temperature. To this end, material and fuel properties have been compared following the established standard test methods.
Abstract:
A number of research and development activities are exploring Time and Space Partitioning (TSP) as a way to implement safe and secure flight software. This approach makes it possible to execute several real-time applications with different levels of criticality on the same computer board. To that end, flight applications must be isolated from each other in the temporal and spatial domains. This paper presents the first results of a partitioning platform based on the Open Ravenscar Kernel (ORK+) and the XtratuM hypervisor. ORK+ is a small, reliable real-time kernel supporting the Ada Ravenscar computational model that is central to the ASSERT development process. XtratuM supports multiple virtual machines, i.e. partitions, on a single computer and is being used in the Integrated Modular Avionics for Space study. ORK+ executes in an XtratuM partition, enabling Ada applications to share the computer board with other applications.
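As a purely illustrative sketch of the temporal-isolation idea, the following simulates a static cyclic schedule in which each partition receives a fixed slice of every major frame. The partition names and budgets are invented; XtratuM's real scheduling is enforced by the hypervisor through its configuration, not by application code like this.

```python
# Minimal sketch of temporal partitioning via a static cyclic schedule,
# in the spirit of TSP hypervisors such as XtratuM. All names and slot
# durations are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Slot:
    partition: str    # which partition owns this window
    duration_ms: int  # fixed budget inside the major frame

MAJOR_FRAME = [Slot("ork_ada_app", 20), Slot("payload_sw", 30), Slot("io_server", 10)]

def run_major_frames(n_frames: int) -> None:
    t = 0
    for _ in range(n_frames):
        for slot in MAJOR_FRAME:
            # A real hypervisor switches context here and revokes the CPU
            # when the budget expires, regardless of partition behaviour.
            print(f"t={t:4d} ms: running {slot.partition} for {slot.duration_ms} ms")
            t += slot.duration_ms

run_major_frames(2)
```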
Abstract:
Evaluating the seismic hazard requires establishing a distribution of the seismic activity rate, irrespective of the methodology used in the evaluation. In practice, how that activity rate is established tends to be the main difference between the various evaluation methods. The traditional procedure relies on a seismogenic zonation and the Gutenberg-Richter (GR) hypothesis. Competing zonations are often compared by looking only at the geometry of the zones, but the resulting activity rate is affected by both the geometry and the values assigned to the GR parameters. Contour plots can be used to conduct more meaningful comparisons, provided the GR parameters are suitably normalised. More recent approaches for establishing the seismic activity rate forgo the use of zones and GR statistics, and special attention is paid here to such procedures. The paper presents comparisons between the local activity rates that result for the complete Iberian Peninsula using kernel estimators as well as two seismogenic zonations. It is concluded that the smooth variation of the seismic activity rate produced by zoneless methods is more realistic than the stepwise changes associated with zoned approaches; moreover, the choice of zonation often has a stronger influence on the results than its fairly subjective origin would warrant. It is also observed that the activity rate derived from the kernel approach, related to the GR parameter 'a', is qualitatively consistent with the epicentres in the catalogue. Finally, when comparing alternative zonations it is not just their geometry but also the resulting distribution of activity rate that should be compared.
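As a minimal sketch of the zoneless idea, the snippet below smooths a catalogue of epicentres with a fixed-bandwidth Gaussian kernel to produce a spatially continuous activity rate. The bandwidth, planar coordinates and sample events are illustrative assumptions; practical zoneless methods typically use adaptive bandwidths and magnitude-dependent weighting.

```python
# Minimal sketch of a zoneless (kernel) estimate of seismic activity rate.
# Assumptions (not from the paper): fixed-bandwidth 2-D Gaussian kernel,
# epicentres as planar (x, y) coordinates in km, catalogue span in years.
import numpy as np

def activity_rate(grid_xy, epicentres_xy, bandwidth_km=50.0, years=100.0):
    """Events per unit area per year at each grid point."""
    rate = np.zeros(len(grid_xy))
    norm = 1.0 / (2.0 * np.pi * bandwidth_km**2)  # 2-D Gaussian normalisation
    for ex, ey in epicentres_xy:
        d2 = (grid_xy[:, 0] - ex)**2 + (grid_xy[:, 1] - ey)**2
        rate += norm * np.exp(-0.5 * d2 / bandwidth_km**2)
    return rate / years

# Example: a coarse grid and three hypothetical epicentres.
xs, ys = np.meshgrid(np.linspace(0, 500, 26), np.linspace(0, 500, 26))
grid = np.column_stack([xs.ravel(), ys.ravel()])
events = [(120.0, 80.0), (130.0, 95.0), (400.0, 310.0)]
print(activity_rate(grid, events).max())
```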
Transformation-based implementation and optimization of programs exploiting the basic Andorra model.
Abstract:
The characteristics of CC and CLP systems are in principle very different. However, a recent trend towards convergence in the implementation techniques for these systems can be observed. While CLP and Prolog systems have been incorporating capabilities to deal with user-defined suspension and coroutining, CC compilers have been trying to coalesce fine-grained tasks into coarser-grained sequential threads. This convergence of techniques opens up the possibility of having a general purpose kernel language and abstract machine to serve as a compilation target for a variety of user-level languages. We propose a transformation technique directed towards such an objective. In particular, we report on techniques to support the Andorra computational model, essentially emulating the Andorra-I system via program transformation into a sequential language with delay primitives. The system is automatic, comprising an optional program analyzer and a basic transformer to the kernel language. It turns out that a simple parallel CLP or Prolog system with dynamic scheduling is sufficient as a kernel language for this purpose. The preliminary results are quite encouraging: performance of the resulting system is comparable to the current Andorra-I implementation.
Abstract:
We discuss experiences gained by porting a Software Validation Facility (SVF) and a satellite Central Software (CSW) to a platform with support for Time and Space Partitioning (TSP). The SVF and CSW are part of the EagleEye reference mission of the European Space Agency (ESA). As a reference mission, EagleEye is a perfect candidate for evaluating practical aspects of developing satellite CSW for, and on, TSP platforms. The specific TSP platform we used consists of a simulated LEON3 CPU controlled by the XtratuM separation micro-kernel. On top of this, we run five separate partitions. Each partition runs its own real-time operating system or Ada run-time kernel, which in turn runs the application software of the CSW. We describe issues related to partitioning; inter-partition communication; scheduling; I/O; and fault detection, isolation, and recovery (FDIR).
Abstract:
Partitioning is a common approach to developing mixed-criticality systems, in which partitions are isolated from each other in both the temporal and the spatial domain in order to prevent low-criticality subsystems from compromising other subsystems with a high level of criticality in case of misbehaviour. The advent of many-core processors, on the other hand, opens the way to highly parallel systems in which all partitions can be allocated to dedicated processor cores. This trend will simplify processor scheduling, although other issues, such as mutual interference in the temporal domain, may arise as a consequence of memory and device sharing. The paper describes an architecture for multi-core partitioned systems including critical subsystems built with the Ada Ravenscar profile. Some implementation issues are discussed, and experience in implementing the ORK kernel on the XtratuM partitioning hypervisor is presented.
Abstract:
Virtualization techniques have received increased attention in the field of embedded real-time systems. Such techniques provide a set of virtual machines that run on a single hardware platform, thus allowing several application programs to be executed as though they were running on separate machines, with isolated memory spaces and a fraction of the real processor time available to each of them. This paper deals with some problems that arise when implementing real-time systems written in Ada on a virtual machine. The effects of virtualization on the performance of the Ada real-time services are analysed, and requirements for the virtualization layer are derived. Virtual-machine time services are also defined in order to properly support Ada real-time applications. The implementation of the ORK+ kernel on the XtratuM supervisor is used as an example.
Abstract:
Purpose: A fully three-dimensional (3D), massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of the system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work has focused on the development of efficient region-search techniques to sample the system response probabilities, suitable for asymmetric kernel models, including elliptical Gaussian models that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique has been used to sample the probability density function over a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed within different processing units. In this way, both multicore and multiple many-core processing units can be efficiently exploited. Tests have been conducted with probability models that take into account the non-collinearity, positron range, and crystal penetration effects, which produce tubes of response with varying elliptical sections whose axes are a function of the crystal's thickness and the angle of incidence of the given LOR. The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: This new technique provides superior image quality in terms of signal-to-noise ratio compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed, based on Monte Carlo simulations and new parallelization techniques aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, thus allowing direct control of the trade-off between speed and quality during the reconstruction.
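For orientation, a single LM-OSEM subset update has the familiar multiplicative form sketched below. The sketch assumes precomputed dense response rows for each recorded event, whereas the paper computes them on the fly from Monte Carlo derived kernels restricted to the ROR; names and the tiny example are illustrative.

```python
# Minimal sketch of one list-mode OSEM (LM-OSEM) subset update.
# Assumptions (illustrative, not the paper's implementation): the system
# response a_ij for each event i is a dense row over voxels j, and `sens`
# is the voxel sensitivity image (sum of a_ij over all possible LORs).
import numpy as np

def lm_osem_subset_update(image, event_rows, sens, eps=1e-12):
    """One LM-OSEM update over a subset of list-mode events."""
    backproj = np.zeros_like(image)
    for a_i in event_rows:                 # a_i: response row of event i
        forward = a_i @ image + eps        # expected counts on this LOR
        backproj += a_i / forward          # backproject the ratio
    return image * backproj / (sens + eps)

# Tiny example: 4-voxel image, 3 events with hypothetical response rows.
img = np.ones(4)
events = [np.array(r, float) for r in ([1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1])]
sensitivity = np.full(4, 2.0)
print(lm_osem_subset_update(img, events, sensitivity))
```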
Abstract:
The use of biofuels in the aviation sector has economic and environmental benefits. Among the options for the production of renewable jet fuels, hydroprocessed esters and fatty acids (HEFA) have received predominant attention compared with fatty acid methyl esters (FAME), which are not approved as additives for jet fuels. However, the presence of oxygen in methyl esters tends to reduce soot emissions and therefore particulate matter emissions. This sooting tendency is quantified in this work with an oxygen-extended sooting index based on smoke point measurements. Results show a considerable reduction in the sooting tendency for all biokerosenes (produced by transesterification and, in some cases, distillation) with respect to fossil kerosenes. Among the tested biokerosenes, the one made from palm kernel oil was the most effective, and non-distilled methyl esters (from camelina and linseed oils) were less effective than distilled biokerosenes at reducing the sooting tendency. These results may constitute an additional argument for the use of FAMEs as blend components of jet fuels. Other arguments were pointed out in previous publications, but some controversy has arisen over the use of these components. Some of the criticism was based on the fact that the methods used in our previous work are not among those approved for jet fuels in the standards, and concluded that the use of FAME in any amount is therefore inappropriate. However, some of the standard methods have not been updated to account for oxygenated components (such as the method for obtaining the lower heating value), and others are not precise enough (such as the methods for measuring the freezing point), whereas some alternative methods may provide better reproducibility for oxygenated fuels.
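One common form of oxygen-extended sooting index normalises a composition term by the measured smoke point, OESI = a(n + m/4 - p/2)/SP + b for a fuel CnHmOp with smoke point SP; whether this exact form is the one used here is not stated in the abstract, and the calibration coefficients in the sketch below are purely illustrative.

```python
# Hedged sketch of an oxygen-extended sooting index (OESI) of the form
# OESI = a * (n + m/4 - p/2) / SP + b, for a fuel C_n H_m O_p with smoke
# point SP in mm. The coefficients a and b are apparatus-specific
# calibration constants; the values below are illustrative only.
def oesi(n_C: float, m_H: float, p_O: float, smoke_point_mm: float,
         a: float = 30.0, b: float = 0.0) -> float:
    return a * (n_C + m_H / 4.0 - p_O / 2.0) / smoke_point_mm + b

# Hypothetical comparison: methyl laurate (C13H26O2) vs. a C11 alkane,
# with made-up smoke points, to show how oxygen lowers the index.
print(oesi(13, 26, 2, smoke_point_mm=50.0))   # oxygenated ester
print(oesi(11, 24, 0, smoke_point_mm=28.0))   # paraffinic kerosene-like
```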
Abstract:
The geoid, defined as the equipotential surface that best fits (in the least-squares sense) the mean sea level at a given epoch, is the surface used as a reference for determining orthometric heights. If we have an equipotential reference surface or a precise local geoid, we can then determine orthometric heights efficiently from the ellipsoidal heights provided by the Global Navigation Satellite System (GNSS). One of the most important unsolved problems in geodesy is the lack of a global altimetric datum of the appropriate precision (Sjoberg, 2011). In the absence of one that would allow us to obtain absolute values of the geoid undulation with the required precision, it is necessary to use geopotential models as an alternative. The recently published EGM2008 model reflects a marked improvement in its three data sources; it contains additional coefficients up to degree 2190 and order 2159 and offers a substantial improvement in accuracy (Pavlis et al., 2008). When gravity values and high-quality digital terrain models (DTM) are available for a given region, it is possible to obtain geopotential models of higher resolution and precision than the global ones. Although the National Geodetic Survey of the United States of America (NGS) has been developing geoid models for the conterminous United States and its territories since the nineties, areas such as Puerto Rico and the U.S. Virgin Islands have lagged behind in applying these regional geoid models and obtaining more accurate results with them. At present, the regional geopotential model in force for Puerto Rico and the U.S. Virgin Islands is GEOID12A (Roman and Weston, 2012). Given this need, and given the uncertainty about how a geoid model developed exclusively from local gravity data would behave, we have taken on the task of developing a gravimetric geoid model as a reference system for orthometric heights. To develop a gravimetric geoid model for the island of Puerto Rico, it was necessary to implement a methodology for analysing and validating the existing terrestrial gravity data. Using altimetry-based validation with geographic information systems and mathematical validation by collocation with the Gravsoft suite (Tscherning et al., 1994) in its Python version (Nielsen et al., 2012), it was possible to validate 1673 free-air anomaly values out of a total of 1894 observations obtained from the database of the Bureau Gravimétrique International (BGI). Applying these methodologies yielded a reliable gravity-anomaly database that can be used for many applications in science and engineering. Given the low density of the existing gravity data, it was necessary to employ an alternative method to densify the available free-air anomaly values. Following the methodology proposed by Jekeli et al. (2009b), free-air anomalies were derived from a DTM. These anomalies were adjusted using the validated free-air anomalies and, after a least-squares fit by geographic zone, a uniform grid of DTM-derived free-air anomalies was obtained. After applying the topographic corrections and determining the indirect effect of the topography and the contribution of the global geopotential model EGM2008, a grid of residual anomalies was obtained. These residual anomalies were used to determine the gravimetric geoid using several techniques, among them the planar approximation of the Stokes function and the modifications of the Stokes kernel proposed by Wong and Gore (1969), Vanicek and Kleusberg (1987) and Featherstone et al. (1998). Once the different gravimetric geoid models had been determined, they were validated using a set of stations of the Puerto Rico Vertical Datum of 2002 (PRVD02) levelling network with published ellipsoidal heights and elevations. In the absence of orthometric heights at these stations, the elevations obtained from first-order levelling were used to determine the geometric geoid undulations (Roman et al., 2013). After establishing a total of 990 baselines, two analyses were performed to assess the 'accuracy' of the geoid models. The first analysis examined the differences between the increments of the geometric geoid undulation and the increments of the geoid undulation of the different models (the gravimetric models, EGM2008 and GEOID12A) as a function of the distance between validation stations; the model with the Stokes kernel modification proposed by Wong and Gore showed the best 'accuracy' in 91.1% of the analysed baselines. The second analysis, which considered all 990 baselines, examined the same differences and again found that the model with the highest 'accuracy' was the geoid with the Wong and Gore modification of the Stokes kernel. In this analysis, the Wong and Gore gravimetric geoid model achieved an 'accuracy' of 0.027 m, compared with 0.031 m for the global geopotential model EGM2008 and 0.057 m for the regional model GEOID12A. We conclude that the methodology presented here is adequate, since it yielded a gravimetric geoid model with a higher 'accuracy' than the available geopotential models, even surpassing that of the global geopotential model EGM2008.
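For reference, the Wong and Gore (1969) modification cited above removes the low-degree terms from the spectral expansion of the Stokes kernel. The sketch below evaluates it numerically from the standard closed form of the Stokes function; the truncation degree L = 20 and the sample spherical distances are arbitrary illustrative choices.

```python
# Sketch of the Wong and Gore (1969) modification of the Stokes kernel:
# degrees 2..L are removed from the expansion
#   S(psi) = sum_{n>=2} (2n+1)/(n-1) P_n(cos psi).
# The closed form of S(psi) is standard; L = 20 is illustrative only.
import numpy as np
from scipy.special import eval_legendre

def stokes(psi):
    """Classical Stokes kernel, psi in radians."""
    s = np.sin(psi / 2.0)
    return (1.0 / s - 6.0 * s + 1.0 - 5.0 * np.cos(psi)
            - 3.0 * np.cos(psi) * np.log(s + s * s))

def stokes_wong_gore(psi, L=20):
    """Wong-Gore kernel: Stokes kernel with degrees 2..L removed."""
    t = np.cos(psi)
    low = sum((2.0 * n + 1.0) / (n - 1.0) * eval_legendre(n, t)
              for n in range(2, L + 1))
    return stokes(psi) - low

psi = np.radians(np.array([0.5, 1.0, 3.0, 6.0]))  # spherical distances
print(stokes_wong_gore(psi, L=20))
```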
Abstract:
The classical Kramer sampling theorem provides a method for obtaining orthogonal sampling formulas. Moreover, it has been the cornerstone of a significant mathematical literature on sampling theorems associated with differential and difference problems. In this work we provide, in a unified way, new and old generalizations of this result corresponding to various different settings; all these generalizations are illustrated with examples. All the different situations throughout the paper share a basic approach: the functions to be sampled are obtained by duality in a separable Hilbert space H through an H-valued kernel K defined on an appropriate domain.
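For reference, a standard statement of the classical Kramer theorem, in generic notation that may differ from the paper's, is:

```latex
% Classical Kramer sampling theorem (generic notation).
% Let $K(\cdot,t)\in L^2(I)$ for each $t$, and suppose there is a sequence
% $\{t_n\}$ such that $\{K(\cdot,t_n)\}$ is a complete orthogonal family
% in $L^2(I)$. Then every function of the form
\[
  f(t) \;=\; \int_I F(x)\,\overline{K(x,t)}\,\mathrm{d}x,
  \qquad F \in L^2(I),
\]
% admits the orthogonal sampling expansion
\[
  f(t) \;=\; \sum_n f(t_n)\, S_n(t),
  \qquad
  S_n(t) \;=\; \frac{\int_I K(x,t)\,\overline{K(x,t_n)}\,\mathrm{d}x}
                    {\int_I |K(x,t_n)|^2\,\mathrm{d}x},
\]
% with the series converging absolutely and uniformly wherever
% $\|K(\cdot,t)\|$ is bounded.
```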
Abstract:
Three different oils (babassu, coconut and palm kernel) have been transesterified with methanol. The fatty acid methyl esters (FAME) have been subjected to vacuum fractional distillation, and the low-boiling-point fractions have been blended with fossil kerosene in three different proportions: 5, 10 and 20% vol.
Abstract:
Conditions are identified under which analyses of laminar mixing layers can shed light on aspects of turbulent spray combustion. With this in mind, laminar spray-combustion models are formulated for both non-premixed and partially premixed systems. The laminar mixing layer separating a hot-air stream from a monodisperse spray carried by either an inert gas or air is investigated numerically and analytically in an effort to increase understanding of the ignition process leading to stabilization of high-speed spray combustion. The problem is formulated in an Eulerian framework, with the conservation equations written in the boundary-layer approximation and with a one-step Arrhenius model adopted for the chemistry description. The numerical integrations unveil two different types of ignition behaviour depending on the fuel availability in the reaction kernel, which in turn depends on the rates of droplet vaporization and fuel-vapour diffusion. When sufficient fuel is available near the hot boundary, as occurs when the thermochemical properties of heptane are employed for the fuel in the integrations, combustion is established through a precipitous temperature increase at a well-defined thermal-runaway location, a phenomenon that is amenable to a theoretical analysis based on activation-energy asymptotics, presented here, following earlier ideas developed in describing unsteady gaseous ignition in mixing layers. By way of contrast, when the amount of fuel vapour reaching the hot boundary is small, as is observed in the computations employing the thermochemical properties of methanol, the incipient chemical reaction gives rise to a slowly developing lean deflagration that consumes the available fuel as it propagates across the mixing layer towards the spray. The flame structure that develops downstream from the ignition point depends on the fuel considered and also on the spray carrier gas, with fuel sprays carried by air displaying either a lean deflagration bounding a region of distributed reaction or a distinct double-flame structure with a rich premixed flame on the spray side and a diffusion flame on the air side. Results are calculated for the distributions of mixture fraction and scalar dissipation rate across the mixing layer that reveal complexities that serve to identify differences between spray-flamelet and gaseous-flamelet problems.
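The one-step Arrhenius chemistry mentioned above typically takes the following generic form (standard notation; the parameters are model inputs, not values quoted in the abstract):

```latex
% Generic one-step irreversible reaction, fuel + oxidizer -> products,
% with an Arrhenius rate; notation is standard, not specific to this paper.
\[
  \dot{\omega} \;=\; B\,\rho^{2}\, Y_{\mathrm{F}}\, Y_{\mathrm{O_2}}\,
  \exp\!\left(-\frac{E_a}{R\,T}\right),
\]
% where $B$ is the pre-exponential factor, $\rho$ the density,
% $Y_{\mathrm{F}}$ and $Y_{\mathrm{O_2}}$ the fuel and oxygen mass
% fractions, $E_a$ the activation energy, and $R$ the gas constant.
% Activation-energy asymptotics exploits the limit $E_a/(R\,T) \gg 1$.
```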
Abstract:
Purely data-driven approaches for machine learning present difficulties when data are scarce relative to the complexity of the model or when the model is forced to extrapolate. On the other hand, purely mechanistic approaches need to identify and specify all the interactions in the problem at hand (which may not be feasible) and still leave the issue of how to parameterize the system. In this paper, we present a hybrid approach using Gaussian processes and differential equations to combine data-driven modeling with a physical model of the system. We show how different, physically inspired, kernel functions can be developed through sensible, simple, mechanistic assumptions about the underlying system. The versatility of our approach is illustrated with three case studies from motion capture, computational biology, and geostatistics.
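As a small sketch of how a mechanistic assumption induces a kernel, consider a latent GP forced through a first-order linear ODE: the covariance of the output is a double integral of the latent kernel against the ODE's Green's function, evaluated below by simple quadrature. The RBF latent kernel, decay rate and quadrature grid are illustrative choices, not the paper's constructions.

```python
# Sketch of a "physically inspired" GP kernel obtained by pushing a
# latent GP through the mechanistic model
#   dx/dt + gamma * x = u(t),  x(0) = 0,
# whose solution is x(t) = int_0^t exp(-gamma (t - s)) u(s) ds.
# If u ~ GP(0, k_u), then x is a GP whose covariance is the double
# integral below, approximated here by the midpoint rule.
import numpy as np

def k_u(s, sp, ell=0.3):
    """Latent RBF kernel (illustrative choice)."""
    return np.exp(-0.5 * (s - sp) ** 2 / ell ** 2)

def k_x(t, tp, gamma=2.0, n=200):
    """Covariance of the ODE output via quadrature over [0,t] x [0,tp]."""
    s = (np.arange(n) + 0.5) * t / n       # quadrature nodes on [0, t]
    sp = (np.arange(n) + 0.5) * tp / n     # quadrature nodes on [0, tp]
    G = np.exp(-gamma * (t - s))[:, None] * np.exp(-gamma * (tp - sp))[None, :]
    return (t / n) * (tp / n) * np.sum(G * k_u(s[:, None], sp[None, :]))

# The induced kernel inherits the ODE's smoothing: nearby times correlate.
print(k_x(1.0, 1.0), k_x(1.0, 1.2), k_x(1.0, 3.0))
```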