839 results for hidden semi markov models


Relevance: 30.00%

Abstract:

In this work we compared the parameter estimates of ARCH models obtained with a complete Bayesian method and with an empirical Bayesian method, adopting a non-informative prior distribution and an informative prior distribution, respectively. We also considered a reparameterization of the models that maps the parameter space onto the real line; this allows normal prior distributions to be chosen for the transformed parameters. The posterior summaries were obtained using Markov chain Monte Carlo (MCMC) methods. The methodology was evaluated on the Telebras series from the Brazilian financial market. The results show that both methods are able to fit ARCH models with different numbers of parameters. The empirical Bayesian method provided a more parsimonious model and a better fit to the data than the complete Bayesian method.
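
A minimal sketch of the reparameterization idea for an ARCH(1) model, not the method of the abstract itself: the constrained parameters are mapped to the real line (omega = exp(t1), alpha = logistic(t2)), normal priors are placed on the transformed parameters, and the posterior is explored with a random-walk Metropolis sampler. Prior scales, step size and the simulated series are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_post(theta, y, prior_sd=10.0):
        t1, t2 = theta
        omega, alpha = np.exp(t1), 1.0 / (1.0 + np.exp(-t2))
        sigma2 = omega + alpha * np.r_[np.var(y), y[:-1] ** 2]   # ARCH(1) conditional variances
        loglik = -0.5 * np.sum(np.log(2 * np.pi * sigma2) + y ** 2 / sigma2)
        logprior = -0.5 * np.sum(theta ** 2) / prior_sd ** 2      # normal priors on transformed scale
        return loglik + logprior

    def metropolis(y, n_iter=5000, step=0.1):
        theta = np.zeros(2)
        lp = log_post(theta, y)
        draws = []
        for _ in range(n_iter):
            prop = theta + step * rng.standard_normal(2)
            lp_prop = log_post(prop, y)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            draws.append(theta.copy())
        d = np.array(draws)
        return np.exp(d[:, 0]), 1.0 / (1.0 + np.exp(-d[:, 1]))    # back-transform to (omega, alpha)

    # Example on simulated returns (a real application would use the Telebras series).
    y = rng.standard_normal(500) * 0.01
    omega_draws, alpha_draws = metropolis(y)
    print(omega_draws.mean(), alpha_draws.mean())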

Relevance: 30.00%

Abstract:

Improvements in on-farm water and soil fertility management through water harvesting may prove key to upgrading smallholder farming systems in dry sub-humid and semi-arid sub-Saharan Africa (SSA). Current yield levels are usually less than 1 t ha-1, i.e., 3-5 times lower than the potential levels obtained by commercial farmers and researchers under similar agro-hydrological conditions. The low yields are ascribed to poor crop water availability caused by variable rainfall, losses in the on-farm water balance, and inherently low soil nutrient levels. Meeting increased food demand with less water and land in the region will require farming systems that provide higher yields per unit of water and/or land area. This thesis presents the results of a project on a water harvesting system aimed at upgrading the currently practised water management for maize (Zea mays, L.) in semi-arid SSA. The objectives were to a) quantify dry spell occurrence and its potential impact in currently practised smallholder grain production systems, b) test the agro-hydrological viability and compare maize yields in an on-farm experiment using combinations of supplemental irrigation (SI) and fertilizer, and c) estimate long-term changes in water balance and grain yields of a system with SI compared with the farmers' currently practised in-situ water harvesting. Water balance changes and crop growth were simulated over a 20-year horizon with the models MAIZE1&2. Dry spell analyses showed that potentially yield-limiting dry spells occur in at least 75% of seasons at two locations in semi-arid East Africa during a 20-year period. Dry spells occurred more frequently for crops cultivated on soils with low water-holding capacity than on soils with high water-holding capacity. The analysis indicated large on-farm water losses through deep percolation and run-off during the season despite seasonal crop water deficits. An on-farm experiment was set up during 1998-2001 in Machakos district, semi-arid Kenya. Surface run-off was collected and stored in a 300 m3 earth dam, and gravity-fed supplemental irrigation was applied to a maize field downstream of the dam. Combinations of no irrigation (NI), SI and 3 levels of N fertilizer (0, 30, 80 kg N ha-1) were applied. Over 5 seasons with rainfall ranging from 200 to 550 mm, the crop with SI and low nitrogen fertilizer gave 40% higher yields (**) than the farmers' conventional in-situ water harvesting system. Adding only SI or only low nitrogen did not result in significantly different yields. Accounting for the actual ability of a storage system and SI to mitigate dry spells, it was estimated that a farmer would see economic returns (after deduction of household consumption) between years 2 and 7 after investment in dam construction, depending on the dam sealant and labour cost assumed. In the simulations of maize growth and site water balance, SI increased annual grain yield by 35% as a result of timely applications. Field water balance changes in actual evapotranspiration (ETa) and deep percolation were insignificant with SI, although the absolute amount of ETa increased by 30 mm y-1 for the crop with SI compared to NI. The dam water balance showed that 30% of the harvested water was withdrawn productively as SI. Large losses occurred from the dam through seepage and spill-flow. Water productivity (WP, in terms of ETa) for maize with SI was on average 1 796 m3 per ton of grain, and for maize without SI 2 254 m3 per ton of grain, i.e., a decrease in WP of 25%.
The water harvesting system for supplemental irrigation of maize was shown to be both biophysically and economically viable. However, adoption by farmers will depend on other factors, including investment capacity, know-how and legislative possibilities. The viability of wider implementation of water harvesting at the catchment scale also needs to be assessed so that other downstream uses of water remain uncompromised.
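
As a rough illustration of the dry-spell counting step mentioned in objective a) (a sketch only, not the thesis analysis; the wet-day threshold, minimum spell length and rainfall series are assumptions), a dry spell can be taken as a run of consecutive days with rainfall below a threshold during the growing season:

    import numpy as np

    def dry_spells(rain_mm, wet_day_mm=1.0, min_length=10):
        """Return lengths of dry spells of at least `min_length` days."""
        spells, run = [], 0
        for r in rain_mm:
            if r < wet_day_mm:
                run += 1
            else:
                if run >= min_length:
                    spells.append(run)
                run = 0
        if run >= min_length:
            spells.append(run)
        return spells

    # Example: one synthetic 120-day season with sparse rainfall events.
    rng = np.random.default_rng(1)
    season = rng.choice([0.0, 0.0, 0.0, 5.0, 15.0], size=120)
    print(dry_spells(season))   # lengths of potentially yield-limiting dry spells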

Relevance: 30.00%

Abstract:

In the last decade, demand for structural health monitoring expertise has grown rapidly in the United States. The aging issues that most transportation structures are experiencing can seriously jeopardize the economy of a region, or even of a country. At the same time, the monitoring of structures is a central topic of discussion in Europe, where the preservation of historical buildings has been addressed over the last four centuries. More recently, concerns have arisen about the safety performance of civil structures after tragic events such as 9/11 or the 2011 Japan earthquake: engineers look for designs able to resist exceptional loadings due to earthquakes, hurricanes and terrorist attacks. After events of this kind, the assessment of the remaining life of the structure is at least as important as the initial performance design. It is therefore clear that reliable and accessible damage assessment techniques are crucial for localizing problems and for correct, immediate rehabilitation. System identification is a branch of the more general control theory. In civil engineering, this field covers the techniques needed to recover mechanical characteristics, such as stiffness or mass, from the signals captured by sensors. The objective of Dynamic Structural Identification (DSI) is to determine, from experimental measurements, the fundamental modal parameters of a generic structure in order to characterize its dynamic behaviour via a mathematical model. Knowledge of these parameters supports the model updating procedure, which produces corrected theoretical models through experimental validation. The main aim of this technique is to minimize the differences between the theoretical model results and in-situ measurements of dynamic data; the updated model then becomes a very effective tool for the rehabilitation of structures and for damage assessment. Instrumenting a whole structure is sometimes unfeasible, either because of the high cost involved or because it is not physically possible to reach every point of the structure. Numerous scholars have addressed this problem, and two main approaches are in use. With a limited number of sensors, one can gather time histories at a few locations, move the instruments to other locations and repeat the procedure. Alternatively, if enough sensors are available and the structure does not have a complicated geometry, it is usually sufficient to identify only the first principal modes. These two situations are presented in the works of Balsamo [1], for the application to a simple system, and Jun [2], for the analysis of a system with a limited number of sensors. Once the system identification has been carried out, the actual system characteristics become accessible. A frequent practice is to create an updated FEM model and assess whether or not the structure fulfils the required functions. The objective of this work is to present a general methodology for analyzing large structures with a limited amount of instrumentation while, at the same time, obtaining as much information as possible about the identified structure without resorting to methodologies that are difficult to interpret. A general framework for the state-space identification procedure via the OKID/ERA algorithm is developed and implemented in Matlab.
Some simple examples are then proposed to highlight the principal characteristics and advantages of the methodology. A new algebraic manipulation for the productive use of substructuring results is also developed and implemented.
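
A hedged illustration of the ERA step (a Python sketch under assumed impulse-response data, not the thesis's Matlab implementation): the Markov parameters are stacked into Hankel matrices, the SVD is truncated at the chosen model order, and a discrete-time state-space realization (A, B, C) is recovered.

    import numpy as np

    def era(markov, order, rows=10, cols=10):
        p, m = markov[0].shape
        H0 = np.block([[markov[i + j] for j in range(cols)] for i in range(rows)])
        H1 = np.block([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
        U, s, Vt = np.linalg.svd(H0, full_matrices=False)
        U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
        S_half = np.diag(np.sqrt(s))
        A = np.linalg.inv(S_half) @ U.T @ H1 @ Vt.T @ np.linalg.inv(S_half)
        B = (S_half @ Vt)[:, :m]
        C = (U @ S_half)[:p, :]
        return A, B, C

    # Example: Markov parameters of a known 2-state system, identified back by ERA.
    A_true = np.array([[0.9, 0.2], [-0.2, 0.9]])
    B_true = np.array([[0.0], [1.0]])
    C_true = np.array([[1.0, 0.0]])
    markov = [C_true @ np.linalg.matrix_power(A_true, k) @ B_true for k in range(25)]
    A, B, C = era(markov, order=2)
    print(np.sort(np.abs(np.linalg.eigvals(A))))   # magnitudes should match those of A_true (~0.92)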

Relevance: 30.00%

Abstract:

This thesis covers three major aspects of the identification of top quarks. The first is the understanding of their production mechanism and decay channels, and of how to translate theoretical formulae into programs that can simulate such physical processes using Monte Carlo techniques. In particular, the author has been involved in the introduction of the POWHEG generator into the framework of the ATLAS experiment. POWHEG is now fully used as the benchmark program for the simulation of ttbar pair production and decay, along with MC@NLO and AcerMC; this is shown in chapter one. The second chapter describes the ATLAS detector and its sub-systems, such as the calorimeters and muon chambers. Evaluating their efficiency is very important in order to fully understand what happens during the passage of radiation through the detector and to use this knowledge in the calculation of final quantities such as the ttbar production cross section. The last part of the thesis concerns the evaluation of this quantity using the so-called "golden channel" of ttbar decays, which yields one energetic charged lepton, four jets and a significant amount of missing transverse energy due to the neutrino. The most important systematic errors arising from the various parts of the calculation are studied in detail. The jet energy scale, trigger efficiency, Monte Carlo models, reconstruction algorithms and luminosity measurement are examples of contributions to the uncertainty on the cross section.
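
As a schematic illustration of how such a counting measurement is assembled (made-up numbers, not the thesis analysis), the cross section follows from the observed and background event counts, the selection efficiency and the integrated luminosity, with the systematic terms combined in quadrature:

    import math

    def cross_section(n_obs, n_bkg, eff, lumi_pb):
        # sigma = (N_obs - N_bkg) / (efficiency * integrated luminosity)
        return (n_obs - n_bkg) / (eff * lumi_pb)

    n_obs, n_bkg = 1200.0, 400.0      # assumed event counts in the lepton+jets channel
    eff, lumi_pb = 0.04, 35.0         # assumed selection efficiency and luminosity [pb^-1]

    sigma = cross_section(n_obs, n_bkg, eff, lumi_pb)

    # Relative uncertainties (assumed): statistics, jet energy scale, luminosity.
    rel = {"stat": math.sqrt(n_obs) / (n_obs - n_bkg), "jes": 0.08, "lumi": 0.034}
    total = math.sqrt(sum(v ** 2 for v in rel.values()))
    print(f"sigma = {sigma:.0f} pb +- {sigma * total:.0f} pb")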

Relevance: 30.00%

Abstract:

This thesis considers quantum hydrodynamic (QHD) models, which are used in particular for the modelling of semiconductor devices. The QHD model consists of the conservation equations for the particle density, the momentum and the energy density, including the quantum corrections through the Bohm potential. First, an overview is given of the known results for QHD models neglecting collision effects, which can be derived from a mixed-state Schrödinger system or from the Wigner equation. After reformulating the one-dimensional QHD equations with a linear potential as a stationary Schrödinger equation, semi-analytical versions of the QHD equations for the current-voltage curve are considered. Furthermore, viscous stabilizations of the QHD model are taken into account, and the numerical viscosity proposed by Gardner for the upwind finite-difference scheme is computed. The viscous QHD model is then derived from the Wigner equation with a Fokker-Planck collision operator; this model contains the physical viscosity introduced by the collision operator. The existence of solutions (with strictly positive particle density) of the isothermal, stationary, one-dimensional viscous model is shown for general data and non-homogeneous boundary conditions. The estimates required for the proof depend on the viscosity and therefore do not allow passage to the inviscid limit. Numerical simulations of the resonant tunnelling diode, modelled with the non-isothermal, stationary, one-dimensional viscous QHD model, show the influence of the viscosity on the solution. Using the quantum entropy minimization method developed by Degond and Ringhofer, the general QHD equations are derived from the Wigner-Boltzmann equation with the BGK collision operator. The derivation is based on a careful expansion of the quantum Maxwellian in powers of the scaled Planck constant. The resulting model also contains vertex terms and dispersive velocity terms. As a result, the current-voltage curve for the resonant tunnelling diode can still be obtained numerically with the general QHD model in one dimension. The results show that the dispersive velocity term stabilizes the solution of the system.
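
For reference, the quantum correction mentioned above is usually expressed through the Bohm potential, written here in its standard textbook form (not quoted from the thesis):

    \[
      Q(n) \;=\; -\frac{\hbar^{2}}{2m}\,\frac{\Delta\sqrt{n}}{\sqrt{n}},
    \]

so that the momentum balance of the QHD system carries an additional force term involving the gradient of \(Q(n)\) acting on the particle density \(n\).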

Relevance: 30.00%

Abstract:

The diagnosis, grading and classification of tumours have benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types owing to its ability to detect active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast-agent concentration curves versus time is very simple but operator-dependent, so more objective approaches have been developed to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time series is used for tissue classification. The main issue with these schemes is that they have no direct interpretation in terms of the physiological properties of the tissues. Model-based investigations, on the other hand, typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression applied to regions of interest suitably selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, non-linear modelling is computationally demanding, and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial guesses. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for the segmentation and classification of breast lesions. The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques trained for the segmentation and classification of breast lesions.
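
A minimal sketch of the pixel-wise kinetic fitting step, assuming the standard Tofts model with parameters Ktrans and kep and a simple bi-exponential population arterial input function; this is an illustration of the general approach, not the thesis pipeline:

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 5, 100)                        # minutes
    cp = 3.0 * (np.exp(-0.6 * t) - np.exp(-3.0 * t))  # assumed population AIF (a.u.)
    dt = t[1] - t[0]

    def tofts(t, ktrans, kep):
        # C_t(t) = Ktrans * integral of Cp(tau) * exp(-kep (t - tau)) dtau
        kernel = np.exp(-kep * t)
        return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

    # Simulate one "pixel" curve with noise and recover the parameters by non-linear regression.
    rng = np.random.default_rng(0)
    c_meas = tofts(t, 0.25, 0.6) + 0.005 * rng.standard_normal(len(t))
    (ktrans_hat, kep_hat), _ = curve_fit(tofts, t, c_meas, p0=(0.1, 0.5))
    print(ktrans_hat, kep_hat)   # estimates of Ktrans [1/min] and kep [1/min]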

Relevance: 30.00%

Abstract:

The biogenic production of NO in the soil accounts for between 10% and 40% of the global total. A large part of the uncertainty in the estimation of biogenic emissions stems from a shortage of measurements in arid regions, which comprise 40% of the earth's land surface area. This study examined the emission of NO from three ecosystems in southern Africa covering an aridity gradient from semi-arid savannas in South Africa to the hyper-arid Namib Desert in Namibia. A laboratory method was used to determine the release of NO as a function of soil moisture and soil temperature, and various methods were used to up-scale the net potential NO emissions determined in the laboratory to the vegetation patch, landscape or regional level. The importance of landscape, vegetation and climatic characteristics is emphasized. The first study took place in a semi-arid savanna region in South Africa, where soils were sampled from 4 landscape positions in the Kruger National Park. The maximum NO emission occurred at soil moisture contents of 10%-20% water-filled pore space (WFPS). The highest net potential NO emissions came from the low-lying landscape positions, which have the largest nitrogen (N) stocks and the largest input of N. Net potential NO fluxes obtained in the laboratory were converted into field fluxes for the period 2003-2005 for the four landscape positions, using soil moisture and temperature data obtained in situ at the Kruger National Park flux tower site. The NO emissions ranged from 1.5 to 8.5 kg ha-1 a-1. The field fluxes were up-scaled to a regional basis using geographic information system (GIS) based techniques; this indicated that the highest NO emissions came from the Midslope positions due to their large geographical extent in the research area. Total emissions ranged from 20 x 10^3 kg in 2004 to 34 x 10^3 kg in 2003 for the 56 000 ha Skukuza land type. The second study took place in an arid savanna ecosystem in the Kalahari, Botswana, where soils were collected from four differing vegetation patch types: Pan, Annual Grassland, Perennial Grassland and Bush Encroached patches. The maximum net potential NO fluxes ranged from 0.27 ng m-2 s-1 in the Pan patches to 2.95 ng m-2 s-1 in the Perennial Grassland patches. The net potential NO emissions were up-scaled for the year December 2005-November 2006 using 1) the net potential NO emissions determined in the laboratory, 2) the vegetation patch distribution obtained from LANDSAT NDVI measurements, 3) soil moisture contents estimated from ENVISAT ASAR measurements, and 4) soil surface temperatures from MODIS 8-day land surface temperature measurements. This up-scaling procedure gave NO fluxes ranging from 1.8 g ha-1 month-1 in the winter months (June and July) to 323 g ha-1 month-1 in the summer months (January-March). Differences occurred between the vegetation patches, with the highest NO fluxes in the Perennial Grassland patches and the lowest in the Pan patches. Over the course of the year, the mean up-scaled NO emission for the studied region was 0.54 kg ha-1 a-1, accounting for a loss of approximately 7.4% of the estimated N input to the region. The third study took place in the hyper-arid Namib Desert in Namibia, where soils were sampled from three ecosystems: Dunes, Gravel Plains and the Riparian zone of the Kuiseb River.
The net potential NO flux measured in the laboratory was used to estimate the NO flux for the Namib Desert for 2006 using modelled soil moisture and temperature data from the European Centre for Medium-Range Weather Forecasts (ECMWF) operational model at a 36 km x 35 km spatial resolution. The maximum net potential NO production occurred at low soil moisture contents (<10% WFPS), and the optimal temperature was 25°C in the Dune and Riparian ecosystems and 35°C in the Gravel Plain ecosystems. The maximum net potential NO fluxes ranged from 3.0 ng m-2 s-1 in the Riparian ecosystem to 6.2 ng m-2 s-1 in the Gravel Plains ecosystem. Up-scaling the net potential NO flux gave NO fluxes of up to 0.062 kg ha-1 a-1 in the Dune ecosystem and 0.544 kg ha-1 a-1 in the Gravel Plain ecosystem. These studies show that NO is emitted ubiquitously from terrestrial ecosystems; as such, the NO emission potential of deserts and scrublands should be taken into account in global NO models. The emission of NO is influenced by various factors such as landscape, vegetation and climate. This study looks at the potential emissions from certain arid and semi-arid environments in southern Africa and other parts of the world and discusses some of the important factors controlling the emission of NO from the soil.
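
A minimal sketch of the up-scaling logic described above (the response-curve shapes, patch areas and driving conditions are assumptions; the study used laboratory-derived response curves with satellite-derived soil moisture and temperature): the net potential flux is evaluated as a function of water-filled pore space and soil temperature for each patch type and then weighted by patch area.

    import numpy as np

    def no_flux(wfps, temp_c, f_max, wfps_opt=15.0, t_opt=30.0):
        """Net potential NO flux [ng m-2 s-1]: optimum-shaped in WFPS, scaled by temperature."""
        moisture = (wfps / wfps_opt) * np.exp(1.0 - wfps / wfps_opt)   # peaks at wfps_opt
        temperature = np.exp(0.07 * (temp_c - t_opt))                  # simple Q10-like scaling
        return f_max * moisture * np.minimum(temperature, 1.0)

    patches = {                     # f_max [ng m-2 s-1] and area [ha], illustrative values
        "Pan": (0.27, 5_000.0),
        "Perennial Grassland": (2.95, 20_000.0),
    }
    wfps, temp_c, seconds = 12.0, 28.0, 30 * 24 * 3600   # one month of assumed conditions

    total_kg = 0.0
    for name, (f_max, area_ha) in patches.items():
        flux = no_flux(wfps, temp_c, f_max)               # ng m-2 s-1
        kg = flux * 1e-12 * area_ha * 1e4 * seconds       # convert to kg per patch per month
        total_kg += kg
        print(f"{name}: {kg:.1f} kg NO-N per month")
    print(f"region total: {total_kg:.1f} kg")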

Relevance: 30.00%

Abstract:

The aim of this thesis is to propose Bayesian estimation, through Markov chain Monte Carlo, of multidimensional item response theory models for graded responses with complex structures and correlated traits. In particular, the work focuses on the multiunidimensional and the additive underlying latent structures: the first is widely used and represents a classical approach in multidimensional item response analysis, while the second is able to reflect the complexity of real interactions between items and respondents. A simulation study is conducted to evaluate parameter recovery for the proposed models under different conditions (sample size, test and subtest length, number of response categories, and correlation structure). The results show that parameter recovery is particularly sensitive to the sample size, due to the model complexity and the large number of parameters to be estimated. For a sufficiently large sample size, the parameters of the multiunidimensional and additive graded response models are well reproduced. The results are also affected by the trade-off between the number of items constituting the test and the number of item categories. An application of the proposed models to response data collected to investigate Romagna and San Marino residents' perceptions of and attitudes towards the tourism industry is also presented.
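
An illustrative simulation of graded responses under a multiunidimensional graded response model, of the kind used in such recovery studies (parameter values, test layout and correlation are assumptions, not the thesis design): each subtest measures one latent trait, and each item has a discrimination and a set of ordered thresholds.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_grm(n_persons=500, n_subtests=2, items_per_subtest=5, n_cats=4, rho=0.5):
        cov = rho + (1 - rho) * np.eye(n_subtests)             # correlated latent traits
        theta = rng.multivariate_normal(np.zeros(n_subtests), cov, size=n_persons)
        data = np.zeros((n_persons, n_subtests * items_per_subtest), dtype=int)
        for d in range(n_subtests):
            for j in range(items_per_subtest):
                a = rng.uniform(0.8, 2.0)                      # discrimination
                b = np.sort(rng.normal(0.0, 1.0, n_cats - 1))  # ordered thresholds
                # P(Y >= k) = logistic(a * (theta_d - b_k)); category probabilities by differencing
                p_ge = 1.0 / (1.0 + np.exp(-a * (theta[:, d, None] - b[None, :])))
                cum = np.hstack([np.ones((n_persons, 1)), p_ge, np.zeros((n_persons, 1))])
                probs = cum[:, :-1] - cum[:, 1:]
                u = rng.uniform(size=n_persons)
                data[:, d * items_per_subtest + j] = (u[:, None] > np.cumsum(probs, axis=1)).sum(axis=1)
        return theta, data

    theta, responses = simulate_grm()
    print(responses.shape, responses.min(), responses.max())   # 500 persons x 10 items, categories 0..3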

Relevance: 30.00%

Abstract:

Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, astronomical observations show that it accounts for only around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic-ray observations and by attempts to resolve open questions of the SM, such as the (g-2)_mu discrepancy, proposed U(1) extensions of the SM gauge group have attracted attention in recent years. In the considered U(1) extensions a new, light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM through kinetic mixing. This allows a search for this particle in laboratory experiments exploring the electromagnetic interaction. Various experimental programs have been started to search for hidden photons, for example in electron-scattering experiments, which are a versatile tool for exploring various physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. In these experiments the scattering of an electron beam off a hadronic target, e+(A,Z)->e+(A,Z)+l^+l^-, is investigated and a search for a very narrow resonance in the invariant mass distribution of the lepton pair is performed. This requires an accurate understanding of the theoretical basis of the underlying processes. To this end, the first part of this work demonstrates how the hidden photon can be motivated from existing puzzles encountered at the precision frontier of the SM. The main part of the thesis deals with the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for the bremsstrahlung emission of hidden photons in such experiments is studied. Based on these results, the applicability of the Weizsäcker-Williams approximation for calculating the signal cross section of the process, which is widely used in the design of such experimental setups, is investigated. In a next step, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden-photon parameter space. Finally, the derived methods are used to obtain predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of the complementary experiments. In the last part, a feasibility study for probing the hidden-photon model with rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This makes it possible to derive bounds on the parameter space from existing data and to estimate the reach of future experiments.
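
An illustrative sketch of the reconstruction step behind such a resonance search (not the thesis analysis): the invariant mass of each lepton pair is computed from the four-momenta and histogrammed, and a hidden photon would show up as a narrow peak on the smooth QED background. The four-momenta below are randomly generated for demonstration only.

    import numpy as np

    def invariant_mass(p1, p2):
        """p = (E, px, py, pz) in GeV; returns m(l+l-) in GeV."""
        e = p1[0] + p2[0]
        p = p1[1:] + p2[1:]
        return np.sqrt(np.maximum(e ** 2 - np.dot(p, p), 0.0))

    rng = np.random.default_rng(0)

    def random_lepton(energy):
        """Massless-lepton approximation with a random direction."""
        cos_t, phi = rng.uniform(-1, 1), rng.uniform(0, 2 * np.pi)
        sin_t = np.sqrt(1 - cos_t ** 2)
        return np.array([energy, energy * sin_t * np.cos(phi),
                         energy * sin_t * np.sin(phi), energy * cos_t])

    masses = [invariant_mass(random_lepton(rng.uniform(0.1, 0.4)),
                             random_lepton(rng.uniform(0.1, 0.4))) for _ in range(10000)]
    counts, edges = np.histogram(masses, bins=100)
    # A bump hunt would then scan this histogram for a localized excess over the background.
    print(edges[np.argmax(counts)])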

Relevance: 30.00%

Abstract:

Communication is shifting from a centralized scenario, in which media such as newspapers, radio and TV programmes produce information and people are just consumers, to a completely different, decentralized scenario in which everyone is potentially an information producer, through social networks, blogs and forums that allow real-time, worldwide information exchange. As a result of their widespread diffusion, these new instruments have started to play an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information that enterprises, political parties and other organizations can rely on. Analyzing data stored in servers all over the world is feasible by means of text mining techniques such as sentiment analysis, which aims to extract opinions from huge amounts of unstructured text. This makes it possible to determine, for instance, the degree of user satisfaction with products, services, politicians and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov chains. All these approaches rely on a Markov-chain-based model, which is language independent and whose key features are simplicity and generality, making it attractive with respect to previous, more sophisticated techniques. Each technique discussed has been tested in both Single-Domain and Cross-Domain Sentiment Classification settings, comparing its performance with that of two previous works. The analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature, for both single-domain and cross-domain tasks, in 2-class (i.e. positive and negative) Document Sentiment Classification. However, there is still room for improvement: this work also indicates the path to better performance, namely that a good, novel feature selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will also address validating these results on tasks with more than 2 classes.
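
A minimal sketch of a Markov-chain-based sentiment classifier in the spirit described above (not the dissertation's algorithms): one word-transition chain is trained per class, and a document is assigned to the class whose chain gives the higher log-likelihood of its word sequence. Tokenization, smoothing and the toy training data are assumptions.

    from collections import defaultdict
    import math

    def train_chain(docs):
        counts, totals = defaultdict(lambda: defaultdict(int)), defaultdict(int)
        for doc in docs:
            words = doc.lower().split()
            for prev, curr in zip(words, words[1:]):
                counts[prev][curr] += 1
                totals[prev] += 1
        return counts, totals

    def log_likelihood(doc, chain, vocab_size=10000):
        counts, totals = chain
        words = doc.lower().split()
        score = 0.0
        for prev, curr in zip(words, words[1:]):
            score += math.log((counts[prev][curr] + 1) / (totals[prev] + vocab_size))  # add-one smoothing
        return score

    pos_chain = train_chain(["great movie really enjoyable", "really great acting"])
    neg_chain = train_chain(["terrible movie really boring", "really bad acting"])

    doc = "really great movie"
    label = "positive" if log_likelihood(doc, pos_chain) > log_likelihood(doc, neg_chain) else "negative"
    print(label)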

Relevance: 30.00%

Abstract:

This work focuses on axions and axion-like particles (ALPs) and their possible relation to the 3.55 keV photon line detected in recent years from galaxy clusters and other astrophysical objects. We focus on axions that arise from string compactifications and study the vacuum structure of the resulting low-energy 4D N=1 supergravity effective field theory. We then provide a model which might explain the 3.55 keV line through the following process: a 7.1 keV dark matter axion decays into two light axions, which are in turn converted into photons via the Primakoff effect and a kinetic mixing between two U(1) gauge symmetries belonging to the hidden and the visible sector, respectively. We present two models: the first gives an outcome inconsistent with experimental data, while the second can yield the desired result.

Relevance: 30.00%

Abstract:

In this paper, we use morphological and numerical methods to test the hypothesis that seasonally formed fracture patterns in the Martian polar regions result from the brittle failure of seasonal CO2 slab ice. Observations of the polar regions of Mars by the High Resolution Imaging Science Experiment (HiRISE) show very narrow, dark, elongated linear patterns that appear during some periods in spring, disappear in summer and re-appear in the following spring. They repeatedly form in the same areas but do not repeat the exact pattern from year to year, which leads to the conclusion that they are cracks formed in the seasonal ice layer. Some models of seasonal surface processes rely on the existence of a transparent form of CO2 ice, so-called slab ice. For the observed cracks to form, the ice must be a continuous medium, not an agglomeration of relatively separate particles such as firn. The best explanation for our observations is a slab ice with relatively high transparency in the visible wavelength range. This transparency allows a solid-state greenhouse effect to act underneath the ice sheet, raising the pressure through sublimation from below. The trapped gas creates an overpressure, and at some point the ice sheet breaks, creating the observed cracks. We show that the times when the cracks appear are in agreement with the model calculation, providing one more piece of evidence that CO2 slab ice covers the polar areas in spring.

Relevance: 30.00%

Abstract:

Limitations in the visual information provided to surgeons during laparoscopic surgery increase the difficulty of procedures and thus reduce clinical indications and increase training time. This work presents a novel augmented reality visualization approach that aims to improve the visual data supplied for the targeting of non-visible anatomical structures in laparoscopic visceral surgery. The approach aims to facilitate the localisation of hidden structures with minimal damage to surrounding structures and with minimal training requirements. The proposed augmented reality visualization approach overlays endoscopic images with virtual 3D models of underlying critical structures, in addition to targeting and depth information pertaining to the targeted structures. Image overlay was achieved through the implementation of camera calibration techniques and the integration of the optically tracked endoscope into an existing image guidance system for liver surgery. The approach was validated in accuracy, clinical integration and targeting experiments. The mean overlay accuracy was found to be 3.5 mm ± 1.9 mm, and 92.7% of targets within a liver phantom were successfully located laparoscopically by non-trained subjects using the approach.
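
A minimal pinhole-projection sketch of the overlay idea (not the clinical system): given camera intrinsics from calibration and the endoscope pose from optical tracking, 3D model points of a hidden structure are projected onto the endoscopic image. The intrinsics, pose and model points below are assumed values for illustration.

    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],     # assumed intrinsics from camera calibration
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                           # camera rotation from optical tracking (assumed)
    t = np.array([0.0, 0.0, 0.0])           # camera translation [mm] (assumed)

    def project(points_mm):
        """Project Nx3 points in tracker/world coordinates [mm] to pixel coordinates."""
        cam = (R @ points_mm.T).T + t                  # world -> camera frame
        uvw = (K @ cam.T).T                            # pinhole projection
        return uvw[:, :2] / uvw[:, 2:3]                # normalize by depth

    # Example: three vertices of a virtual vessel model ~100 mm in front of the camera.
    vessel = np.array([[10.0, -5.0, 100.0], [12.0, -4.0, 105.0], [8.0, -6.0, 110.0]])
    print(project(vessel))                             # pixel positions at which to draw the overlay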

Relevance: 30.00%

Abstract:

Traffic particle concentrations show considerable spatial variability within a metropolitan area. We consider latent variable semiparametric regression models for modeling the spatial and temporal variability of black carbon and elemental carbon concentrations in the greater Boston area. Measurements of these pollutants, which are markers of traffic particles, were obtained from several individual exposure studies conducted at specific household locations, as well as from 15 ambient monitoring sites in the city. The models allow for both flexible, nonlinear effects of covariates and unexplained spatial and temporal variability in exposure. In addition, the different individual exposure studies recorded different surrogates of traffic particles, with some recording only outdoor concentrations of black or elemental carbon, some recording indoor concentrations of black carbon, and others recording both indoor and outdoor concentrations of black carbon. A joint model for outdoor and indoor exposure that specifies a spatially varying latent variable provides greater spatial coverage in the area of interest. We propose a penalised spline formulation of the model that relates to generalised kriging of the latent traffic pollution variable and leads to a natural Bayesian Markov chain Monte Carlo algorithm for model fitting. We propose methods that allow us to control the degrees of freedom of the smoother in a Bayesian framework. Finally, we present results from an analysis that applies the model to data from summer and winter separately.
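
An illustrative penalised-spline sketch (assumed data, knots and penalty, not the paper's model): a truncated-line basis with a ridge penalty on the knot coefficients, i.e. the mixed-model form that connects penalised splines to kriging of a latent surface, with the smoother's effective degrees of freedom read off as the trace of the hat matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 10, 200))
    y = np.sin(x) + 0.3 * rng.standard_normal(200)           # assumed noisy exposure surrogate

    knots = np.linspace(0.5, 9.5, 20)
    X = np.column_stack([np.ones_like(x), x])                 # unpenalised linear part
    Z = np.maximum(x[:, None] - knots[None, :], 0.0)          # truncated-line basis (penalised)

    def fit(lam):
        C = np.hstack([X, Z])
        D = np.diag([0.0, 0.0] + [lam] * len(knots))          # penalty only on spline coefficients
        beta = np.linalg.solve(C.T @ C + D, C.T @ y)
        return C @ beta

    fitted = fit(lam=1.0)

    # Effective degrees of freedom of the smoother: trace of the hat matrix.
    C = np.hstack([X, Z])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))
    edf = np.trace(C @ np.linalg.solve(C.T @ C + D, C.T))
    print(round(edf, 1))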