27 results for Inverse Rendering
Abstract:
Based on the results of detailed structural and petrological characterisation and on up-scaled laboratory values for sorption and diffusion, blind predictions were made for the STT1 dipole tracer test performed in the Swedish Äspö Hard Rock Laboratory. The tracers used were nonsorbing (uranine and tritiated water), weakly sorbing (22Na+, 85Sr2+, 47Ca2+) and more strongly sorbing (86Rb+, 133Ba2+, 137Cs+). Our model consists of two parts: (1) a flow part based on a 2D streamtube formalism, accounting for the natural background flow field and with an underlying homogeneous and isotropic transmissivity field, and (2) a transport part in terms of the dual-porosity medium approach, which is linked to the flow part by the flow porosity. The calibration of the model was done using the data from a single uranine breakthrough (PDT3). The study clearly showed that matrix diffusion into a highly porous material, fault gouge, had to be included in our model, as evidenced by the characteristic shape of the breakthrough curve and in line with geological observations. After the disclosure of the measurements, it turned out that, in spite of the simplicity of our model, the predictions for the nonsorbing and weakly sorbing tracers were fairly good. The blind predictions for the more strongly sorbing tracers were in general less accurate. The good predictions are deemed to be the result of a model structure strongly based on geological observation. The breakthrough curves were then inversely modelled to determine in situ values for the transport parameters and to draw conclusions about the applied model structure. For good fits, only one additional fracture family in contact with cataclasite had to be taken into account, and no new transport mechanisms had to be invoked. The in situ values of the effective diffusion coefficient for fault gouge are a factor of 2–15 larger than the laboratory data; for cataclasite, the in situ values are comparable to the laboratory data. The extracted Kd values for the weakly sorbing tracers are larger than Swedish laboratory data by a factor of 25–60, but agree within a factor of 3–5 for the more strongly sorbing nuclides. The reason for the inconsistency in the Kd values is the use of fresh granite in the laboratory studies, whereas tracers in the field experiment interact only with fracture fault gouge and, to a lesser extent, with cataclasite, both of which are mineralogically very different (e.g. clay-bearing) from the intact wall rock.
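For orientation, such a dual-porosity transport model can be written schematically as one-dimensional advection along a streamtube coupled to Fickian diffusion into the stagnant matrix pore water. The notation below is the generic one of the matrix-diffusion literature, not necessarily the paper's own:

\[
\frac{\partial c_f}{\partial t} + v\,\frac{\partial c_f}{\partial x} = \frac{D_e}{b}\left.\frac{\partial c_m}{\partial y}\right|_{y=0},
\qquad
\alpha\,\frac{\partial c_m}{\partial t} = D_e\,\frac{\partial^2 c_m}{\partial y^2},
\qquad
\alpha = \varepsilon + \rho\,K_d,
\]

where \(c_f\) and \(c_m\) are the tracer concentrations in the flowing water and in the matrix pore water, \(v\) the advection velocity, \(2b\) the fracture aperture, \(D_e\) the effective diffusion coefficient, and \(\alpha\) the rock capacity factor through which the sorption coefficient \(K_d\) enters. This makes explicit why \(D_e\) and \(K_d\) are the natural parameters to extract by inverse modelling of breakthrough curves.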
Abstract:
We solve two inverse spectral problems for star graphs of Stieltjes strings with Dirichlet and Neumann boundary conditions, respectively, at a selected vertex called the root. The root is either the central vertex or, in the more challenging problem, a pendant vertex of the star graph. At all other pendant vertices Dirichlet conditions are imposed; at the central vertex, at which a mass may be placed, continuity and Kirchhoff conditions are assumed. We derive conditions on two sets of real numbers to be the spectra of the above Dirichlet and Neumann problems. Our solution of the inverse problems is constructive: we establish algorithms to recover the mass distribution on the star graph (i.e. the point masses and the lengths of the subintervals between them) from these two spectra and from the lengths of the separate strings. If the root is a pendant vertex, the two spectra uniquely determine the parameters on the main string (i.e. the string incident to the root), provided the length of the main string is known. The mass distribution on the other edges need not be unique; this is due to the non-uniqueness caused by the non-strict interlacing of the given data in the case when the root is the central vertex. Finally, we relate our results to tree-patterned matrix inverse problems.
Abstract:
The decomposition of soil organic matter (SOM) is temperature dependent, but its response to a future warmer climate remains equivocal. Enhanced rates of SOM decomposition under increased global temperatures might cause higher CO2 emissions to the atmosphere and could therefore constitute a strong positive feedback. The magnitude of this feedback remains poorly understood, however, primarily because of the difficulty in quantifying the temperature sensitivity of stored, recalcitrant carbon, which comprises the bulk (>90%) of SOM in most soils. In this study we investigated the effects of climatic conditions on soil carbon dynamics using the attenuation of the 14C 'bomb' pulse as recorded in selected modern European speleothems. These new data were combined with published results to further examine soil carbon dynamics, and to explore the sensitivity of labile and recalcitrant organic matter decomposition to different climatic conditions. Temporal changes in 14C activity inferred from each speleothem were modelled using a three-pool soil carbon inverse model (applying a Monte Carlo method) to constrain soil carbon turnover rates at each site. Speleothems from sites characterised by semi-arid conditions, sparse vegetation, thin soil cover and high mean annual air temperatures (MAATs) exhibit weak attenuation of the atmospheric 14C 'bomb' peak (a low damping effect, D, in the range 55–77%) and low modelled mean respired carbon ages (MRCA), indicating that decomposition is dominated by young, recently fixed soil carbon. By contrast, humid and high-MAAT sites characterised by a thick soil cover and dense, well-developed vegetation display the highest damping effect (D = c. 90%) and the highest MRCA values (in the range from 350 ± 126 years to 571 ± 128 years). This suggests that carbon incorporated into these stalagmites originates predominantly from the decomposition of old, recalcitrant organic matter. SOM turnover rates cannot be ascribed to a single climate variable (e.g. MAAT) but instead reflect a complex interplay of climate (e.g. MAAT and moisture budget) and vegetation development.
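The forward model at the heart of such an inversion can be sketched in a few lines: each soil carbon pool acts as a well-mixed reservoir whose exponential age distribution smooths and delays the atmospheric 14C bomb pulse. The sketch below is a minimal illustration under that assumption; the pool fractions, turnover times, and input record are placeholders, not the paper's calibrated values:

```python
import numpy as np

def speleothem_14c(atm_14c, fractions, turnovers, dt=1.0):
    """Three-pool forward model: convolve an annual atmospheric 14C
    record with exponential impulse responses (mean ages = turnovers)
    and mix the pools with the given fractions (summing to 1)."""
    n = len(atm_14c)
    t = np.arange(n) * dt
    out = np.zeros(n)
    for f, tau in zip(fractions, turnovers):
        h = (dt / tau) * np.exp(-t / tau)   # pool impulse response
        out += f * np.convolve(atm_14c, h)[:n]
    return out

# A Monte Carlo inversion then draws (fractions, turnovers) at random
# and keeps parameter sets whose modelled curve reproduces the 14C
# activity measured in the speleothem within its uncertainty.
```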
Abstract:
The aim of this study was to develop a gradient structure tensor (GST)-based methodology for accurately measuring the degree of transverse isotropy in trabecular bone. Using femoral sub-regions scanned with high-resolution peripheral QCT (HR-pQCT) and clinical-resolution QCT, trabecular orientation was evaluated using the mean intercept length (MIL) on the HR-pQCT data and the GST on the QCT data, respectively. The influence of the local degree of transverse isotropy (DTI) and bone mineral density (BMD) was incorporated into the investigation. In addition, a power-based model was derived, rendering a 1:1 relationship between GST and MIL eigenvalues. A specific DTI threshold (DTIthres) was found for each investigated size of region of interest (ROI), above which the GST estimate of the major trabecular direction deviated no more than 30° from the gold-standard MIL in 95% of the remaining ROIs (mean error: 16°). An inverse relationship between ROI size and DTIthres was found for discrete ranges of BMD. A novel methodology has thus been developed in which transverse isotropy measures of trabecular bone can be obtained from clinical QCT images for a given ROI size, DTIthres and power coefficient. Including DTI may improve future clinical QCT finite-element predictions of bone strength and diagnoses of bone disease.
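The core tensor computation is compact enough to sketch. The following is a minimal illustration of a gradient structure tensor on a 3-D volume; the eigenvalue-ratio DTI measure used here is an assumption for illustration, since the abstract does not spell out the paper's exact definition:

```python
import numpy as np

def gst_orientation(vol):
    """Gradient structure tensor of a 3-D grey-level volume (e.g. a
    QCT region of interest): average outer product of the intensity
    gradient, eigen-decomposed to get the main trabecular direction."""
    gx, gy, gz = np.gradient(vol.astype(float))
    g = np.stack([gx.ravel(), gy.ravel(), gz.ravel()])
    tensor = g @ g.T / g.shape[1]       # 3x3 averaged outer product
    w, v = np.linalg.eigh(tensor)       # eigenvalues in ascending order
    main_dir = v[:, 0]  # least grey-level variation = along trabeculae
    dti = w[1] / w[2]   # ratio of transverse eigenvalues (1 = isotropic)
    return main_dir, dti
```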
Abstract:
In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, and participating media, in an elegant and unified framework. However, MCPT is a sampling-based approach and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis to ensure that each pixel value is below a predefined error threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis to strike an optimal trade-off between minimizing residual noise artifacts and preserving edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest per-pixel rMSE, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies by ensuring that we sample densely only those regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme based on a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, positions, or textures) to improve the robustness of the reconstruction in the presence of strong noise.
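As a rough illustration of the greedy loop described above (not the thesis' actual implementation: the renderer and the reconstruction filter are placeholders, and the variance bookkeeping is simplified):

```python
import numpy as np

def adaptive_mcpt(sample, reconstruct, npix, budget, init_spp=8, eps=1e-2):
    """Alternate reconstruction and rMSE-driven sampling. `sample(i, n)`
    returns n Monte Carlo samples for pixel i; `reconstruct(mean, var_of_mean)`
    returns the filtered image and its estimated per-pixel MSE."""
    n = np.full(npix, init_spp, dtype=float)
    samples = np.array([sample(i, init_spp) for i in range(npix)])
    mean, var = samples.mean(axis=1), samples.var(axis=1)
    spent = npix * init_spp
    while spent < budget:
        img, mse = reconstruct(mean, var / n)
        rmse = mse / (img ** 2 + eps)           # relative MSE estimate
        # distribute one batch of new samples proportionally to rMSE
        extra = np.random.multinomial(npix, rmse / rmse.sum())
        for i in np.flatnonzero(extra):
            new = sample(i, extra[i])
            tot = n[i] + extra[i]
            mean[i] += (new.mean() - mean[i]) * extra[i] / tot
            # pooled variance (ignores the between-batch mean shift)
            var[i] = (n[i] * var[i] + extra[i] * new.var()) / tot
            n[i] = tot
        spent += npix
    return reconstruct(mean, var / n)[0]
```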
Abstract:
Inverse fusion PCR cloning (IFPC) is an easy, PCR-based three-step cloning method that allows the seamless and directional insertion of PCR products into virtually all plasmids, with a free choice of the insertion site. The PCR-derived inserts contain a vector-complementary 5'-end that allows fusion with the vector by overlap-extension PCR, and the resulting amplified insert-vector fusions are then circularized by ligation prior to transformation. A minimal amount of starting material is needed and the number of experimental steps is reduced. Untreated circular plasmid, or alternatively bacteria containing the plasmid, can be used as template for the insertion, and clean-up of the insert fragment is not strictly required. The whole cloning procedure can be performed with minimal hands-on time and results in the generation of hundreds to tens of thousands of positive colonies, with minimal background.
Abstract:
The production of electron–positron pairs in time-dependent electric fields (the Schwinger mechanism) depends non-linearly on the applied field profile. Accordingly, the resulting momentum spectrum is extremely sensitive to small variations of the field parameters. Owing to this non-linear dependence, it has so far been unclear how to choose a field configuration such that a predetermined momentum distribution is generated. We show that quantum kinetic theory, along with optimal control theory, can be used to approximately solve this inverse problem for Schwinger pair production. We exemplify this by studying the superposition of a small number of harmonic components resulting in predetermined signatures in the asymptotic momentum spectrum. In the long run, our results could facilitate the observation of this as yet unobserved pair production mechanism in quantum electrodynamics by providing suggestions for tailored field configurations.
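Schematically, such an inverse problem reduces to an optimisation loop over the field parameters, with everything specific to the physics hidden in the forward solver. The sketch below is purely illustrative: the quantum kinetic solver is left as a placeholder, and the least-squares cost functional is an assumed choice, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

def tailor_field(target_spectrum, solve_qke, n_harmonics=4):
    """Search for amplitudes of a small number of harmonic field
    components whose predicted asymptotic pair spectrum matches a
    predetermined target. `solve_qke(amps)` stands in for a quantum
    kinetic forward solver returning the momentum spectrum on the
    same grid as `target_spectrum`."""
    def cost(amps):
        return np.sum((solve_qke(amps) - target_spectrum) ** 2)
    result = minimize(cost, np.zeros(n_harmonics), method="Nelder-Mead")
    return result.x
```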
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
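A toy version of the "a posteriori" strategy makes the idea concrete: filter the noisy estimate with a small bank of candidate filters and keep, per pixel, the one with the lowest estimated error. The error model below (a crude bias-plus-variance estimate against the noisy input, with a white-noise variance-reduction approximation) is an assumption for illustration, not any specific published estimator:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def select_filter(noisy, var, sigmas=(1.0, 2.0, 4.0)):
    """Per-pixel selection among Gaussian reconstruction filters by
    estimated MSE = bias^2 + variance."""
    best, best_err = noisy.copy(), var.copy()   # unfiltered candidate
    for s in sigmas:
        cand = gaussian_filter(noisy, s)
        bias2 = np.maximum((cand - noisy) ** 2 - var, 0.0)
        cvar = var / (4.0 * np.pi * s ** 2)  # variance after filtering
        err = bias2 + cvar
        take = err < best_err
        best[take], best_err[take] = cand[take], err[take]
    return best
```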
Abstract:
We present a novel algorithm to reconstruct high-quality images from sampled pixels and gradients in gradient-domain rendering. Our approach extends screened Poisson reconstruction by adding regularization constraints. Our key idea is to exploit local patches in feature images, which contain per-pixel normals, textures, positions, etc., to formulate these constraints. We describe a GPU implementation of our approach that runs on the order of seconds on megapixel images. We demonstrate a significant improvement in image quality over screened Poisson reconstruction under the L1 norm. Because we adapt the regularization constraints to the noise level in the input, our algorithm is consistent and converges to the ground truth.
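As a baseline for what this approach extends, an unregularised L2 screened Poisson reconstruction can be written in a few lines with an FFT solver. Periodic boundaries are assumed here for brevity; the paper's feature-patch constraints and L1 formulation are not included:

```python
import numpy as np

def screened_poisson(pixels, dx, dy, alpha=0.2):
    """Solve (alpha - Laplacian) I = alpha * p - div(g) for image I,
    given sampled pixels p and gradients g = (dx, dy); this minimises
    |grad I - g|^2 + alpha * |I - p|^2 in the L2 sense."""
    # divergence of the sampled gradient field (backward differences)
    div = (dx - np.roll(dx, 1, axis=1)) + (dy - np.roll(dy, 1, axis=0))
    h, w = pixels.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # eigenvalues of the (negated) 5-point discrete Laplacian
    lap = 4.0 - 2.0 * np.cos(2 * np.pi * fx) - 2.0 * np.cos(2 * np.pi * fy)
    rhs = np.fft.fft2(alpha * pixels - div)
    return np.real(np.fft.ifft2(rhs / (alpha + lap)))
```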
Validation of the Swiss methane emission inventory by atmospheric observations and inverse modelling
Abstract:
Atmospheric inverse modelling has the potential to provide observation-based estimates of greenhouse gas emissions at the country scale, thereby allowing for an independent validation of national emission inventories. Here, we present a regional-scale inverse modelling study to quantify the emissions of methane (CH₄) from Switzerland, making use of the newly established CarboCount-CH measurement network and a high-resolution Lagrangian transport model. In our reference inversion, prior emissions were taken from the "bottom-up" Swiss Greenhouse Gas Inventory (SGHGI) as published by the Swiss Federal Office for the Environment in 2014 for the year 2012. Overall we estimate national CH₄ emissions to be 196 ± 18 Gg yr⁻¹ for the year 2013 (1σ uncertainty). This result is in close agreement with the recently revised SGHGI estimate of 206 ± 33 Gg yr⁻¹ as reported in 2015 for the year 2012. Results from sensitivity inversions using alternative prior emissions, uncertainty covariance settings, large-scale background mole fractions, two different inverse algorithms (Bayesian and extended Kalman filter), and two different transport models confirm the robustness and independent character of our estimate. According to the latest SGHGI estimate the main CH₄ source categories in Switzerland are agriculture (78 %), waste handling (15 %) and natural gas distribution and combustion (6 %). The spatial distribution and seasonal variability of our posterior emissions suggest an overestimation of agricultural CH₄ emissions by 10 to 20 % in the most recent SGHGI, which is likely due to an overestimation of emissions from manure handling. Urban areas do not appear as emission hotspots in our posterior results, suggesting that leakages from natural gas distribution are only a minor source of CH₄ in Switzerland. This is consistent with rather low emissions of 8.4 Gg yr⁻¹ reported by the SGHGI but inconsistent with the much higher value of 32 Gg yr⁻¹ implied by the EDGARv4.2 inventory for this sector. Increased CH₄ emissions (up to 30 % compared to the prior) were deduced for the north-eastern parts of Switzerland. This feature was common to most sensitivity inversions, which is a strong indicator that it is a real feature and not an artefact of the transport model and the inversion system. However, it was not possible to assign an unambiguous source process to the region. The observations of the CarboCount-CH network provided invaluable and independent information for the validation of the national bottom-up inventory. Similar systems need to be sustained to provide independent monitoring of future climate agreements.
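The Bayesian step at the core of such a regional inversion is standard and compact; the sketch below is the textbook analytical solution for a linearised transport operator, with placeholder matrices, not the CarboCount-CH system itself:

```python
import numpy as np

def bayesian_inversion(H, y, x_prior, B, R):
    """Posterior mean and covariance for y = H x + noise, with prior
    x ~ N(x_prior, B) and observation error covariance R."""
    S = H @ B @ H.T + R                     # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = B - K @ H @ B                  # posterior covariance
    return x_post, P_post
```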
Abstract:
In voice and alignment typology, a categorical distinction is generally made between inverse systems on the one hand and symmetrical voice systems on the other. A major reason for distinguishing between these two types is the assumption that inverse systems are governed by a hierarchy involving grammatical, semantic, and ontological criteria, while symmetrical voice systems are based on discourse-pragmatic factors. However, the two types also have several important properties in common, in particular the fact that they have more than one nonderived transitive construction. Based on data from three native languages of South America, we show that the line between the two types is not always easy to draw, and that features of the inverse type can coexist with those of the symmetrical-voice type in the same language.