11 results for Over sampling

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

60.00%

Publisher:

Abstract:

In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach, and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis, to ensure that the error of each pixel value falls below a predefined threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, in an attempt to achieve an optimal trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies, by ensuring that we only sample densely those regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
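As a rough illustration of the greedy loop described above — not the thesis implementation — the following Python sketch alternates reconstruction with a small isotropic Gaussian filterbank and error-proportional sample distribution. The error proxy, the `sample_pixels` callable, and all names are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruct(mean_img, var_img, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Per pixel, keep the Gaussian-filtered estimate with the lowest
    rMSE proxy (a heuristic variance term plus a squared-bias proxy)."""
    best = mean_img.copy()
    best_err = var_img / (mean_img**2 + 1e-3)            # unfiltered proxy
    for s in sigmas:
        smoothed = gaussian_filter(mean_img, s)
        var_f = gaussian_filter(var_img, s) / (1 + 4*np.pi*s**2)
        bias2 = (smoothed - mean_img)**2
        err = (var_f + bias2) / (smoothed**2 + 1e-3)
        m = err < best_err
        best[m], best_err[m] = smoothed[m], err[m]
    return best, best_err

def adaptive_render(sample_pixels, shape, init_spp=4, rounds=8, batch=4096):
    """Greedy loop: reconstruct, then route the next batch of samples to
    the pixels with the largest residual error estimate.
    `sample_pixels(counts)` must return per-pixel running (mean, var)."""
    counts = np.full(shape, init_spp, dtype=int)
    mean_img, var_img = sample_pixels(counts)
    for _ in range(rounds):
        _, err = reconstruct(mean_img, var_img)
        extra = np.floor(batch * err / err.sum()).astype(int)
        counts += extra
        mean_img, var_img = sample_pixels(counts)        # re-render at new rates
    return reconstruct(mean_img, var_img)[0]
```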

Relevance:

30.00%

Publisher:

Abstract:

The Advanced Very High Resolution Radiometer (AVHRR) carried on board the National Oceanic and Atmospheric Administration (NOAA) and the Meteorological Operational Satellite (MetOp) polar orbiting satellites is the only instrument offering more than 25 years of satellite data to analyse aerosols on a daily basis. The present study assessed a modified AVHRR aerosol optical depth τa retrieval over land for Europe. The algorithm could also be applied to other parts of the world with surface characteristics similar to those of Europe; only the aerosol properties would have to be adapted to the new region. The initial approach used a relationship between Sun photometer measurements from the Aerosol Robotic Network (AERONET) and the satellite data to post-process the retrieved τa. Herein a quasi-stand-alone procedure, which is more suitable for the pre-AERONET era, is presented. In addition, the estimation of surface reflectance, the aerosol model, and other processing steps have been adapted. The method's cross-platform applicability was tested by validating τa from NOAA-17 and NOAA-18 AVHRR at 15 AERONET sites in Central Europe (40.5° N–50° N, 0° E–17° E) from August 2005 to December 2007. Furthermore, the accuracy of the AVHRR retrieval was compared with products from two newer instruments, the Medium Resolution Imaging Spectrometer (MERIS) on board the Environmental Satellite (ENVISAT) and the Moderate Resolution Imaging Spectroradiometer (MODIS) on board Aqua/Terra. Considering the linear correlation coefficient R, the AVHRR results were similar to those of MERIS, with an even lower root mean square error (RMSE). Not surprisingly, MODIS, with its high spectral coverage, gave the highest R and lowest RMSE. Regarding monthly averaged τa, the results were ambiguous. Focusing on small-scale structures, R was reduced for all sensors, whereas the RMSE increased substantially only for MERIS. Regarding larger areas like Central Europe, the error statistics were similar to those of the individual match-ups. This was mainly explained by sampling issues. With the successful validation of AVHRR, we are now able to concentrate on our large data archive dating back to 1985. This is a unique opportunity for both climate and air pollution studies over land surfaces.
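A minimal sketch of the kind of match-up statistics used in such validations (linear correlation coefficient R, RMSE, and bias between collocated satellite and AERONET τa); the function name and the pairwise NaN handling are illustrative assumptions, not the study's code.

```python
import numpy as np

def matchup_stats(tau_satellite, tau_aeronet):
    """Linear correlation coefficient R, RMSE, and mean bias for
    collocated aerosol optical depth match-ups (NaNs dropped pairwise)."""
    sat, ref = np.asarray(tau_satellite), np.asarray(tau_aeronet)
    ok = ~(np.isnan(sat) | np.isnan(ref))
    sat, ref = sat[ok], ref[ok]
    r = np.corrcoef(sat, ref)[0, 1]
    rmse = np.sqrt(np.mean((sat - ref) ** 2))
    bias = np.mean(sat - ref)
    return r, rmse, bias
```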

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: For almost 30 years, phosphatidylethanol (PEth) has been known as a direct marker of alcohol consumption. This marker indicates consumption of high amounts over a longer time period, but it has also been detected after a single high intake of ethanol (EtOH). The aim of this study was to obtain further information about the formation and elimination of PEth 16:0/18:1 by simulating extensive drinking. METHODS: After 3 weeks of alcohol abstinence, 11 test persons drank an amount of EtOH leading to an estimated blood ethanol concentration of 1 g/kg on each of 5 successive days. After the drinking episode, they stayed abstinent for 16 days with regular blood sampling. PEth 16:0/18:1 analysis was performed using liquid chromatography-tandem mass spectrometry (high-performance liquid chromatography 1100 system and QTrap 2000 triple quadrupole linear ion trap mass spectrometer). Blood alcohol values were obtained using a standardized method with a headspace gas chromatography-flame ionization detector. RESULTS: Maximum measured concentrations of EtOH were 0.99 to 1.83 g/kg (mean 1.32 g/kg). These values were reached 1 to 3 hours after the start of drinking (mean 1.9 hours). For comparison, 10 of 11 volunteers had detectable PEth 16:0/18:1 values 1 hour after the start of drinking, ranging from 45 to 138 ng/ml. Over the following days, concentrations of PEth 16:0/18:1 increased continuously and reached maximum concentrations of 74 to 237 ng/ml between days 3 and 6. CONCLUSIONS: This drinking experiment led to measurable PEth concentrations. However, PEth 16:0/18:1 concentrations stayed rather low compared with those of alcohol abusers from previous studies.
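The abstract does not state how the ethanol dose was estimated; assuming the standard Widmark relation, a hedged sketch of the dose calculation might look like this (the function name and example numbers are hypothetical, not the study's protocol):

```python
def widmark_dose(target_bac_g_per_kg, body_weight_kg, r_factor):
    """Grams of pure ethanol needed to reach a target blood alcohol
    concentration, per the Widmark formula, ignoring elimination
    during drinking. r_factor: Widmark distribution factor
    (roughly 0.7 for men, 0.6 for women)."""
    return target_bac_g_per_kg * body_weight_kg * r_factor

# e.g. a 75 kg man targeting 1 g/kg: about 52.5 g of pure ethanol
grams = widmark_dose(1.0, 75, 0.7)
ml_of_40_percent_spirit = grams / (0.40 * 0.789)  # ethanol density 0.789 g/ml
```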

Relevance:

30.00%

Publisher:

Abstract:

Background: The lectin pathway of complement activation, in particular mannose-binding lectin (MBL), has been extensively investigated over recent years. So far, studies have been exclusively based on venous samples. The aim of this study was to investigate whether measurements of lectin pathway proteins obtained by capillary sampling agree with those from venous samples. Methods: Prospective study including 31 infants who were admitted with suspected early-onset sepsis. Lectin pathway proteins were measured in simultaneously obtained capillary and venous samples. Bland–Altman plots of logarithmized results were constructed, and the mean capillary-to-venous ratios (ratiocap/ven) were calculated with their 95% confidence intervals (CI). Results: The agreement between capillary and venous sampling was very high for MBL (mean ratiocap/ven, 1.01; 95% CI, 0.85–1.19). Similarly high agreement was observed for H-ficolin (mean ratiocap/ven, 1.02; 95% CI, 0.72–1.44), MASP-2 (1.04; 0.59–1.84), MASP-3 (0.96; 0.71–1.28), and MAp44 (1.01; 0.82–1.25), while the agreement was moderate for M-ficolin (mean ratiocap/ven, 0.78; 95% CI, 0.27–2.28). Conclusions: The results of this study show excellent agreement between capillary and venous samples for most lectin pathway proteins. Except for M-ficolin, small-volume capillary samples can thus be used when assessing lectin pathway proteins in neonates and young children.
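A minimal sketch of how a mean capillary-to-venous ratio and a 95% interval can be derived on the log scale, in the spirit of the Bland–Altman analysis described above; this is an illustrative reconstruction, not the study's analysis code.

```python
import numpy as np
from scipy import stats

def mean_ratio_ci(capillary, venous, alpha=0.05):
    """Geometric mean capillary-to-venous ratio with a t-based 95% CI,
    computed on the log scale and back-transformed. (Bland-Altman 95%
    limits of agreement would instead use mean +/- 1.96 * SD of the
    log differences.)"""
    d = np.log(np.asarray(capillary) / np.asarray(venous))
    half = stats.t.ppf(1 - alpha / 2, len(d) - 1) * stats.sem(d)
    return np.exp(d.mean()), (np.exp(d.mean() - half), np.exp(d.mean() + half))
```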

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT data have to be represented by a continuous analytic function that respects the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or higher-order polynomial interpolations, which do not fulfill all of the above-mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter controls the behavior of the interpolation function. Second, the interpolated data have to be re-distributed onto the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%–20%, while the parameter of the new algorithm can be adjusted to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may produce significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
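The paper's single-parameter Hermitian scheme is not spelled out in the abstract; the sketch below shows a closely related integral-conserving, positivity-preserving approach using a monotone Hermite (PCHIP) spline on the cumulative integral, as an illustration of the stated design goals rather than the paper's algorithm.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def conservative_resample(edges_in, values_in, edges_out):
    """Integral-conserving re-sampling of bin-averaged data: build the
    cumulative integral, interpolate it with a monotone Hermite spline
    (PCHIP), and difference it on the new bin edges. Monotonicity of
    the cumulative curve keeps positive data positive."""
    widths = np.diff(edges_in)
    cum = np.concatenate(([0.0], np.cumsum(values_in * widths)))
    spline = PchipInterpolator(edges_in, cum)          # monotone Hermite
    cum_out = spline(np.clip(edges_out, edges_in[0], edges_in[-1]))
    return np.diff(cum_out) / np.diff(edges_out)       # new bin averages
```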

Relevance:

30.00%

Publisher:

Abstract:

Despite widespread use of species-area relationships (SARs), dispute remains over the most representative SAR model. Using data on small-scale SARs of Estonian dry grassland communities, we address three questions: (1) Which model describes these SARs best when known artifacts are excluded? (2) How do deviating sampling procedures (marginal instead of central position of the smaller plots in relation to the largest plot; single values instead of average values; randomly located subplots instead of nested subplots) influence the properties of the SARs? (3) Are those effects likely to bias the selection of the best model? Our general dataset consisted of 16 series of nested plots (1 cm²–100 m², any-part system), each of which comprised five series of subplots located in the four corners and the centre of the 100-m² plot. Data for the three pairs of compared sampling designs were generated from this dataset by subsampling. Five function types (power, quadratic power, logarithmic, Michaelis-Menten, Lomolino) were fitted with non-linear regression. In some of the communities, we found extremely high species densities (including bryophytes and lichens), namely up to eight species in 1 cm² and up to 140 species in 100 m², which appear to be the highest documented values on these scales. For SARs constructed from nested-plot average-value data, the regular power function generally was the best model, closely followed by the quadratic power function, while the logarithmic and Michaelis-Menten functions performed poorly throughout. The relative fit of the latter two models increased significantly relative to the respective best model when the single-value or random-sampling method was applied, but the power function normally remained far superior. These results confirm the hypothesis that both single-value and random-sampling approaches cause artifacts by increasing stochasticity in the data, which can lead to the selection of inappropriate models.
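For illustration, a hedged sketch of fitting some of the named SAR models by non-linear regression and comparing them via AIC; the parameterizations follow common usage and are not necessarily those of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate species-area models (S = species richness, A = area).
def power(A, c, z):               return c * A**z
def quad_power(A, c, z1, z2):     return c * A**(z1 + z2 * np.log(A))
def logarithmic(A, c, z):         return c + z * np.log(A)
def michaelis_menten(A, smax, k): return smax * A / (k + A)

def fit_sar(model, area, richness, p0):
    """Non-linear least-squares fit; returns parameters and AIC so
    models with different parameter counts can be compared."""
    popt, _ = curve_fit(model, area, richness, p0=p0, maxfev=20000)
    resid = richness - model(area, *popt)
    n, k = len(area), len(popt)
    aic = n * np.log(np.sum(resid**2) / n) + 2 * k
    return popt, aic
```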

Relevance:

30.00%

Publisher:

Abstract:

Tree rings offer one of the few possibilities to empirically quantify and reconstruct forest growth dynamics over years to millennia. Contemporaneously with the growing scientific community employing tree-ring parameters, recent research has suggested that commonly applied sampling designs (i.e. how and which trees are selected for dendrochronological sampling) may introduce considerable biases in quantifications of forest responses to environmental change. To date, a systematic assessment of the consequences of sampling design on dendroecological and dendroclimatological conclusions has not been performed. Here, we investigate potential biases by sampling a large population of trees and replicating diverse sampling designs. This is achieved by retroactively subsetting the population and specifically testing for biases emerging in climate reconstruction, growth response to climate variability, long-term growth trends, and quantification of forest productivity. We find that commonly applied sampling designs can impart systematic biases of varying magnitude to any type of tree-ring-based investigation, independent of the total number of samples considered. Quantifications of forest growth and productivity are particularly susceptible to biases, whereas growth responses to short-term climate variability are less affected by the choice of sampling design. The world's most frequently applied sampling design, which focuses on dominant trees only, can bias absolute growth rates by up to 459% and trends by in excess of 200%. Our findings challenge paradigms in which a subset of samples is typically considered representative of the entire population. The only two sampling strategies meeting the requirements for all types of investigations are (i) sampling all individuals within a fixed area, and (ii) fully randomized selection of trees. This result argues for the consistent implementation of a widely applicable sampling design, to simultaneously reduce uncertainties in tree-ring-based quantifications of forest growth and increase the comparability of datasets beyond individual studies, investigators, laboratories, and geographical boundaries.
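A toy simulation (entirely synthetic data, not the study's) illustrating why a dominant-trees-only design inflates estimated growth relative to fully randomized selection:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand: growth rate loosely correlated with tree size, so
# selecting dominant (largest) trees oversamples fast growers.
n = 1000
size = rng.lognormal(mean=3.0, sigma=0.5, size=n)        # stem diameter
growth = 0.02 * size * rng.lognormal(0.0, 0.3, size=n)   # ring width

true_mean = growth.mean()

# Design 1: dominant trees only (largest 10% by size)
dominant = growth[np.argsort(size)[-n // 10:]]
# Design 2: fully randomized selection of the same sample size
random_pick = rng.choice(growth, size=n // 10, replace=False)

print(f"population mean growth : {true_mean:.3f}")
print(f"dominant-only estimate : {dominant.mean():.3f} "
      f"({100 * (dominant.mean() / true_mean - 1):+.0f}% bias)")
print(f"random-sample estimate : {random_pick.mean():.3f}")
```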

Relevance:

30.00%

Publisher:

Abstract:

This article provides importance sampling algorithms for computing the probabilities of various types of ruin of spectrally negative Lévy risk processes: ruin over the infinite time horizon, ruin within a finite time horizon, and ruin past a finite time horizon. For the special case of the compound Poisson process perturbed by diffusion, algorithms are provided for computing probabilities of ruin by creeping (i.e. induced by the diffusion term) and by jumping (i.e. by a claim amount). It is shown that these algorithms have either bounded relative error or logarithmic efficiency as t, x → ∞, where t > 0 is the time horizon and x > 0 is the starting point of the risk process, with y = t/x held constant and assumed either below or above a certain constant.
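As a hedged illustration of the general technique — not the paper's algorithms, which cover spectrally negative Lévy processes and the diffusion-perturbed case — the following sketch estimates the classical infinite-horizon ruin probability for a compound Poisson process with exponential claims by exponential tilting at the Lundberg adjustment coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)

def ruin_prob_is(x, lam=1.0, mu=1.5, c=1.0, n_paths=10_000):
    """Importance-sampling estimate of the infinite-horizon ruin
    probability of a compound Poisson risk process (no diffusion):
    premiums at rate c, claims ~ Exp(mu) arriving at rate lam.
    Paths are simulated under the exponentially tilted measure at the
    adjustment coefficient gamma = mu - lam/c, under which ruin is
    certain, and each path is weighted by exp(-gamma * S_tau)."""
    gamma = mu - lam / c                 # Lundberg adjustment coefficient
    lam_t = mu * c                       # tilted claim arrival rate
    mu_t = lam / c                       # tilted claim rate (still Exp)
    est = np.empty(n_paths)
    for i in range(n_paths):
        t, s = 0.0, 0.0                  # time, claim-surplus S(t)
        while s <= x:                    # ruin when S first exceeds x
            w = rng.exponential(1.0 / lam_t)
            t += w
            s += rng.exponential(1.0 / mu_t) - c * w
        est[i] = np.exp(-gamma * s)      # likelihood ratio at ruin
    return est.mean(), est.std(ddof=1) / np.sqrt(n_paths)

# Exact value for exponential claims: psi(x) = (lam/(c*mu)) * exp(-gamma*x)
x = 10.0
print(ruin_prob_is(x), (1.0 / 1.5) * np.exp(-0.5 * x))
```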

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: Respondent-driven sampling (RDS) is a new data collection methodology used to estimate characteristics of hard-to-reach groups, such as the HIV prevalence in drug users. Many national public health systems and international organizations rely on RDS data. However, RDS reporting quality and available reporting guidelines are inadequate. We carried out a systematic review of RDS studies and present Strengthening the Reporting of Observational Studies in Epidemiology for RDS Studies (STROBE-RDS), a checklist of essential items to present in RDS publications, justified by an explanation and elaboration document. STUDY DESIGN AND SETTING: We searched the MEDLINE (1970-2013), EMBASE (1974-2013), and Global Health (1910-2013) databases to assess the number and geographical distribution of published RDS studies. STROBE-RDS was developed based on the STROBE guidelines, following the Guidance for Developers of Health Research Reporting Guidelines. RESULTS: RDS has been used in over 460 studies from 69 countries, including the USA (151 studies), China (70), and India (32). STROBE-RDS includes modifications to 12 of the 22 items on the STROBE checklist. The two key areas that required modification concerned the selection of participants and the statistical analysis of the sample. CONCLUSION: STROBE-RDS seeks to enhance the transparency and utility of research using RDS. If widely adopted, STROBE-RDS should improve global infectious disease public health decision making.
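For context on the "statistical analysis of the sample" item, a minimal sketch of the widely used Volz-Heckathorn (RDS-II) estimator, which weights respondents by inverse network degree; this is background illustration, not part of STROBE-RDS:

```python
import numpy as np

def rds_ii_estimate(outcome, degree):
    """Volz-Heckathorn (RDS-II) prevalence estimator: respondents are
    weighted by the inverse of their reported network degree, to
    correct for the higher inclusion probability of well-connected
    people in the referral chains."""
    w = 1.0 / np.asarray(degree, dtype=float)
    return np.sum(w * np.asarray(outcome)) / np.sum(w)

# e.g. HIV status (0/1) and self-reported network sizes
prevalence = rds_ii_estimate([1, 0, 0, 1, 0], [20, 5, 8, 40, 10])
```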

Relevance:

30.00%

Publisher:

Abstract:

The use of hindcast climatic data is widespread across multiple applications. However, this approach requires the support of a validation process so that its drawbacks, and therefore confidence levels, can be assessed. In this work, the strategy relies on an hourly wind database resulting from a dynamical downscaling experiment with a spatial resolution of 10 km, covering the Iberian Peninsula (IP), driven by the ERA40 reanalysis (1959-2001) extended by European Centre for Medium-Range Weather Forecasts (ECMWF) analyses (2002-2007), and comprises two main steps. Initially, the skill of the simulation is evaluated by comparison with a quality-tested observational database (Lorente-Plazas et al., 2014) at local and regional scales. The results show that the model is able to portray the main features of the wind over the IP: annual cycles, wind roses, spatial and temporal variability, as well as the response to different circulation types. In addition, there is a significant added value of the simulation with respect to the driving conditions, especially in regions with complex orography. However, some problems are evident, the major drawback being the systematic overestimation of the wind speed, which is mainly attributed to a misrepresentation of frictional forces. The model skill is also lower along the Mediterranean coast and over the Pyrenees. In a second phase, the high spatio-temporal resolution of the pseudo-real wind database is used to explore the limitations of the observational database. It is shown that missing values do not affect the characterisation of the wind climate over the IP, while the length of the observational period (6 years) is sufficient for most regions, with only a few exceptions. The spatial distribution of the observational sampling schemes should be enhanced to improve the assessment of all IP wind regimes, particularly in some mountainous areas.
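A minimal sketch of one common way to quantify the "added value" of the downscaling over the driving reanalysis against station observations; this specific metric is an assumption, not necessarily the one used in the study:

```python
import numpy as np

def added_value(obs, downscaled, driving):
    """Relative added value of the 10 km downscaling over the driving
    reanalysis: positive when the regional model reduces the RMSE
    against station observations."""
    obs = np.asarray(obs)
    rmse = lambda pred: np.sqrt(np.mean((np.asarray(pred) - obs) ** 2))
    return 1.0 - rmse(downscaled) / rmse(driving)
```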

Relevance:

30.00%

Publisher:

Abstract:

Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
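As a toy example of the "a posteriori" strategy described above — estimating the error of several reconstruction filters and selecting the best one locally — the following sketch scores Gaussian candidates against an independent half buffer; the half-buffer scoring and all names are illustrative assumptions, not a specific surveyed method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def select_filter_a_posteriori(half_a, half_b, sigmas=(1.0, 2.0, 4.0)):
    """Render two independent half buffers, filter one with each
    candidate, and score it against the other half (a noisy but
    unbiased reference). Per pixel, keep the candidate with the
    lowest smoothed squared error."""
    filtered = [gaussian_filter(half_a, s) for s in sigmas]
    # smooth the error estimate itself to reduce selection noise
    errors = [gaussian_filter((f - half_b) ** 2, 2.0) for f in filtered]
    choice = np.argmin(np.stack(errors), axis=0)
    out = np.choose(choice, np.stack(filtered))
    return out, choice
```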