54 results for kernel estimators


Relevance: 10.00%

Abstract:

In astrophysical systems, radiation-matter interactions are important in transferring energy and momentum between the radiation field and the surrounding material. This coupling often makes it necessary to consider the role of radiation when modelling the dynamics of astrophysical fluids. During the last few years, there have been rapid developments in the use of Monte Carlo methods for numerical radiative transfer simulations. Here, we present an approach to radiation hydrodynamics that is based on coupling Monte Carlo radiative transfer techniques with finite-volume hydrodynamical methods in an operator-split manner. In particular, we adopt an indivisible packet formalism to discretize the radiation field into an ensemble of Monte Carlo packets and employ volume-based estimators to reconstruct the radiation field characteristics. In this paper the numerical tools of this method are presented and their accuracy is verified in a series of test calculations. Finally, as a practical example, we use our approach to study the influence of the radiation-matter coupling on the homologous expansion phase and the bolometric light curve of Type Ia supernova explosions. © 2012 The Authors Monthly Notices of the Royal Astronomical Society © 2012 RAS.
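
The volume-based (path-length) estimators mentioned in this abstract are straightforward to illustrate. The Python sketch below is a minimal, hypothetical example, not the authors' code: it streams isotropic packets across a 1D grid and accumulates the Lucy-type estimate of the radiation energy density in each cell, u ≈ (1/(c V Δt)) Σ ε Δs, where the sum runs over every packet path segment Δs inside the cell. Grid, packet energy and time-step values are made up, and absorption/scattering are omitted so that only the estimator logic is shown.

```python
import numpy as np

# Minimal sketch of a volume-based (path-length) Monte Carlo estimator on a 1D grid.
# The setup is hypothetical; packets stream freely (no interactions) so that only
# the accumulation of the estimator is illustrated.

C_LIGHT = 2.998e10  # cm/s

def run_packets(n_packets, edges, eps_packet, dt, rng):
    """Propagate packets with random direction cosines across a 1D slab and
    accumulate the path-length estimator of the radiation energy density."""
    n_cells = len(edges) - 1
    path_sum = np.zeros(n_cells)            # sum of eps * path length per cell
    for _ in range(n_packets):
        x = edges[0]                        # inject at the left boundary
        mu = rng.uniform(1e-3, 1.0)         # direction cosine (moving right)
        for i in range(n_cells):
            ds = (edges[i + 1] - x) / mu    # slant path length through cell i
            path_sum[i] += eps_packet * ds
            x = edges[i + 1]
    volumes = np.diff(edges)                # cell "volumes" (per unit area)
    # Lucy-type estimator: u = (1 / (c V dt)) * sum(eps * ds)
    return path_sum / (C_LIGHT * volumes * dt)

rng = np.random.default_rng(42)
edges = np.linspace(0.0, 1.0e15, 11)        # 10 cells, hypothetical slab
u = run_packets(10_000, edges, eps_packet=1.0e40, dt=1.0e4, rng=rng)
print(u)
```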

Relevance: 10.00%

Abstract:

A three-dimensional Monte Carlo code for modelling radiation transport in Type Ia supernovae is described. In addition to tracking Monte Carlo quanta to follow the emission, scattering and deposition of radiative energy, a scheme involving volume-based Monte Carlo estimators is used to allow properties of the emergent radiation field to be extracted for specific viewing angles in a multidimensional structure. This eliminates the need to compute spectra or light curves by angular binning of emergent quanta. The code is applied to two test problems to illustrate consequences of multidimensional structure on the modelling of light curves. First, elliptical models are used to quantify how large-scale asphericity can introduce angular dependence to light curves. Secondly, a model which incorporates complex structural inhomogeneity, as predicted by modern explosion models, is used to investigate how such structure may affect light-curve properties. © 2006 RAS.
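
Extracting a light curve for a specific viewing angle without binning escaping packets can be illustrated with a peel-off (next-event) style estimator, a standard Monte Carlo radiative-transfer device. The sketch below is hypothetical and is not the code described above: at every interaction it adds an attenuation-weighted contribution toward a fixed observer direction, so every event informs the chosen line of sight. The helper optical_depth_to_observer is a placeholder for a real ray-tracing routine.

```python
import numpy as np

# Hypothetical peel-off estimator for a fixed observer direction n_obs. Each
# interaction event contributes eps * exp(-tau_los) / (4*pi) per steradian,
# assuming isotropic re-emission. optical_depth_to_observer is a placeholder.

def optical_depth_to_observer(position, n_obs, grid):
    # Placeholder: would integrate opacity * density along the ray from
    # `position` in direction `n_obs` through `grid`. Returns a fake value here.
    return 0.1

def peel_off_luminosity(events, n_obs, grid):
    """events: iterable of (position, packet_energy) interaction records."""
    l_obs = 0.0
    for position, eps in events:
        tau = optical_depth_to_observer(position, n_obs, grid)
        l_obs += eps * np.exp(-tau) / (4.0 * np.pi)   # per steradian
    return l_obs

# Toy usage with made-up interaction events:
events = [(np.zeros(3), 1.0e40), (np.ones(3), 0.5e40)]
n_obs = np.array([0.0, 0.0, 1.0])
print(peel_off_luminosity(events, n_obs, grid=None))
```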

Relevance: 10.00%

Abstract:

We present results for a suite of 14 three-dimensional, high-resolution hydrodynamical simulations of delayed-detonation models of Type Ia supernova (SN Ia) explosions. This model suite comprises the first set of three-dimensional SN Ia simulations with detailed isotopic yield information. As such, it may serve as a database for Chandrasekhar-mass delayed-detonation model nucleosynthetic yields and for deriving synthetic observables such as spectra and light curves. We employ a physically motivated, stochastic model based on turbulent velocity fluctuations and fuel density to calculate in situ the deflagration-to-detonation transition probabilities. To obtain different strengths of the deflagration phase and thereby different degrees of pre-expansion, we have chosen a sequence of initial models with 1, 3, 5, 10, 20, 40, 100, 150, 200, 300 and 1600 (two different realizations) ignition kernels in a hydrostatic white dwarf with a central density of 2.9 × 10^9 g cm^-3, as well as one high central density (5.5 × 10^9 g cm^-3) and one low central density (1.0 × 10^9 g cm^-3) rendition of the 100 ignition kernel configuration. For each simulation, we determined detailed nucleosynthetic yields by postprocessing 10^6 tracer particles with a 384 nuclide reaction network. All delayed-detonation models result in explosions unbinding the white dwarf, producing a range of 56Ni masses from 0.32 to 1.11 M⊙. As a general trend, the models predict that the stable neutron-rich iron-group isotopes are not found at the lowest velocities, but rather at intermediate velocities (~3000-10 000 km s^-1) in a shell surrounding a Ni-rich core. The models further predict relatively low-velocity oxygen and carbon, with typical minimum velocities around 4000 and 10 000 km s^-1, respectively. © 2012 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.

Relevance: 10.00%

Abstract:

This article investigates to what extent the worldwide increase in body mass index (BMI) has been affected by economic globalization and inequality. We used time-series and longitudinal cross-national analysis of 127 countries from 1980 to 2008. Data on mean adult BMI were obtained from the Global Burden of Metabolic Risk Factors of Chronic Diseases Collaborating Group. Globalization was measured using the Swiss Economic Institute (KOF) index of economic globalization. Economic inequality between countries was measured with the mean difference in gross domestic product per capita purchasing power parity in international dollars. Economic inequality within countries was measured using the Gini index from the Standardized World Income Inequality Database. Other covariates including poverty, population size, urban population, openness to trade and foreign direct investment were taken from the World Development Indicators (WDI) database. Time-series regression analyses showed that the global increase in BMI is positively associated with both the index of economic globalization and inequality between countries, after adjustment for covariates. Longitudinal panel data analyses showed that the association between economic globalization and BMI is robust after controlling for all covariates and using different estimators. The association between economic inequality within countries and BMI, however, was significant only among high-income nations. More research is needed to study the pathways between economic globalization and BMI. These findings, however, contribute to explaining how contemporary globalization can be reformed to promote better health and control the global obesity epidemic. © 2013 Copyright Taylor and Francis Group, LLC.

Relevance: 10.00%

Abstract:

Summary: We present a new R package, diveRsity, for the calculation of various diversity statistics, including common diversity partitioning statistics (θ, G_ST) and population differentiation statistics (D_Jost, G'_ST, χ² test for population heterogeneity), among others. The package calculates these estimators along with their respective bootstrapped confidence intervals at the locus, sample-population-pairwise and global levels. Various plotting tools are also provided for a visual evaluation of estimated values, allowing users to critically assess the validity and significance of statistical tests from a biological perspective. diveRsity has a set of unique features, which facilitate the use of an informed framework for assessing the validity of the use of traditional F-statistics for the inference of demography, with reference to specific marker types, particularly focusing on highly polymorphic microsatellite loci. However, the package can be readily used for other co-dominant marker types (e.g. allozymes, SNPs). Detailed examples of usage and descriptions of package capabilities are provided. The examples demonstrate useful strategies for the exploration of data and interpretation of results generated by diveRsity. Additional online resources for the package are also described, including a GUI web app version intended for those with more limited experience using R for statistical analysis. © 2013 British Ecological Society.
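
As a language-neutral illustration of what "an estimator with a bootstrapped confidence interval" involves (this is not diveRsity itself, which is an R package), the Python sketch below computes Nei's G_ST for one locus from haploid allele samples per population and a percentile bootstrap interval obtained by resampling individuals within populations. The input data are made up.

```python
import numpy as np

# Hypothetical sketch (not the diveRsity package): Nei's G_ST at one locus from
# haploid allele samples, with a percentile bootstrap confidence interval.

def gst(pops, n_alleles):
    """pops: list of 1-D integer arrays of allele indices, one array per population."""
    freqs = np.array([np.bincount(p, minlength=n_alleles) / len(p) for p in pops])
    h_s = np.mean(1.0 - np.sum(freqs ** 2, axis=1))   # mean within-population heterozygosity
    p_bar = freqs.mean(axis=0)                        # pooled (equal-weight) allele frequencies
    h_t = 1.0 - np.sum(p_bar ** 2)                    # total heterozygosity
    return (h_t - h_s) / h_t

def bootstrap_ci(pops, n_alleles, n_boot=2000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    reps = []
    for _ in range(n_boot):
        resampled = [rng.choice(p, size=len(p), replace=True) for p in pops]
        reps.append(gst(resampled, n_alleles))
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# Made-up data: two populations, three alleles at one locus.
pops = [np.array([0, 0, 1, 1, 1, 2, 0, 1]), np.array([2, 2, 2, 1, 0, 2, 2, 1])]
print(gst(pops, n_alleles=3), bootstrap_ci(pops, n_alleles=3))
```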

Relevance: 10.00%

Abstract:

Let X be a connected, noetherian scheme and A a sheaf of Azumaya algebras on X, which is a locally free O_X-module of rank a. We show that the kernel and cokernel of K_i(X) → K_i(A) are torsion groups with exponent a^m for some m and any i ≥ 0, when X is regular or X is of dimension d with an ample sheaf (in this case m = d + 1). As a consequence, K_i(X, Z/m) ≅ K_i(A, Z/m) for any m relatively prime to a. © 2013 Copyright Taylor and Francis Group, LLC.

Relevance: 10.00%

Abstract:

Microwave heating reduces the preparation time and improves the adsorption quality of activated carbon. In this study, activated carbon was prepared by impregnation of palm kernel fiber with phosphoric acid followed by microwave activation. Three different types of activated carbon were prepared, having high surface areas of 872 m2 g-1, 1256 m2 g-1, and 952 m2 g-1 and pore volumes of 0.598 cc g-1, 1.010 cc g-1, and 0.778 cc g-1, respectively. The combined effects of the different process parameters, such as the initial adsorbate concentration, pH, and temperature, on adsorption efficiency were explored with the help of a Box-Behnken design for response surface methodology (RSM). The adsorption rate could be expressed by a polynomial equation as a function of the independent variables. The hexavalent chromium adsorption rate was found to be 19.1 mg g-1 at the optimized process parameters, i.e., an initial concentration of 60 mg L-1, a pH of 3, and an operating temperature of 50 °C. Adsorption of Cr(VI) by the prepared activated carbon was spontaneous and followed second-order kinetics. The adsorption mechanism can be described by the Freundlich isotherm model. The prepared activated carbon has demonstrated performance comparable to other available activated carbons for the adsorption of Cr(VI).
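
The Freundlich isotherm mentioned at the end, q_e = K_F · C_e^(1/n), is easy to fit to equilibrium data. The Python sketch below uses scipy.optimize.curve_fit on made-up concentration/uptake pairs; the numbers are illustrative only and are not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sketch: fit the Freundlich isotherm q_e = K_F * C_e**(1/n) to
# equilibrium adsorption data. The data points are made up, NOT measurements
# from the palm-kernel-fiber study.

def freundlich(c_e, k_f, n):
    return k_f * c_e ** (1.0 / n)

c_e = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # mg/L, residual Cr(VI) (illustrative)
q_e = np.array([4.1, 6.8, 9.5, 13.2, 18.0])    # mg/g, uptake (illustrative)

(k_f, n), _ = curve_fit(freundlich, c_e, q_e, p0=(1.0, 2.0))
print(f"K_F = {k_f:.2f} (mg/g)(L/mg)^(1/n), n = {n:.2f}")
```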

Relevance: 10.00%

Abstract:

Technical market indicators are tools used by technical analysts to understand trends in trading markets. Technical (market) indicators are often calculated in real time, as trading progresses. This paper presents a mathematically founded framework for calculating technical indicators. Our framework consists of a domain-specific language for the unambiguous specification of technical indicators, and a runtime system based on Click for computing the indicators. We argue that our solution enhances ease of programming by aligning our domain-specific language with the mathematical description of technical indicators, and that it enables executing programs in kernel space for decreased latency, without exposing the system to users' programming errors.
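
Technical indicators of the kind targeted by such a DSL are typically incremental computations over a price stream. The Python sketch below is a generic illustration of that streaming style, an exponential moving average updated per tick; it is not the paper's DSL or its Click-based runtime, and the tick values are made up.

```python
# Hypothetical sketch of a streaming technical indicator: an exponential moving
# average (EMA) updated on every trade tick.

class EMA:
    """EMA_t = alpha * price_t + (1 - alpha) * EMA_{t-1}, with alpha = 2 / (period + 1)."""

    def __init__(self, period: int):
        self.alpha = 2.0 / (period + 1)
        self.value = None

    def update(self, price: float) -> float:
        if self.value is None:
            self.value = price                       # seed with the first observation
        else:
            self.value += self.alpha * (price - self.value)
        return self.value

ema = EMA(period=10)
for price in [100.0, 101.5, 99.8, 102.3, 103.1]:     # made-up ticks
    print(ema.update(price))
```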

Relevance: 10.00%

Abstract:

Many modeling problems require estimating a scalar output from one or more time series. Such problems are usually tackled by extracting a fixed number of features from the time series (such as their statistical moments), with a consequent loss of information that leads to suboptimal predictive models. Moreover, feature extraction techniques usually make assumptions that are not met in real-world settings (e.g. uniformly sampled time series of constant length), and fail to deliver a thorough methodology for dealing with noisy data. In this paper a methodology based on functional learning is proposed to overcome these problems; the proposed Supervised Aggregative Feature Extraction (SAFE) approach derives continuous, smooth estimates of the time series data (yielding aggregate local information), while simultaneously estimating a continuous shape function that yields optimal predictions. The SAFE paradigm enjoys several properties, such as a closed-form solution, incorporation of first- and second-order derivative information into the regressor matrix, interpretability of the generated functional predictor and the possibility of exploiting the Reproducing Kernel Hilbert Space setting to yield nonlinear predictive models. Simulation studies are provided to highlight the strengths of the new methodology with respect to standard unsupervised feature selection approaches. © 2012 IEEE.
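
A much-simplified version of the functional idea (not the SAFE estimator itself) can be sketched as follows: each irregularly sampled, noisy series is smoothed onto a common Gaussian basis, and a scalar output is then regressed linearly on the basis coefficients, so that the learned weights play the role of a continuous shape function. Basis size, widths, regularization and the toy data below are all arbitrary choices.

```python
import numpy as np

# Hypothetical, simplified sketch of functional feature extraction (not SAFE):
# ridge-regularized projection of each series onto a fixed Gaussian basis, then
# linear regression of a scalar target on the resulting coefficients.

CENTERS = np.linspace(0.0, 1.0, 12)
WIDTH = 0.08

def basis(t):
    """Gaussian basis evaluated at sampling times t (any length, any spacing)."""
    return np.exp(-0.5 * ((t[:, None] - CENTERS[None, :]) / WIDTH) ** 2)

def smooth_coeffs(t, y, lam=1e-3):
    """Ridge-regularized projection of one series onto the basis."""
    phi = basis(t)
    a = phi.T @ phi + lam * np.eye(len(CENTERS))
    return np.linalg.solve(a, phi.T @ y)

def fit_shape_weights(series, targets, lam=1e-2):
    """Stack per-series coefficients and ridge-regress the scalar targets on them."""
    x = np.array([smooth_coeffs(t, y) for t, y in series])
    a = x.T @ x + lam * np.eye(x.shape[1])
    return np.linalg.solve(a, x.T @ np.asarray(targets))

# Toy usage: series of different lengths and samplings, made-up targets.
rng = np.random.default_rng(0)
series = []
for n in (30, 45, 25):
    t = np.sort(rng.uniform(0, 1, n))
    series.append((t, np.sin(6 * t) + 0.1 * rng.normal(size=n)))
targets = [float(y.mean()) for _, y in series]
print(fit_shape_weights(series, targets))
```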

Relevance: 10.00%

Abstract:

In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of the observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that the evaluation is usually computationally intensive and scales poorly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We show how the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
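
The flavour of such identities can be conveyed with the simplest case, namely when only the noise variance is tuned and the kernel matrix K is fixed: after one eigendecomposition K = VΛVᵀ (the O(N^3) overhead), both the log-determinant and the quadratic form of the marginal log-likelihood reduce to O(N) sums over eigenvalues. The Python sketch below is a minimal illustration under that assumption, with synthetic data; it is not the full set of identities (which also cover the Jacobian and Hessian) derived in the paper.

```python
import numpy as np

# Minimal sketch: O(N) re-evaluation of the GP marginal log-likelihood over the
# noise variance, after a one-off O(N^3) eigendecomposition of a fixed kernel
# matrix. Lengthscale changes would require rebuilding K. Data are synthetic.

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 5, 200))
y = np.sin(x) + 0.3 * rng.normal(size=x.size)

def rbf_kernel(x, lengthscale=1.0, variance=1.0):
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# One-off O(N^3) work: eigendecomposition, plus one O(N^2) projection of the targets.
K = rbf_kernel(x)
eigvals, eigvecs = np.linalg.eigh(K)
alpha = eigvecs.T @ y

def log_marginal_likelihood(noise_var):
    """O(N) per call: log|K + sI| = sum log(lam_i + s),
       y^T (K + sI)^{-1} y = sum alpha_i^2 / (lam_i + s)."""
    s = eigvals + noise_var
    return -0.5 * (np.sum(alpha ** 2 / s) + np.sum(np.log(s)) + len(y) * np.log(2 * np.pi))

# Cheap 1-D scan over the noise variance hyperparameter.
grid = np.logspace(-3, 1, 50)
best = grid[np.argmax([log_marginal_likelihood(s) for s in grid])]
print("selected noise variance:", best)
```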

Relevance: 10.00%

Abstract:

Torrefaction-based co-firing in a pulverized coal boiler has been proposed for co-firing large percentages of biomass. A 220 MWe pulverized-coal power plant is simulated using Aspen Plus to fully understand the impact of an additional torrefaction unit on the efficiency of the whole power plant. The studied process includes biomass drying, biomass torrefaction, mill systems, biomass/coal devolatilization and combustion, heat exchange and power generation. Palm kernel shells (PKS) were torrefied at the same residence time but four different temperatures, to prepare four torrefied biomasses with different degrees of torrefaction. During the biomass torrefaction processes, the mass loss properties and released gaseous components have been studied. In addition, process simulations at varying torrefaction degrees and biomass co-firing ratios have been carried out to understand the CO2 emission and electrical efficiency characteristics of the studied torrefaction-based co-firing power plant. According to the experimental results, the mole fractions of CO2 and CO account for 69-91% and 4-27% of the torrefied gases, respectively. The predicted results also showed that the electrical efficiency is reduced when increasing either the torrefaction temperature or the substitution ratio of biomass. Deep torrefaction may not be recommended, because the power saved in biomass grinding is less than the heat consumed by the extra torrefaction process, depending on the heat sources.

Relevance: 10.00%

Abstract:

This paper considers inference from multinomial data and addresses the problem of choosing the strength of the Dirichlet prior under a mean-squared error criterion. We compare the Maximum Likelihood Estimator (MLE) and the most commonly used Bayesian estimators obtained by assuming a prior Dirichlet distribution with non-informative prior parameters, that is, the parameters of the Dirichlet are equal and altogether sum up to the so-called strength of the prior. Under this criterion, the MLE becomes preferable to the Bayesian estimators as the number of categories k of the multinomial increases, because the non-informative Bayesian estimators induce a region of dominance that quickly shrinks as k increases. This can be avoided if the strength of the prior is not kept constant but is decreased with the number of categories. We argue that the strength should decrease at least k times faster than in the usual estimators.
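
The trade-off can be reproduced with a few lines of simulation. The Python sketch below compares, under squared error, the MLE with the posterior-mean estimator under a symmetric Dirichlet prior, (counts + s/k) / (n + s), for a few category counts k and a fixed strength s; all parameter choices are illustrative and are not those of the paper.

```python
import numpy as np

# Hypothetical simulation: mean squared error of the multinomial MLE versus the
# Bayesian posterior-mean estimator under a symmetric Dirichlet prior of
# strength s, p_bayes = (counts + s/k) / (n + s). Parameter choices are
# illustrative only.

def mse(k, n, s, n_trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    err_mle = err_bayes = 0.0
    for _ in range(n_trials):
        p = rng.dirichlet(np.ones(k))        # a random "true" distribution
        counts = rng.multinomial(n, p)
        p_mle = counts / n
        p_bayes = (counts + s / k) / (n + s)
        err_mle += np.sum((p_mle - p) ** 2)
        err_bayes += np.sum((p_bayes - p) ** 2)
    return err_mle / n_trials, err_bayes / n_trials

for k in (3, 10, 100):
    print(k, mse(k=k, n=50, s=1.0))
```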

Relevance: 10.00%

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as the optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform for efficiently executing simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes that range from short/medium length (e.g., 8,000 bit) to long length (e.g., 64,800 bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may be the more convenient choice, providing different acceleration factors over conventional multicore CPUs.
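
The portability idea, one OpenCL kernel source reused unchanged on whatever device is available, can be sketched with the PyOpenCL bindings. The kernel below is a trivial placeholder (a saxpy), not an LDPC decoding kernel, and SOpenCL's OpenCL-to-RTL flow for FPGAs is of course outside the scope of this snippet.

```python
import numpy as np
import pyopencl as cl

# Hypothetical sketch of device retargeting with a single OpenCL kernel source.
# The same source string is built and run on every available platform/device.

KERNEL_SRC = """
__kernel void saxpy(__global const float *x, __global const float *y,
                    __global float *out, const float a)
{
    int i = get_global_id(0);
    out[i] = a * x[i] + y[i];
}
"""

x = np.arange(1024, dtype=np.float32)
y = np.ones(1024, dtype=np.float32)
out = np.empty_like(x)

for platform in cl.get_platforms():                 # CPU, GPU, FPGA platforms, if installed
    for device in platform.get_devices():
        ctx = cl.Context([device])
        queue = cl.CommandQueue(ctx)
        prg = cl.Program(ctx, KERNEL_SRC).build()   # same source for every device
        mf = cl.mem_flags
        x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
        y_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=y)
        o_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)
        prg.saxpy(queue, x.shape, None, x_buf, y_buf, o_buf, np.float32(2.0))
        cl.enqueue_copy(queue, out, o_buf)
        print(device.name, out[:4])
```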

Relevance: 10.00%

Abstract:

As the complexity of computing systems grows, reliability and energy are two crucial challenges that call for holistic solutions. In this paper, we investigate the interplay among concurrency, power dissipation, energy consumption and voltage-frequency scaling for a key numerical kernel for the solution of sparse linear systems. Concretely, we leverage a task-parallel implementation of the Conjugate Gradient method, equipped with a state-of-the-art preconditioner embedded in the ILUPACK software, and target a low-power multicore processor from ARM. In addition, we perform a theoretical analysis of the impact of a technique such as Near Threshold Voltage Computing (NTVC) from the points of view of increased hardware concurrency and error rate.
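
For reference, the numerical kernel in question, the preconditioned Conjugate Gradient method, is summarized below as a plain Python/NumPy sketch with a simple Jacobi (diagonal) preconditioner; it is not the task-parallel, ILUPACK-based implementation studied in the paper, and the test system is a toy tridiagonal matrix.

```python
import numpy as np

# Textbook preconditioned Conjugate Gradient with a Jacobi preconditioner.
# Sequential reference sketch only; not the task-parallel ILUPACK-based solver.

def pcg(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    m_inv = 1.0 / np.diag(A)              # Jacobi preconditioner M^{-1}
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy symmetric positive definite system for illustration.
n = 200
A = np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
b = np.ones(n)
x = pcg(A, b)
print(np.linalg.norm(A @ x - b))
```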

Relevance: 10.00%

Abstract:

Energy efficiency is an essential requirement for all contemporary computing systems. We thus need tools to measure the energy consumption of computing systems and to understand how workloads affect it. Significant recent research effort has targeted direct power measurements on production computing systems using on-board sensors or external instruments. These direct methods have in turn guided studies of software techniques to reduce energy consumption via workload allocation and scaling. Unfortunately, direct energy measurements are hampered by the low power sampling frequency of power sensors. The coarse granularity of power sensing limits our understanding of how power is allocated in systems and our ability to optimize energy efficiency via workload allocation.
We present ALEA, a tool to measure power and energy consumption at the granularity of basic blocks, using a probabilistic approach. ALEA provides fine-grained energy profiling via statistical sampling, which overcomes the limitations of power sensing instruments. Compared to state-of-the-art energy measurement tools, ALEA provides finer granularity without sacrificing accuracy. ALEA achieves low overhead energy measurements with mean error rates between 1.4% and 3.5% in 14 sequential and parallel benchmarks tested on both Intel and ARM platforms. The sampling method caps execution time overhead at approximately 1%. ALEA is thus suitable for online energy monitoring and optimization. Finally, ALEA is a user-space tool with a portable, machine-independent sampling method. We demonstrate two use cases of ALEA, where we reduce the energy consumption of a k-means computational kernel by 37% and an ocean modelling code by 33%, compared to high-performance execution baselines, by varying the power optimization strategy between basic blocks.
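
The core of such a statistical approach can be conveyed in a few lines: sample the program location at a fixed rate and attribute to each basic block (or function) a share of the measured energy proportional to its share of samples. The Python sketch below works on made-up sample and energy figures and only illustrates the attribution step, not ALEA's probes, timing machinery or error model.

```python
from collections import Counter

# Hypothetical illustration of statistical energy attribution: each sampled code
# location receives a share of the measured energy proportional to its share of
# samples. The sample trace and energy figure are made up; a real tool obtains
# them from timer-driven probes and power sensors.

samples = ["kmeans_assign", "kmeans_assign", "kmeans_update",
           "kmeans_assign", "io_read", "kmeans_update"]   # sampled basic blocks / functions
total_energy_j = 12.0                                     # energy over the sampling window (J)

counts = Counter(samples)
n = len(samples)
for block, c in counts.most_common():
    share = c / n
    print(f"{block:15s}  {share:5.1%} of samples  ~ {share * total_energy_j:5.2f} J")
```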