866 results for Reproducing Kernel


Relevance: 10.00%

Abstract:

Microwave heating reduces the preparation time and improves the adsorption quality of activated carbon. In this study, activated carbon was prepared by impregnation of palm kernel fiber with phosphoric acid followed by microwave activation. Three different types of activated carbon were prepared, with high surface areas of 872 m2 g-1, 1256 m2 g-1, and 952 m2 g-1 and pore volumes of 0.598 cc g-1, 1.010 cc g-1, and 0.778 cc g-1, respectively. The combined effects of process parameters such as initial adsorbate concentration, pH, and temperature on adsorption efficiency were explored using a Box-Behnken design for response surface methodology (RSM). The adsorption rate could be expressed by a polynomial equation as a function of the independent variables. The hexavalent chromium adsorption rate was found to be 19.1 mg g-1 at the optimized process conditions, i.e., an initial concentration of 60 mg L-1, a pH of 3, and an operating temperature of 50 °C. Adsorption of Cr(VI) by the prepared activated carbon was spontaneous and followed second-order kinetics. The adsorption mechanism can be described by the Freundlich isotherm model. The prepared activated carbon demonstrated performance comparable to other available activated carbons for the adsorption of Cr(VI).
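
The quoted optimum comes from a second-order response surface fitted to the Box-Behnken runs. As a hedged illustration only, not the coefficients or data from this study, the sketch below shows how such a quadratic model for three factors (here concentration, pH and temperature in coded units) can be fitted by least squares.

```python
import numpy as np

def fit_quadratic_response_surface(X, y):
    """Fit the second-order RSM polynomial
    y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj)
    for three factors. X: (n_runs, 3) coded factor levels; y: (n_runs,) responses."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    design = np.column_stack([
        np.ones(len(y)),          # intercept b0
        x1, x2, x3,               # linear terms
        x1**2, x2**2, x3**2,      # quadratic terms
        x1*x2, x1*x3, x2*x3,      # two-factor interactions
    ])
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coeffs
```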

Relevance: 10.00%

Abstract:

Technical market indicators are tools used by technical analysts to understand trends in trading markets. Technical (market) indicators are often calculated in real time, as trading progresses. This paper presents a mathematically founded framework for calculating technical indicators. Our framework consists of a domain-specific language for the unambiguous specification of technical indicators, and a runtime system based on Click for computing the indicators. We argue that our solution enhances ease of programming by aligning our domain-specific language with the mathematical description of technical indicators, and that it enables executing programs in kernel space for decreased latency, without exposing the system to users' programming errors.
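
The paper's domain-specific language and Click-based runtime are not reproduced here; as a hedged illustration of the kind of computation involved, the sketch below updates a simple technical indicator, an exponential moving average, incrementally as each trade arrives (the tick prices are purely illustrative).

```python
class EMA:
    """Incremental exponential moving average over a trade price stream.
    The smoothing factor is the common 2 / (period + 1) choice."""
    def __init__(self, period: int):
        self.alpha = 2.0 / (period + 1)
        self.value = None

    def update(self, price: float) -> float:
        # Seed with the first observed price, then apply the usual recurrence.
        if self.value is None:
            self.value = price
        else:
            self.value += self.alpha * (price - self.value)
        return self.value

ema = EMA(period=12)
for price in (101.2, 101.5, 100.9):   # illustrative ticks
    print(ema.update(price))
```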

Relevance: 10.00%

Abstract:

Oyster® is a surface-piercing flap-type device designed to harvest wave energy in the nearshore environment. Established mathematical theories of wave energy conversion, such as 3D point-absorber and 2D terminator theory, are inadequate to accurately describe the behaviour of Oyster, historically resulting in distorted conclusions regarding the potential of such a concept to harness the power of ocean waves. Accurately reproducing the dynamics of Oyster requires the introduction of a new reference mathematical model, the “flap-type absorber”. A flap-type absorber is a large thin device which extracts energy by pitching about a horizontal axis parallel to the ocean bottom. This paper unravels the mathematics of Oyster as a flap-type absorber. The main goals of this work are to provide a simple–yet accurate–physical interpretation of the laws governing the mechanism of wave power absorption by Oyster and to emphasise why some other, more established, mathematical theories cannot be expected to accurately describe its behaviour.
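
For orientation only, and not Oyster's actual model, the standard linearised frequency-domain description of a flap pitching about a bottom hinge takes the form below, where I is the pitch inertia, A(ω) the added inertia, B(ω) the radiation damping, B_PTO the power take-off damping, C the hydrostatic restoring coefficient, M_e the wave excitation moment and Θ the pitch amplitude; the time-averaged absorbed power follows from the PTO damping term.

```latex
\big(I + A(\omega)\big)\,\ddot{\theta}
  + \big(B(\omega) + B_{\mathrm{PTO}}\big)\,\dot{\theta}
  + C\,\theta = M_{e}(\omega)\,\mathrm{e}^{\mathrm{i}\omega t},
\qquad
\bar{P} = \tfrac{1}{2}\, B_{\mathrm{PTO}}\, \omega^{2}\, |\Theta|^{2}
```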

Relevance: 10.00%

Abstract:

In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that the evaluation is usually computationally intensive and scales poorly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We show that the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
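
The full set of identities (including the O(N) Jacobian and Hessian) is not reproduced here; the sketch below illustrates the underlying mechanism for the simplest case, in which only the signal and noise variances are tuned and the base kernel matrix (fixed lengthscale, here called K0) is eigendecomposed once. After that O(N^3) overhead, each evaluation of the log marginal likelihood costs O(N).

```python
import numpy as np

def precompute(K0, y):
    """One-off O(N^3) step: eigendecompose the fixed base kernel matrix K0
    and rotate the targets into its eigenbasis."""
    lam, Q = np.linalg.eigh(K0)      # K0 = Q diag(lam) Q^T
    alpha = Q.T @ y                  # projected targets
    return lam, alpha

def log_marginal_likelihood(lam, alpha, sigma_f2, sigma_n2):
    """O(N) evaluation of log p(y | sigma_f2, sigma_n2) for
    K = sigma_f2 * K0 + sigma_n2 * I, using the precomputed spectrum."""
    d = sigma_f2 * lam + sigma_n2          # eigenvalues of K
    n = len(alpha)
    quad = np.sum(alpha**2 / d)            # y^T K^{-1} y
    logdet = np.sum(np.log(d))             # log |K|
    return -0.5 * (quad + logdet + n * np.log(2.0 * np.pi))
```

Every candidate (sigma_f2, sigma_n2) pair explored by the global optimizer can then be scored from the cached spectrum alone, without touching the full kernel matrix again.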

Relevance: 10.00%

Abstract:

This study investigates the effects of ground heterogeneity, considering permeability as a random variable, on an intruding saltwater (SW) wedge using Monte Carlo simulations. Random permeability fields were generated using the method of Local Average Subdivision (LAS), based on a lognormal probability density function. The LAS method allows the creation of spatially correlated random fields, generated using coefficients of variation (COV) and horizontal and vertical scales of fluctuation (SOF). The numerical modelling code SUTRA was employed to solve the coupled flow and transport problem. The well-defined 2D dispersive Henry problem was used as the test case for the method. The intruding SW wedge is defined by two key parameters, the toe penetration length (TL) and the width of mixing zone (WMZ). These parameters were compared to the results of a homogeneous case simulated using effective permeability values. The simulation results revealed: (1) an increase in COV resulted in a seaward movement of TL; (2) the WMZ extended with increasing COV; (3) a general increase in horizontal and vertical SOF produced a seaward movement of TL, with the WMZ increasing slightly; (4) as the anisotropic ratio increased, the TL intruded further inland and the WMZ reduced in size. The results show that for large values of COV, effective permeability parameters are inadequate for reproducing the effects of heterogeneity on SW intrusion.
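
The Local Average Subdivision method itself is not reproduced here; as a simpler stand-in, the sketch below draws a spatially correlated lognormal permeability field by Cholesky factorisation of an exponential covariance, parameterised by a coefficient of variation and anisotropic horizontal/vertical scales of fluctuation (grid dimensions and parameter values are illustrative).

```python
import numpy as np

def lognormal_permeability_field(nx, nz, dx, dz, k_mean, cov, sof_h, sof_v, seed=0):
    """Spatially correlated lognormal permeability field (Cholesky sampling,
    a simple substitute for Local Average Subdivision).
    cov: coefficient of variation of k; sof_h, sof_v: scales of fluctuation."""
    rng = np.random.default_rng(seed)
    x = np.arange(nx) * dx
    z = np.arange(nz) * dz
    X, Z = np.meshgrid(x, z, indexing="ij")
    pts = np.column_stack([X.ravel(), Z.ravel()])

    # Log-permeability statistics implied by the target mean and COV.
    sigma_ln2 = np.log(1.0 + cov**2)
    mu_ln = np.log(k_mean) - 0.5 * sigma_ln2

    # Separable exponential correlation with anisotropic scales of fluctuation.
    dh = np.abs(pts[:, None, 0] - pts[None, :, 0])
    dv = np.abs(pts[:, None, 1] - pts[None, :, 1])
    corr = np.exp(-2.0 * dh / sof_h - 2.0 * dv / sof_v)

    L = np.linalg.cholesky(sigma_ln2 * corr + 1e-8 * np.eye(len(pts)))
    ln_k = mu_ln + L @ rng.standard_normal(len(pts))
    return np.exp(ln_k).reshape(nx, nz)

# e.g. a 40 x 20 field with COV = 0.5 (illustrative parameters)
field = lognormal_permeability_field(40, 20, 1.0, 0.5, 1e-11, 0.5, 10.0, 1.0)
```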

Relevance: 10.00%

Abstract:

Torrefaction-based co-firing in a pulverized coal boiler has been proposed to enable large biomass co-firing percentages. A 220 MWe pulverized coal power plant is simulated using Aspen Plus to fully understand the impact of an additional torrefaction unit on the efficiency of the whole power plant. The studied process includes biomass drying, biomass torrefaction, mill systems, biomass/coal devolatilization and combustion, heat exchange, and power generation. Palm kernel shells (PKS) were torrefied at the same residence time but four different temperatures, to prepare four torrefied biomasses with different degrees of torrefaction. During the torrefaction experiments, the mass loss behaviour and the released gaseous components were studied. In addition, process simulations at varying torrefaction degrees and biomass co-firing ratios were carried out to understand CO2 emissions and electrical efficiency in the studied torrefaction-based co-firing power plant. According to the experimental results, CO2 and CO account for 69-91% and 4-27% (by mole fraction) of the torrefaction gases, respectively. The predicted results also showed that the electrical efficiency decreases as either the torrefaction temperature or the biomass substitution ratio increases. Deep torrefaction may not be recommended, because the power saved in biomass grinding is less than the heat consumed by the extra torrefaction process, depending on the heat sources.

Relevance: 10.00%

Abstract:

A conjugate heat transfer (CHT) method was used to perform the aerothermal analysis of an internally cooled turbine vane, and was validated against experimental and empirical data.
Firstly, validation of the method with regard to internal cooling was done by reproducing heat transfer test data in a channel with pin fin heat augmenters, under steady constant wall temperature. The computed Nusselt numbers for the two tested configurations (full length circular pin fins attached to both walls and partial pin fins attached to one wall only) showed good agreement with the measurements. Sensitivity to mesh density was evaluated under this simplified case in order to establish mesh requirements for the analysis of the full component.
Secondly, the CHT method was applied to a turbine vane test case from an actual engine. The predicted vane airfoil metal temperature was compared to the measured thermal paint data and to in-house empirical predictions. The CHT results agreed well with the thermal paint data and gave better predictions than the current empirical modeling approach.
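
For reference, the channel-averaged Nusselt numbers compared in the validation are formed from the wall heat flux, a reference temperature difference and the hydraulic diameter. A minimal sketch, with the exact reference temperatures and rig dimensions left as assumptions:

```python
def nusselt_number(q_wall, t_wall, t_bulk, d_h, k_fluid):
    """Channel-averaged Nusselt number from wall heat flux data.
    q_wall  : average wall heat flux [W/m^2]
    t_wall  : wall temperature [K] (constant in the validation case)
    t_bulk  : bulk (mixed-mean) coolant temperature [K]
    d_h     : channel hydraulic diameter [m]
    k_fluid : fluid thermal conductivity [W/(m K)]
    """
    h = q_wall / (t_wall - t_bulk)   # convective heat transfer coefficient
    return h * d_h / k_fluid
```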

Relevance: 10.00%

Abstract:

Statistical downscaling (SD) methods have become a popular, low-cost and accessible means of bridging the gap between the coarse spatial resolution at which climate models output climate scenarios and the finer spatial scale at which impact modellers require these scenarios, with various SD techniques used for a wide range of applications across the world. This paper compares the Generator for Point Climate Change (GPCC) model and the Statistical DownScaling Model (SDSM), two contrasting SD methods, in terms of their ability to generate precipitation series under non-stationary conditions across ten contrasting global climates. The mean, the maximum and a selection of distribution statistics, as well as the cumulative frequencies of dry and wet spells, were compared at four different temporal resolutions between the modelled and the observed series for a validation period. Results indicate that both methods can generate daily precipitation series that generally closely mirror observed series for a wide range of non-stationary climates. However, GPCC tends to overestimate higher precipitation amounts, whilst SDSM tends to underestimate these. This implies that GPCC is more likely to overestimate the effects of precipitation on a given impact sector, whilst SDSM is likely to underestimate the effects. GPCC performs better than SDSM in reproducing wet and dry day frequency, which is a key advantage for many impact sectors. Overall, the mixed performance of the two methods illustrates the importance of users performing a thorough validation in order to determine the influence of simulated precipitation on their chosen impact sector.
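
A minimal sketch, not the paper's exact validation protocol, of the kind of comparison performed: summary and spell statistics are computed for both the observed and the downscaled daily series and then compared (the 1 mm wet-day threshold is an assumption).

```python
import numpy as np

def spell_lengths(is_wet):
    """Lengths of consecutive runs of equal state (wet or dry) in a daily series."""
    lengths, run = [], 1
    for prev, cur in zip(is_wet[:-1], is_wet[1:]):
        if cur == prev:
            run += 1
        else:
            lengths.append((prev, run))
            run = 1
    lengths.append((is_wet[-1], run))
    return lengths

def validation_stats(precip, wet_threshold=1.0):
    """Summary statistics for a daily precipitation series (mm/day).
    wet_threshold: assumed wet-day threshold, here 1 mm."""
    precip = np.asarray(precip, dtype=float)
    is_wet = precip >= wet_threshold
    spells = spell_lengths(is_wet.tolist())
    wet_spells = [n for wet, n in spells if wet]
    dry_spells = [n for wet, n in spells if not wet]
    return {
        "mean": precip.mean(),
        "max": precip.max(),
        "q90": np.quantile(precip, 0.90),
        "wet_day_freq": is_wet.mean(),
        "mean_wet_spell": np.mean(wet_spells) if wet_spells else 0.0,
        "mean_dry_spell": np.mean(dry_spells) if dry_spells else 0.0,
    }

# Compare e.g. validation_stats(observed) against validation_stats(downscaled).
```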

Relevance: 10.00%

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform to efficiently execute simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes, ranging from short/medium-length (e.g., 8,000-bit) to long (e.g., 64,800-bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, either the GPU or the FPGA may be the more convenient platform, each providing a different acceleration factor over conventional multicore CPUs.
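
A full LDPC decoder is far too long to reproduce here; as a heavily simplified stand-in, the sketch below shows the shape of such a Monte Carlo exploration: for each candidate fixed-point bitwidth, many noisy frames are generated, the channel LLRs are quantized, and an error rate is estimated (a hard-decision slicer replaces the real decoder).

```python
import numpy as np

def quantize(llr, bits, clip=8.0):
    """Uniform fixed-point quantization of LLRs to a given bitwidth (design parameter)."""
    step = 2.0 * clip / (2**bits - 1)
    return np.clip(np.round(llr / step) * step, -clip, clip)

def monte_carlo_ber(bits, snr_db, n_frames=1000, frame_len=8000, seed=0):
    """Estimate the bit error rate for one design point (bitwidth, SNR).
    The hard-decision 'decoder' below is only a placeholder for a real LDPC decoder."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10.0))
    errors = 0
    for _ in range(n_frames):
        tx_bits = rng.integers(0, 2, frame_len)
        symbols = 1.0 - 2.0 * tx_bits                      # BPSK mapping
        rx = symbols + sigma * rng.standard_normal(frame_len)
        llr = quantize(2.0 * rx / sigma**2, bits)          # quantized channel LLRs
        rx_bits = (llr < 0).astype(int)                    # placeholder decoder
        errors += np.count_nonzero(rx_bits != tx_bits)
    return errors / (n_frames * frame_len)

# Sweep the bitwidth design parameter: one Monte Carlo campaign per design point.
for bits in (4, 6, 8):
    print(bits, monte_carlo_ber(bits, snr_db=4.0, n_frames=50))
```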

Relevance: 10.00%

Abstract:

Many high-state non-magnetic cataclysmic variables (CVs) exhibit blueshifted absorption or P-Cygni profiles associated with ultraviolet (UV) resonance lines. These features imply the existence of powerful accretion disc winds in CVs. Here, we use our Monte Carlo ionization and radiative transfer code to investigate whether disc wind models that produce realistic UV line profiles are also likely to generate observationally significant recombination line and continuum emission in the optical waveband. We also test whether outflows may be responsible for the single-peaked emission line profiles often seen in high-state CVs and for the weakness of the Balmer absorption edge (relative to simple models of optically thick accretion discs). We find that a standard disc wind model that is successful in reproducing the UV spectra of CVs also leaves a noticeable imprint on the optical spectrum, particularly for systems viewed at high inclination. The strongest optical wind-formed recombination lines are Hα and He II λ4686. We demonstrate that a higher density outflow model produces all the expected H and He lines and produces a recombination continuum that can fill in the Balmer jump at high inclinations. This model displays reasonable verisimilitude with the optical spectrum of RW Trianguli. No single-peaked emission is seen, although we observe a narrowing of the double-peaked emission lines from the base of the wind. Finally, we show that even denser models can produce a single-peaked Hα line. On the basis of our results, we suggest that winds can modify, and perhaps even dominate, the line and continuum emission from CVs.

Relevance: 10.00%

Abstract:

As the complexity of computing systems grows, reliability and energy are two crucial challenges that call for holistic solutions. In this paper, we investigate the interplay among concurrency, power dissipation, energy consumption and voltage-frequency scaling for a key numerical kernel for the solution of sparse linear systems. Concretely, we leverage a task-parallel implementation of the Conjugate Gradient method, equipped with a state-of-the-art preconditioner embedded in the ILUPACK software, and target a low-power multicore processor from ARM. In addition, we perform a theoretical analysis of the impact of a technique like Near Threshold Voltage Computing (NTVC) from the points of view of increased hardware concurrency and error rate.
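
ILUPACK's multilevel ILU preconditioner is not reproduced here; the sketch below is a minimal preconditioned Conjugate Gradient loop with a simple Jacobi (diagonal) preconditioner standing in for it, to make the structure of the kernel under study concrete.

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=1000):
    """Preconditioned Conjugate Gradient for a symmetric positive definite A,
    with a Jacobi (diagonal) preconditioner standing in for ILUPACK's ILU."""
    M_inv = 1.0 / A.diagonal()              # Jacobi preconditioner
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```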

Relevance: 10.00%

Abstract:

Energy efficiency is an essential requirement for all contemporary computing systems. We thus need tools to measure the energy consumption of computing systems and to understand how workloads affect it. Significant recent research effort has targeted direct power measurements on production computing systems using on-board sensors or external instruments. These direct methods have in turn guided studies of software techniques to reduce energy consumption via workload allocation and scaling. Unfortunately, direct energy measurements are hampered by the low power sampling frequency of power sensors. The coarse granularity of power sensing limits our understanding of how power is allocated in systems and our ability to optimize energy efficiency via workload allocation.
We present ALEA, a tool to measure power and energy consumption at the granularity of basic blocks, using a probabilistic approach. ALEA provides fine-grained energy profiling via statistical sampling, which overcomes the limitations of power sensing instruments. Compared to state-of-the-art energy measurement tools, ALEA provides finer granularity without sacrificing accuracy. ALEA achieves low overhead energy measurements with mean error rates between 1.4% and 3.5% in 14 sequential and parallel benchmarks tested on both Intel and ARM platforms. The sampling method caps execution time overhead at approximately 1%. ALEA is thus suitable for online energy monitoring and optimization. Finally, ALEA is a user-space tool with a portable, machine-independent sampling method. We demonstrate two use cases of ALEA, where we reduce the energy consumption of a k-means computational kernel by 37% and an ocean modelling code by 33%, compared to high-performance execution baselines, by varying the power optimization strategy between basic blocks.
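
ALEA's sampling machinery is not reproduced here; the sketch below only illustrates the basic probabilistic idea: energy measured over a window is attributed to code regions in proportion to how many periodic samples fell in each region (the region identifiers and the sample stream are hypothetical).

```python
from collections import Counter

def attribute_energy(samples, total_energy_j):
    """Probabilistic attribution of measured energy to code regions.
    samples: sequence of region identifiers (e.g. basic-block IDs), one per
             periodic sample of the program counter.
    total_energy_j: energy measured over the whole sampling window (joules)."""
    counts = Counter(samples)
    n = sum(counts.values())
    return {region: total_energy_j * c / n for region, c in counts.items()}

# e.g. 6 samples landed in bb1, 2 in bb2, 2 in bb3 over a 12 J window
print(attribute_energy(["bb1"] * 6 + ["bb2"] * 2 + ["bb3"] * 2, 12.0))
```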

Relevance: 10.00%

Abstract:

The aim of this chapter is three-fold: first, to explain systematically the multiple disciplines that have to be employed in the study of manuscripts; second, to review the evolution and development of methodologies used by the scholars who have shaped the present form of scholarship, and to chart outstanding problems that have yet to be resolved; and third, to offer some ideas of what future research might entail and in what way scholarship might unfold. Since numerous and disparate methodologies are employed in the study of Bach manuscripts, the discussions that follow will take nothing for granted, but will describe and define each one as it relates to understanding and reproducing, with as much accuracy as possible, Bach’s intentions in the manuscripts that contain his music.

Relevance: 10.00%

Abstract:

This research presents a fast algorithm for projected support vector machines (PSVM): a basis vector set (BVS) is selected for the kernel-induced feature space, and the training points are projected onto the subspace spanned by the selected BVS. A standard linear support vector machine (SVM) is then produced in the subspace with the projected training points. As the dimension of the subspace is determined by the size of the selected basis vector set, the size of the produced SVM expansion can be specified. A two-stage algorithm is derived which selects and refines the basis vector set, achieving a locally optimal model. The model expansion coefficients and bias are updated recursively as the basis set and support vector set grow or shrink. The condition for a point to lie outside the span of the current basis vectors, and hence to be selected as a new basis vector, is derived and embedded in the recursive procedure. This guarantees the linear independence of the produced basis set. The proposed algorithm is tested and compared with an existing sparse primal SVM (SpSVM) and a standard SVM (LibSVM) on seven public benchmark classification problems. Our new algorithm is designed for human activity recognition using smart devices and embedded sensors, where sometimes limited memory and processing resources must be exploited to the full and where more robust and accurate classification improves user satisfaction. Experimental results demonstrate the effectiveness and efficiency of the proposed algorithm. This work builds upon a previously published algorithm created specifically for activity recognition within mobile applications for the EU Haptimap project [1]. The algorithms detailed in this paper are more memory- and resource-efficient, making them suitable for use with bigger data sets and more easily trained SVMs.
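
The recursive two-stage selection/refinement algorithm is not reproduced here; the sketch below illustrates the general PSVM idea under simplifying assumptions: a basis vector set is chosen (by plain random sampling rather than the paper's criterion), the training points are projected onto the subspace it spans in the RBF feature space, and a standard linear SVM is trained on the projected coordinates (the data are synthetic and illustrative).

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics.pairwise import rbf_kernel

def projected_features(X, basis, gamma=1.0, jitter=1e-8):
    """Coordinates of the points in the subspace of the RBF feature space
    spanned by the chosen basis vectors (a Nystroem-style projection)."""
    K_bb = rbf_kernel(basis, basis, gamma=gamma) + jitter * np.eye(len(basis))
    K_xb = rbf_kernel(X, basis, gamma=gamma)
    # Whitening by K_bb^{-1/2} makes the projected coordinates orthonormal.
    lam, U = np.linalg.eigh(K_bb)
    K_bb_inv_sqrt = U @ np.diag(1.0 / np.sqrt(lam)) @ U.T
    return K_xb @ K_bb_inv_sqrt

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))                        # illustrative data
y = (X[:, 0] * X[:, 1] > 0).astype(int)

basis = X[rng.choice(len(X), size=30, replace=False)]    # random basis, not the paper's criterion
clf = LinearSVC(max_iter=5000).fit(projected_features(X, basis), y)
print(clf.score(projected_features(X, basis), y))
```

The size of the basis set directly controls the dimension of the projected features and hence the size of the resulting model, which is the property the paper exploits for resource-constrained devices.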

Relevance: 10.00%

Abstract:

How can GPU acceleration be obtained as a service in a cluster? This question has become increasingly significant due to the inefficiency of installing GPUs on all nodes of a cluster. The research reported in this paper addresses the above question by employing rCUDA (remote CUDA), a framework that facilitates Acceleration-as-a-Service (AaaS), such that the nodes of a cluster can request acceleration from a set of remote GPUs on demand. The rCUDA framework exploits virtualisation and ensures that multiple nodes can share the same GPU. In this paper we test the feasibility of the rCUDA framework on a real-world application from the financial risk industry that can benefit from AaaS in a production setting. The results confirm the feasibility of rCUDA and highlight that rCUDA achieves performance similar to CUDA, provides consistent results, and, more importantly, allows a single application to benefit from all the GPUs available in the cluster without losing efficiency.