953 results for Computations


Relevance:

10.00%

Publisher:

Abstract:

In the Bayesian framework, a standard approach to model criticism is to compare some function of the observed data to a reference predictive distribution. The result of the comparison can be summarized in the form of a p-value, and it is well known that computation of some kinds of Bayesian predictive p-values can be challenging. The use of regression adjustment approximate Bayesian computation (ABC) methods is explored for this task. Two problems are considered. The first is the calibration of posterior predictive p-values so that they are uniformly distributed under some reference distribution for the data; computation is difficult because the calibration process requires repeated approximation of the posterior for different data sets drawn under the reference distribution. The second problem is approximation of distributions of prior predictive p-values for the purpose of choosing weakly informative priors when the model-checking statistic is expensive to compute; here the computation is difficult because of the need to repeatedly sample from a prior predictive distribution for different values of a prior hyperparameter. In both problems we argue that high accuracy in the computations is not required, which makes fast approximations such as regression adjustment ABC very useful. We illustrate our methods with several examples.
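As a concrete illustration of the regression adjustment idea, the sketch below runs rejection ABC for the mean of a normal model and then applies a linear regression adjustment in the style of Beaumont et al.; the model, prior, sample sizes and acceptance fraction are invented for illustration and are not the paper's settings.

```python
import random
random.seed(0)

def simulate(theta, n=50):
    """Draw a data set from a normal model with unknown mean theta."""
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summary(x):
    return sum(x) / len(x)

# Observed data from a hypothetical true theta = 2.0
s_obs = summary(simulate(2.0))

# Step 1: rejection ABC -- keep the draws whose summary lands closest to s_obs
draws = []
for _ in range(5000):
    theta = random.uniform(-5, 5)          # flat prior, chosen for illustration
    s = summary(simulate(theta))
    draws.append((abs(s - s_obs), theta, s))
draws.sort()
accepted = draws[:500]                     # closest 10% of the draws

# Step 2: linear regression adjustment theta* = theta - b * (s - s_obs)
thetas = [t for _, t, _ in accepted]
ss = [s for _, _, s in accepted]
mt, ms = sum(thetas) / len(thetas), sum(ss) / len(ss)
b = (sum((t - mt) * (s - ms) for t, s in zip(thetas, ss))
     / sum((s - ms) ** 2 for s in ss))
adjusted = [t - b * (s - s_obs) for t, s in zip(thetas, ss)]

post_mean = sum(adjusted) / len(adjusted)
```

The adjustment projects each accepted draw onto the observed summary, shrinking the spread of the raw rejection sample; this is what makes a coarse acceptance tolerance usable when only moderate accuracy is needed.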

The purpose of this article is to assess the viability of blanket sustainability policies, such as Building Rating Systems, in achieving energy efficiency in university campus buildings. We analyzed the energy consumption trends of 10 LEED-certified buildings and 14 non-LEED-certified buildings at a major university in the US. The mean Energy Use Intensity (EUI) of the LEED buildings was significantly higher (EUI_LEED = 331.20 kBtu/sf/yr) than that of the non-LEED buildings (EUI_non-LEED = 222.70 kBtu/sf/yr); however, the median EUI values were comparable (EUI_LEED = 172.64 and EUI_non-LEED = 178.16). Because the distributions of EUI values in this dataset were non-symmetrical, the median is the more appropriate measure for energy comparisons; this was also evident when outliers were excluded from the EUI computations (EUI_LEED = 171.82 and EUI_non-LEED = 195.41). Additional analyses were conducted to further explore the impact of LEED certification on the energy performance of university campus buildings. No statistically significant differences were observed between certified and non-certified buildings across a range of robust comparison criteria. These findings were then leveraged to devise strategies for sustainable energy policies for university campus buildings and to identify potential issues with portfolio-level building energy performance comparisons.
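The mean-versus-median distinction for skewed EUI data can be seen in a few lines; the numbers below are invented for illustration and are not the study's measurements.

```python
from statistics import mean, median

# Hypothetical EUI values (kBtu/sf/yr) with one energy-intensive outlier,
# e.g. a lab building; these numbers are invented, not the study's data.
eui = [150, 160, 170, 175, 180, 190, 200, 210, 900]

skewed_mean = mean(eui)      # pulled upward by the single outlier
robust_median = median(eui)  # barely affected by it

# Dropping the outlier moves the mean far more than the median
trimmed = [x for x in eui if x < 500]
```

For a non-symmetrical distribution like this, the mean and median tell different stories, which is why the outlier-excluded comparison in the text tracks the median-based one.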

Guaranteeing Quality of Service (QoS) at minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is achieved through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem, in which the problem is transformed into a constrained combinatorial optimization problem and solved by an innovative constructive algorithm. Experimental results show that the running cost of a cloud-based MapReduce computation platform using this new approach is 24.3% to 44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0% to 36.2% lower than that using a heterogeneous MapReduce placement approach that does not consider the spare resources of existing MapReduce computations. The experimental results also demonstrate the good scalability of the new approach.
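The paper's constructive algorithm is not reproduced here; the sketch below only shows the general shape of such a heterogeneous placement heuristic: reuse spare slots on already-rented VMs before renting the cheapest VM type that fits each job. The VM types, prices and slot capacities are invented.

```python
# VM types: (name, hourly_cost, slot_capacity) -- invented figures
VM_TYPES = [("small", 0.10, 2), ("medium", 0.18, 4), ("large", 0.32, 8)]

def place(jobs):
    """Greedy constructive placement; jobs is a list of slot demands."""
    rented = []        # each entry: [vm_name, hourly_cost, free_slots]
    total_cost = 0.0
    for demand in sorted(jobs, reverse=True):   # biggest jobs first
        # 1) try spare capacity on an already-rented VM (no extra cost)
        reused = False
        for vm in rented:
            if vm[2] >= demand:
                vm[2] -= demand
                reused = True
                break
        if reused:
            continue
        # 2) otherwise rent the cheapest VM type that can host the job
        name, cost, cap = min((t for t in VM_TYPES if t[2] >= demand),
                              key=lambda t: t[1])
        rented.append([name, cost, cap - demand])
        total_cost += cost
    return rented, total_cost

rented, total = place([3, 1, 2, 1])
```

In this toy run, the 3-slot job rents a medium VM, the 2-slot job rents a small VM, one 1-slot job reuses the medium VM's spare slot, and the last 1-slot job rents another small VM; exploiting spare slots is exactly the effect the paper's comparison against the spare-resource-blind heterogeneous approach measures.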

UVPES studies and ab initio and DFT computations have been carried out on the benzene⋯ICl complex; electron spectral data and computed orbital energies show that donor orbitals are stabilized and acceptor orbitals are destabilized on complexation. Calculations predict an oblique structure for the complex in which the interacting sites are a C=C bond centre in the donor and the iodine atom in the acceptor, in full agreement with earlier experimental reports. BSSE-corrected binding energies closely match the reported enthalpy of complexation, and NBO analysis clearly reveals the involvement of the pi orbital of benzene and the sigma* orbital of ICl in the complex.

Proteins are polymerized by cyclic machines called ribosomes, which use their messenger RNA (mRNA) track also as the corresponding template; the process is called translation. We explore, in depth and detail, the stochastic nature of translation. We compute various distributions associated with the translation process; one of them, namely the dwell time distribution, has been measured in recent single-ribosome experiments. The form of the distribution that best fits our simulation data is consistent with that extracted from the experimental data. For our computations, we use a model that captures both the mechanochemistry of each individual ribosome and the steric interactions between ribosomes. We also demonstrate the effects of the sequence inhomogeneities of real genes on the fluctuations and noise in translation. Finally, inspired by recent advances in the experimental techniques of manipulating single ribosomes, we make theoretical predictions on the force-velocity relation for individual ribosomes. In principle, all our predictions can be tested by carrying out in vitro experiments.
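A toy version of the dwell-time computation: if each elongation cycle consists of a few sequential exponential sub-steps, the dwell time is their sum, giving a peaked (hypoexponential) distribution rather than a simple exponential. The rates below are invented, and this sketch is far simpler than the model described in the text.

```python
import random
random.seed(1)

# Invented rates (1/s) for three sequential sub-steps of one elongation cycle
RATES = [25.0, 30.0, 5.0]

def dwell_time():
    """One dwell time = sum of the exponential waiting times of the sub-steps."""
    return sum(random.expovariate(k) for k in RATES)

samples = [dwell_time() for _ in range(20000)]
# Expected mean dwell time: 1/25 + 1/30 + 1/5 = 0.2733... s
mean_dwell = sum(samples) / len(samples)
```

Because the slowest sub-step (rate 5/s) dominates, the distribution is close to exponential at long times but vanishes at zero dwell time, which is the qualitative signature single-ribosome experiments look for.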

This article develops a simple analytical expression that relates ion axial secular frequency to field aberration in ion trap mass spectrometers. Hexapole and octopole aberrations have been considered in the present computations. The equation of motion of the ions in a pseudopotential well with these superpositions has the form of a Duffing-like equation, and a perturbation method has been used to obtain the expression for ion secular frequency as a function of field imperfections. The expression indicates that the frequency shift is sensitive to the sign of the octopole superposition and insensitive to the sign of the hexapole superposition. Further, for weak multipole superpositions of the same magnitude, octopole superposition causes a larger frequency shift than hexapole superposition.
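The sign sensitivity follows the standard Lindstedt-Poincaré result for a weakly nonlinear Duffing-type oscillator; the generic form is sketched below with symbolic coefficients, leaving the mapping of the quadratic and cubic coefficients to the hexapole and octopole strengths to the paper's own expressions.

```latex
% Axial motion with a weak quadratic (hexapole-type) term \alpha x^2
% and a weak cubic (octopole-type) term \beta x^3:
\ddot{x} + \omega_0^2 x + \alpha x^2 + \beta x^3 = 0
% Second-order Lindstedt--Poincare frequency at oscillation amplitude A:
\omega \approx \omega_0
  \left[ 1 + \left( \frac{3\beta}{8\omega_0^2}
                  - \frac{5\alpha^2}{12\omega_0^4} \right) A^2 \right]
```

The quadratic coefficient enters the shift only as α², so the shift is insensitive to the sign of the hexapole superposition, while the cubic coefficient β enters linearly, so the shift follows the sign of the octopole superposition, consistent with the statements in the abstract.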

Lasers are very efficient in heating localized regions and hence find wide application in surface-treatment processes. The surface of a material can be selectively modified to give superior wear and corrosion resistance. In laser surface-melting and welding problems, the high temperature gradient prevailing at the free surface induces a surface-tension gradient, which is the dominant driving force for convection (known as thermocapillary or Marangoni convection). It has been reported that surface-tension-driven convection plays a dominant role in determining the melt-pool shape. In most of the earlier work on laser melting and related problems, the finite difference method (FDM) has been used to solve the Navier-Stokes equations [1]. Since the Reynolds number is quite high in these cases, upwinding has been used; although upwinding gives physically realistic solutions even on a coarse grid, the results are inaccurate. McLay and Carey have solved the thermocapillary flow in welding problems by an implicit finite element method [2]. They used the conventional Galerkin finite element method (FEM), which requires that pressure be interpolated at one order lower than velocity (mixed interpolation). This restricts the choice of elements to certain higher-order elements, which need numerical integration for evaluation of the element matrices. The implicit algorithm yields a system of nonlinear, unsymmetric equations that are not positive definite, so computations would be possible only with large mainframe computers. Sluzalec [3] has modeled the pulsed laser-melting problem by an explicit FEM, using six-node triangular elements with mixed interpolation; since he considered only buoyancy-induced flow, the velocity values are small. In the present work, an equal-order explicit FEM is used to compute the thermocapillary flow in the laser surface-melting problem. As this method permits equal-order interpolation, there is no restriction on the choice of elements; even linear elements such as the three-node triangle can be used. Because the governing equations are solved sequentially, the computer memory requirement is low. The finite element formulation is discussed in this paper along with typical numerical results.

Turbulent mixed convection flow and heat transfer in a shallow enclosure, with and without partitions and with a series of block-like heat-generating components, is studied numerically for a range of Reynolds and Grashof numbers with a time-dependent formulation. The flow and temperature distributions are taken to be two-dimensional. Assuming repeated placement of the blocks and of the fluid entry and exit openings at regular distances, and neglecting end-wall effects, regions with the same velocity and temperature distributions can be identified. One half of such a module is chosen as the computational domain, taking into account the symmetry about the vertical centreline. The mixed convection inlet velocity is treated as the sum of forced and natural convection components, with the individual components delineated based on the pressure drop across the enclosure; the Reynolds number is based on the forced convection velocity. Turbulence computations are performed using the standard k-ε model and the Launder-Sharma low-Reynolds-number k-ε model. The results show that higher Reynolds numbers tend to create a recirculation region of increasing strength in the core region and that the effect of buoyancy becomes insignificant beyond a Reynolds number of typically 5×10^5. The Euler number in turbulent flow is about 30 per cent higher than in the laminar regime. The dimensionless inlet velocity in pure natural convection varies as Gr^(1/3). Results are also presented for a number of quantities of interest, such as the flow and temperature distributions, Nusselt number, pressure drop and the maximum dimensionless temperature in the block, along with correlations.

An analytical method has been proposed to optimize the small-signal optical gain of CO2-N2 gasdynamic lasers (GDL) employing two-dimensional (2D) wedge nozzles. Following our earlier work, the equations governing the steady, inviscid, quasi-one-dimensional flow in the wedge nozzle of the GDL are reduced to a universal form so that their solutions depend on a single unifying parameter. These equations are solved numerically to obtain similar solutions for the various flow quantities, which are subsequently used to optimize the small-signal gain. The corresponding optimum values of quantities such as reservoir pressure, reservoir temperature and 2D nozzle area ratio have also been predicted and plotted for a wide range of laser gas compositions, with either H2O or He as the catalyst. A large number of graphs are presented which may be used to obtain the optimum small-signal gain for a wide range of laser gas compositions without further computations.

Colour graphics subsystems can be used in a variety of applications such as high-end business graphics, low-end scientific computations, and real-time display of process-control diagrams. The design of such a subsystem is presented. This subsystem can be added to any Multibus-compatible microcomputer system. The use of an NEC 7220 graphics display controller chip has simplified the design to a considerable extent. CGRAM (CORE graphics on Multibus), a comprehensive subset of the CORE graphics standard package, is supported on the subsystem.

It has been shown that it is possible to extend the validity of the Townsend breakdown criterion for evaluating breakdown voltages over the complete pd range in which Paschen curves are available. Breakdown voltages have been evaluated for air (pd = 0.0133 to 1400 kPa·cm), N2 (pd = 0.0313 to 1400 kPa·cm) and SF6 (pd = 0.3000 to 1200 kPa·cm), and in most cases the computed values are within ±3% of the measured values. The computations show that it is also possible to estimate the secondary ionization coefficient γ over the pd ranges mentioned above.
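The Townsend criterion in its Paschen-law form can be evaluated in a few lines; the constants A, B and γ below are illustrative textbook-style values for air in Torr·cm units, not the fitted values used in the paper.

```python
import math

# Illustrative constants for air (Torr.cm units), not the paper's values:
A = 15.0       # 1/(Torr*cm), saturation ionization parameter
B = 365.0      # V/(Torr*cm)
GAMMA = 0.01   # secondary ionization coefficient gamma

def breakdown_voltage(pd):
    """Paschen-law breakdown voltage from the Townsend criterion
    alpha*d = ln(1 + 1/gamma), with alpha/p = A*exp(-B*p/E)."""
    k = math.log(1.0 + 1.0 / GAMMA)
    denom = math.log(A * pd / k)
    if denom <= 0:
        return float("inf")   # no self-sustained breakdown at this pd
    return B * pd / denom

# The curve has its minimum (the Paschen minimum) at pd = e*ln(1+1/gamma)/A
pd_min = math.e * math.log(1.0 + 1.0 / GAMMA) / A
v_min = breakdown_voltage(pd_min)
```

Note how γ enters only through ln(1 + 1/γ): this weak dependence is what makes γ recoverable from measured Paschen curves, as the abstract points out, but only with limited precision.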

Regional metamorphic belts provide important constraints on the plate tectonic architecture of orogens. We report here a detailed petrologic examination of the sapphirine-bearing ultra-high-temperature (UHT) granulites from the Jining Complex within the Khondalite Belt of the North China Craton (NCC). These granulites carry diagnostic UHT assemblages, and their microstructures provide robust evidence to trace the prograde, peak and retrograde metamorphic evolution. The P-T conditions of the granulites estimated from X_Mg(Grt) (Mg/(Fe + Mg)) versus X_Mg(Spr) isopleth calculations indicate temperatures above 970 °C and pressures close to 7 kbar. We present phase diagrams based on thermodynamic computations to evaluate the mineral assemblages and microstructures and to trace the metamorphic trajectory of the rocks. The evolution from Spl-Qtz-Ilm-Crd-Grt-Sil to Spr-Qtz-Crd-Opx-Ilm marks the prograde stage. The Spl-Qtz assemblage appears on the low-pressure side of P-T space with Spr-Qtz stable on the high-pressure side, possibly representing an increase in pressure corresponding to compression; the spectacular development of sapphirine rims around spinel enclosed in quartz supports this inference. An evaluation of the key UHT assemblages based on modal proportion calculations suggests a counterclockwise P-T path. With few exceptions, granulite-facies rocks developed along collisional metamorphic zones have generally been characterized by clockwise exhumation trajectories. Recent evaluation of the P-T paths of metamorphic rocks developed within collisional orogens indicates that in many cases the exhumation trajectories follow the model subduction geotherm, in accordance with a tectonic model in which the metamorphic rocks are subducted and exhumed along a plate boundary. The timing of UHT metamorphism in the NCC (c. 1.92 Ga) coincides with the assembly of the NCC within the Paleoproterozoic Columbia supercontinent, a process that would have involved subduction of passive-margin sediments and closure of the intervening ocean. Thus, the counterclockwise P-T path obtained in this study correlates well with a tectonic model involving subduction and final collisional suturing, with the UHT granulites representing the core of the hot or ultra-hot orogen developed during Columbia amalgamation.

The paper presents a new criterion for designing a power-system stabiliser: it should cancel the negative damping torque inherent in a synchronous generator equipped with an automatic voltage regulator. The method arises from analysis based on the properties of tensor invariance, but it is easily implemented and leads to the design of an adaptive controller. Extensive computations and simulations have been performed, and laboratory tests have been conducted on a computer-controlled micromachine system. Results are presented illustrating the effectiveness of the adaptive stabiliser.

In this paper we develop compilation techniques for realizing applications described in a High Level Language (HLL) on a Runtime Reconfigurable Architecture. The compiler determines Hyper Operations (HyperOps), which are subgraphs of an application's data flow graph comprising elementary operations with strong producer-consumer relationships. These HyperOps are hosted on computation structures that are provisioned on demand at runtime. We also report compiler optimizations that collectively reduce the overheads of data-driven computations in runtime reconfigurable architectures. On average, HyperOps offer a 44% reduction in total execution time and an 18% reduction in management overheads compared with using basic blocks as coarse-grained operations. We show that HyperOps formed by our compiler are suitable for supporting data flow software pipelining.
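A simplified sketch of grouping dataflow nodes with strong producer-consumer relationships into HyperOp-like clusters: the merge rule used here (fuse a producer into its sole consumer) is a common coarsening heuristic, not necessarily the paper's exact algorithm, and the graph is invented.

```python
from collections import defaultdict

# Invented dataflow graph: op -> list of consumer ops
EDGES = {
    "load_a": ["mul"], "load_b": ["mul"],
    "mul": ["add", "sub"],          # two consumers: stays a cluster boundary
    "load_c": ["add"],
    "add": ["store1"], "sub": ["store2"],
    "store1": [], "store2": [],
}

def form_hyperops(edges):
    """Fuse each producer into its consumer whenever it has exactly one
    consumer, i.e. the strongest producer-consumer link."""
    leader = {op: op for op in edges}        # union-find style leader map
    def find(x):
        while leader[x] != x:
            x = leader[x]
        return x
    for producer, consumers in edges.items():
        if len(consumers) == 1:              # sole consumer: merge clusters
            leader[find(producer)] = find(consumers[0])
    clusters = defaultdict(set)
    for op in edges:
        clusters[find(op)].add(op)
    return [sorted(ops) for ops in clusters.values()]

hyperops = form_hyperops(EDGES)
```

On this graph the heuristic yields three clusters; the fan-out at `mul` keeps its two consumer chains in separate HyperOps, which is the kind of boundary that determines how much management overhead the runtime must pay.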

This paper is concerned with the calculation of the flame structure of one-dimensional laminar premixed flames using the technique of operator splitting. The technique uses an explicit method of solution, with one-step Euler for chemistry and a novel probabilistic scheme for diffusion. The relationship between the diffusion phenomenon and the Gauss-Markoff process is exploited to obtain an unconditionally stable explicit difference scheme for diffusion. The method has been applied to (a) a model problem, (b) hydrazine decomposition, (c) a hydrogen-oxygen system with 28 reactions under the constant Dρ² approximation, and (d) the same hydrogen-oxygen system (28 reactions) under the trace-diffusion approximation. Certain interesting aspects of the behaviour of the solution with non-unity Lewis number are brought out in the case of the hydrazine flame. The results of computation in the most complex case compare very favourably with those of Warnatz, both in accuracy and in computational time, showing that explicit methods can be effective in flame computations. Computations using the Gear-Hindmarsh method for chemistry and the present approach for diffusion have also been carried out, and a comparison of the two methods is presented.
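The splitting idea can be sketched on a toy scalar problem u_t = -K u + D u_xx: one explicit Euler step for the "chemistry" (here a plain decay term), followed by a diffusion step performed as Gaussian smoothing with variance 2DΔt, the transition density of the underlying Gauss-Markoff process. This is only an illustration of the stability argument, not the paper's flame code; all parameters are invented.

```python
import math

D, K = 0.1, 2.0           # diffusivity and "chemistry" decay rate (invented)
DX, DT = 0.1, 0.2         # DT is four times the explicit FTCS limit DX**2/(2*D)

# Gaussian kernel for one diffusion step: variance 2*D*DT; positive weights
# summing to one make the smoothing stable for any DT.
half = int(3 * math.sqrt(2 * D * DT) / DX) + 1
w = [math.exp(-(j * DX) ** 2 / (4 * D * DT)) for j in range(-half, half + 1)]
norm = sum(w)
w = [x / norm for x in w]

def step(u):
    # chemistry sub-step: one explicit Euler step of du/dt = -K*u
    u = [x * (1 - K * DT) for x in u]
    # diffusion sub-step: convolution with the Gaussian transition kernel
    # (edge values are clamped; they are effectively zero here)
    n = len(u)
    return [sum(w[j + half] * u[min(max(i + j, 0), n - 1)]
                for j in range(-half, half + 1))
            for i in range(n)]

u = [0.0] * 41
u[20] = 1.0               # initial concentration spike mid-domain
for _ in range(5):
    u = step(u)
```

Even though Δt is well beyond the usual explicit diffusion limit, the solution stays positive and smooth, and the total mass decays by the chemistry factor (1 - KΔt) per step; this is the sense in which the probabilistic diffusion scheme buys unconditional stability for an otherwise explicit method.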