955 results for Boolean Computations
Abstract:
Cells are the fundamental building blocks of plant-based food materials, and many of the structural changes caused by food processing can be derived as a function of the deformations of the cellular structure. In food dehydration, the bulk-level changes in porosity, density and shrinkage can be better explained using cellular-level deformations initiated by moisture removal from the cellular fluid. A novel approach is used in this research to model the cell fluid with Smoothed Particle Hydrodynamics (SPH) and the cell walls with the Discrete Element Method (DEM), techniques known to be robust in treating complex fluid and solid mechanics. High Performance Computing (HPC) is used to handle the computational load. Compared with state-of-the-art drying models and their deficiencies, the current model is found to be robust in replicating the drying mechanics of plant-based food materials at the microscale.
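As a rough illustration of the SPH side of such a coupled model (a minimal sketch with hypothetical names and parameters, not the paper's actual implementation or its coupling to the DEM cell walls), the snippet below estimates fluid density by summing a standard 2D cubic-spline kernel over neighbouring particles; pressure forces acting on the cell wall would be derived from this density field.

import numpy as np

def cubic_spline_kernel(r, h):
    # Standard 2D cubic-spline SPH kernel with normalisation 10 / (7 * pi * h^2).
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(positions, masses, h):
    # Summation density: rho_i = sum_j m_j * W(|r_i - r_j|, h).
    rho = np.zeros(len(positions))
    for i, ri in enumerate(positions):
        for rj, mj in zip(positions, masses):
            rho[i] += mj * cubic_spline_kernel(np.linalg.norm(ri - rj), h)
    return rho

# Two hypothetical fluid particles inside a cell, smoothing length h = 1.0.
print(sph_density(np.array([[0.0, 0.0], [0.5, 0.0]]), [1.0, 1.0], 1.0))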
Abstract:
Building information models are increasingly being utilised for facility management of large facilities such as critical infrastructures. In such environments, it is valuable to utilise the vast amount of data contained within building information models to improve access control administration. The use of building information models in access control scenarios can provide 3D visualisation of buildings as well as many other advantages, such as automation of essential tasks including path finding, consistency detection, and accessibility verification. However, there is no mathematical model of building information models that can be used to describe and compute these functions. In this paper, we show how graph theory can be utilised as a representation language for building information models and the proposed security-related functions. This graph-theoretic representation allows building information models to be represented mathematically and computations to be performed using these functions.
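As a small illustration of the graph-theoretic idea (not the paper's formalism; the spaces, doorways and the reachability function below are invented for the example), a building can be encoded as a graph whose vertices are spaces and whose edges are doorways, after which path finding and accessibility verification reduce to standard graph searches.

from collections import deque

# Hypothetical building topology: vertices are spaces, edges are doorways.
building = {
    "lobby": ["corridor"],
    "corridor": ["lobby", "office", "server_room"],
    "office": ["corridor"],
    "server_room": ["corridor"],
}

def reachable(graph, start, target):
    # Breadth-first search: is there a physical path from 'start' to 'target'?
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable(building, "lobby", "server_room"))  # True: a path exists that access control may need to restrict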
Abstract:
In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to a predetermined or exogenous transition variable. An LM test is derived to test the constancy of correlations, and LM and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make the computations feasible. An empirical example based on daily return series of five frequently traded stocks in the S&P 500 stock index completes the paper.
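A common way to write such a smooth-transition conditional correlation structure is as a convex combination of two constant correlation matrices governed by a logistic function of the transition variable s_t (shown here only to illustrate the idea; the paper's exact parameterisation may differ):

R_t = \bigl(1 - G(s_t)\bigr)\,R_{(1)} + G(s_t)\,R_{(2)}, \qquad G(s_t) = \frac{1}{1 + e^{-\gamma (s_t - c)}}, \quad \gamma > 0,

so the conditional correlations move smoothly between the two extreme states R_{(1)} and R_{(2)} as s_t varies, and constancy of correlations corresponds to R_{(1)} = R_{(2)}.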
Abstract:
This work describes recent extensions to the GPFlow scientific workflow system in development at MQUTeR (www.mquter.qut.edu.au), which facilitate interactive experimentation, automatic lifting of computations from single-case to collection-oriented computation, and automatic correlation and synthesis of collections. A GPFlow workflow presents as an acyclic data flow graph, yet provides powerful iteration and collection-formation capabilities.
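The "lifting" of a computation from the single case to a collection can be pictured with an ordinary functional map (a generic sketch, not GPFlow's actual mechanism or API):

def lift(f):
    # Turn a function on a single item into a function on a whole collection.
    return lambda items: [f(x) for x in items]

normalise = lambda x: x / 100.0          # hypothetical single-case computation
normalise_all = lift(normalise)          # collection-oriented version
print(normalise_all([5, 50, 95]))        # [0.05, 0.5, 0.95]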
Abstract:
The present study investigated the behavioral and neuropsychological characteristics of decision-making behavior during a gambling task as well as how these characteristics may relate to the Somatic Marker Hypothesis and the Frequency of Gain model. The applicability to intertemporal choice was also discussed. Patterns of card selection during a computerized interpretation of the Iowa Gambling Task were assessed for 10 men and 10 women. Steady State Topography was employed to assess cortical processing throughout this task. Results supported the hypothesis that patterns of card selection were in line with both theories. As hypothesized, these 2 patterns of card selection were also associated with distinct patterns of cortical activity, suggesting that intertemporal choice may involve the recruitment of right dorsolateral prefrontal cortex for somatic labeling, left fusiform gyrus for object representations, and the left dorsolateral prefrontal cortex for an analysis of the associated frequency of gain or loss. It is suggested that processes contributing to intertemporal choice may include inhibition of negatively valenced options, guiding decisions away from those options, as well as computations favoring frequently rewarded options.
Abstract:
In the Bayesian framework a standard approach to model criticism is to compare some function of the observed data to a reference predictive distribution. The result of the comparison can be summarized in the form of a p-value, and it is well known that computation of some kinds of Bayesian predictive p-values can be challenging. The use of regression adjustment approximate Bayesian computation (ABC) methods is explored for this task. Two problems are considered. The first is the calibration of posterior predictive p-values so that they are uniformly distributed under some reference distribution for the data. Computation is difficult here because the calibration process requires repeated approximation of the posterior for different data sets under the reference distribution. The second problem is the approximation of distributions of prior predictive p-values for the purpose of choosing weakly informative priors when the model-checking statistic is expensive to compute. Here the computation is difficult because of the need to sample repeatedly from a prior predictive distribution for different values of a prior hyperparameter. In both problems we argue that high accuracy is not required in the computations, which makes fast approximations such as regression adjustment ABC very useful. We illustrate our methods with several examples.
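The regression adjustment step that keeps these repeated approximations cheap can be sketched as follows (a minimal Beaumont-style linear adjustment on a toy model; the variable names and the toy data-generating process are illustrative only):

import numpy as np

def abc_regression_adjust(theta_sim, s_sim, s_obs):
    # Linear regression adjustment: theta_adj_i = theta_i - b * (s_i - s_obs).
    b, a = np.polyfit(s_sim, theta_sim, 1)   # fit theta ~ a + b * s
    return theta_sim - b * (s_sim - s_obs)

# Toy illustration: the parameter is a normal mean, the summary statistic a noisy copy of it.
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, 2000)            # draws from the prior
s = theta + rng.normal(0.0, 0.2, 2000)        # summary statistics of simulated data
theta_adjusted = abc_regression_adjust(theta, s, s_obs=0.5)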
Abstract:
The purpose of this article is to assess the viability of blanket sustainability policies, such as Building Rating Systems, in achieving energy efficiency in university campus buildings. We analyzed the energy consumption trends of 10 LEED-certified buildings and 14 non-LEED-certified buildings at a major university in the US. Energy Use Intensity (EUI) of the LEED buildings was significantly higher (EUI_LEED = 331.20 kBtu/sf/yr) than that of the non-LEED buildings (EUI_non-LEED = 222.70 kBtu/sf/yr); however, the median EUI values were comparable (EUI_LEED = 172.64 and EUI_non-LEED = 178.16). Because the distributions of EUI values were non-symmetrical in this dataset, both measures can be used for energy comparisons; this was also evident when EUI computations excluded outliers (EUI_LEED = 171.82 and EUI_non-LEED = 195.41). Additional analyses were conducted to further explore the impact of LEED certification on the energy performance of university campus buildings. No statistically significant differences were observed between certified and non-certified buildings across a range of robust comparison criteria. These findings were then leveraged to devise strategies for sustainable energy policies for university campus buildings and to identify potential issues with portfolio-level building energy performance comparisons.
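The sensitivity of the mean, but not the median, to a few extreme buildings is easy to see with invented numbers (the values below are hypothetical and are not the study's data):

import numpy as np

eui = np.array([150.0, 160.0, 170.0, 175.0, 180.0, 900.0])  # hypothetical EUI values, kBtu/sf/yr
print(np.mean(eui))     # ~289.2, pulled up strongly by the single outlier
print(np.median(eui))   # 172.5, essentially unaffected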
Abstract:
Guaranteeing Quality of Service (QoS) with minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is done through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous MapReduce placement optimization and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem. In this new approach, the heterogeneous MapReduce placement optimization problem is transformed into a constrained combinatorial optimization problem and is solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%-44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%-36.2% lower than that using the heterogeneous MapReduce placement approach not considering the spare resources from the existing MapReduce computations. The experimental results have also demonstrated the good scalability of this new approach.
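In generic form (an illustrative formulation, not necessarily the one used in the paper), such a placement problem can be written as a constrained assignment over binary variables x_{ij} that place MapReduce computation i on machine configuration j:

\min_{x}\; \sum_{i}\sum_{j} c_{ij}\,x_{ij} \quad \text{s.t.} \quad \sum_{j} x_{ij} = 1 \;\;\forall i, \qquad \sum_{i} r_{i}\,x_{ij} \le C_{j} \;\;\forall j, \qquad x_{ij} \in \{0,1\},

where c_{ij} is the running cost, r_i the resource demand, and C_j the capacity (including any spare resources) of configuration j; a constructive algorithm builds a feasible assignment incrementally rather than searching the space exhaustively.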
Abstract:
UVPES studies and ab initio and DFT computations have been done on the benzene...ICl complex; electron spectral data and computed orbital energies show that donor orbitals are stabilized and acceptor orbitals are destabilized due to complexation. Calculations predict an oblique structure for the complex in which the interacting site is a C=C bond center in the donor and the iodine atom in the acceptor, in full agreement with earlier experimental reports. BSSE-corrected binding energies closely match the reported enthalpy of complexation, and the NBO analysis clearly reveals the involvement of the pi orbital of benzene and the sigma* orbital of ICl in the complex.
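The BSSE correction referred to is normally the Boys-Bernardi counterpoise scheme, in which each monomer energy is recomputed in the full dimer basis (the standard textbook expression is shown below; it is not a detail quoted from the paper):

\Delta E^{\mathrm{CP}}_{\mathrm{bind}} = E^{AB}_{AB} - E^{AB}_{A} - E^{AB}_{B},

where the subscript denotes the fragment evaluated, the superscript the basis set (the full complex basis AB in all three terms), and all energies are computed at the geometry of the complex.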
Abstract:
Proteins are polymerized by cyclic machines called ribosomes, which use their messenger RNA (mRNA) track also as the corresponding template; the process is called translation. We explore, in depth and detail, the stochastic nature of translation. We compute various distributions associated with the translation process; one of them, namely the dwell time distribution, has been measured in recent single-ribosome experiments. The form of the distribution that best fits our simulation data is consistent with that extracted from the experimental data. For our computations, we use a model that captures both the mechanochemistry of each individual ribosome and their steric interactions. We also demonstrate the effects of the sequence inhomogeneities of real genes on the fluctuations and noise in translation. Finally, inspired by recent advances in the experimental techniques for manipulating single ribosomes, we make theoretical predictions on the force-velocity relation for individual ribosomes. In principle, all our predictions can be tested by carrying out in vitro experiments.
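For orientation, if the dwell of a ribosome at a codon were dominated by just two sequential exponential steps with rates \omega_1 and \omega_2, the dwell-time distribution would take the textbook hypoexponential form (a generic illustration, not the multi-step distribution actually derived in the paper):

f(t) = \frac{\omega_1 \omega_2}{\omega_1 - \omega_2}\left(e^{-\omega_2 t} - e^{-\omega_1 t}\right), \qquad \omega_1 \neq \omega_2,

which vanishes at t = 0, rises to a single peak, and then decays exponentially.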
Abstract:
This article develops a simple analytical expression that relates ion axial secular frequency to field aberration in ion trap mass spectrometers. Hexapole and octopole aberrations have been considered in the present computations. The equation of motion of the ions in a pseudopotential well with these superpositions has the form of a Duffing-like equation and a perturbation method has been used to obtain the expression for ion secular frequency as a function of field imperfections. The expression indicates that the frequency shift is sensitive to the sign of the octopole superposition and insensitive to the sign of the hexapole superposition. Further, for weak multipole superposition of the same magnitude, octopole superposition causes a larger frequency shift in comparison to hexapole superposition.
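The structure of this result can be seen from the classical weakly nonlinear oscillator: with the hexapole and octopole superpositions contributing quadratic and cubic restoring terms, the axial motion has the Duffing-like form below, and standard perturbation theory (as in Landau and Lifshitz, Mechanics) gives an amplitude-dependent frequency shift that is quadratic in the quadratic-term coefficient but linear in the cubic-term coefficient (generic symbols, not the paper's notation):

\ddot{u} + \omega_z^2\,u + \alpha u^2 + \beta u^3 = 0, \qquad \omega \approx \omega_z + \left(\frac{3\beta}{8\omega_z} - \frac{5\alpha^2}{12\omega_z^3}\right) A^2,

where A is the oscillation amplitude; the shift changes sign with the sign of \beta (octopole-like term) but not with the sign of \alpha (hexapole-like term), consistent with the sensitivity described in the abstract.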
Abstract:
Lasers are very efficient in heating localized regions and hence find wide application in surface treatment processes. The surface of a material can be selectively modified to give superior wear and corrosion resistance. In laser surface-melting and welding problems, the high temperature gradient prevailing at the free surface induces a surface-tension gradient, which is the dominant driving force for convection (known as thermo-capillary or Marangoni convection). It has been reported that surface-tension-driven convection plays a dominant role in determining the melt pool shape. In most of the earlier works on laser-melting and related problems, the finite difference method (FDM) has been used to solve the Navier–Stokes equations [1]. Since the Reynolds number is quite high in these cases, upwinding has been used. Though upwinding gives physically realistic solutions even on a coarse grid, the results are inaccurate. McLay and Carey have solved the thermo-capillary flow in welding problems by an implicit finite element method [2]. They used the conventional Galerkin finite element method (FEM), which requires that the pressure be interpolated one order lower than the velocity (mixed interpolation). This restricts the choice of elements to certain higher-order elements, which need numerical integration for the evaluation of element matrices. The implicit algorithm yields a system of nonlinear, unsymmetric equations which are not positive definite, so computations would be possible only on large mainframe computers. Sluzalec [3] has modeled the pulsed laser-melting problem by an explicit finite element method. He used the six-node triangular element with mixed interpolation. Since he considered only the buoyancy-induced flow, the velocity values are small. In the present work, an equal-order explicit FEM is used to compute the thermo-capillary flow in the laser surface-melting problem. As this method permits equal-order interpolation, there is no restriction on the choice of elements; even linear elements such as the three-node triangle can be used. As the governing equations are solved in a sequential manner, the computer memory requirement is lower. The finite element formulation is discussed in this paper along with typical numerical results.
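The thermocapillary driving force enters such formulations through the tangential stress balance at the free surface, which for a flat surface with tangential coordinate x and normal coordinate y reads (a generic statement of the Marangoni condition, not the specific boundary treatment of this paper):

\mu \left.\frac{\partial u}{\partial y}\right|_{\mathrm{surface}} = \frac{d\sigma}{dT}\,\frac{\partial T}{\partial x},

so a surface temperature gradient imposes a shear stress that drives the melt from hotter to colder regions when d\sigma/dT < 0.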
Abstract:
Turbulent mixed convection flow and heat transfer in a shallow enclosure, with and without partitions and with a series of block-like heat-generating components, is studied numerically for a range of Reynolds and Grashof numbers using a time-dependent formulation. The flow and temperature distributions are taken to be two-dimensional. Regions with the same velocity and temperature distributions can be identified by assuming repeated placement of the blocks and of the fluid entry and exit openings at regular distances, neglecting end-wall effects. One half of such a module is chosen as the computational domain, taking into account the symmetry about the vertical centreline. The mixed convection inlet velocity is treated as the sum of forced and natural convection components, with the individual components delineated based on the pressure drop across the enclosure. The Reynolds number is based on the forced convection velocity. Turbulence computations are performed using the standard k–ε model and the Launder–Sharma low-Reynolds-number k–ε model. The results show that higher Reynolds numbers tend to create a recirculation region of increasing strength in the core region and that the effect of buoyancy becomes insignificant beyond a Reynolds number of typically 5×10^5. The Euler number in turbulent flows is higher by about 30 per cent than that in the laminar regime. The dimensionless inlet velocity in pure natural convection varies as Gr^(1/3). Results are also presented for a number of quantities of interest such as the flow and temperature distributions, Nusselt number, pressure drop and the maximum dimensionless temperature in the block, along with correlations.
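For reference, the governing dimensionless groups and the reported natural-convection scaling can be written as (with L a characteristic length, V a characteristic velocity, and the other symbols having their usual meanings; the paper's specific choices of scales are not restated here):

\mathrm{Re} = \frac{\rho V L}{\mu}, \qquad \mathrm{Gr} = \frac{g\,\beta\,\Delta T\,L^{3}}{\nu^{2}}, \qquad V_{\mathrm{in}} \propto \mathrm{Gr}^{1/3} \;\; \text{(pure natural convection)}.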
Abstract:
A unit cube in k dimensions (a k-cube) is defined as the Cartesian product R_1 × R_2 × ... × R_k, where each R_i is a closed interval on the real line of the form [a_i, a_i + 1]. The cubicity of G, denoted cub(G), is the minimum k such that G is the intersection graph of a collection of k-cubes. Many NP-complete graph problems can be solved efficiently, or have good approximation ratios, in graphs of low cubicity; in most of these cases the first step is to obtain a low-dimensional cube representation of the given graph. It is known that for a graph G, cub(G) ≤ ⌊2n/3⌋. Recently it has been shown that for a graph G, cub(G) ≤ 4(Δ + 1) ln n, where n and Δ are the number of vertices and the maximum degree of G, respectively. In this paper, we show that for a bipartite graph G = (A ∪ B, E) with |A| = n_1, |B| = n_2, n_1 ≤ n_2, and Δ' = min{Δ_A, Δ_B}, where Δ_A = max_{a ∈ A} d(a) and Δ_B = max_{b ∈ B} d(b), with d(a) and d(b) the degrees of a and b in G, respectively, cub(G) ≤ 2(Δ' + 2)⌈ln n_2⌉. We also give an efficient randomized algorithm to construct the cube representation of G in 3(Δ' + 2)⌈ln n_2⌉ dimensions. The reader may note that in general Δ' can be much smaller than Δ.
Abstract:
An analytical method has been proposed to optimize the small-signal optical gain of CO2-N2 gasdynamic lasers (GDL) employing two-dimensional (2D) wedge nozzles. Following our earlier work, the equations governing the steady, inviscid, quasi-one-dimensional flow in the wedge nozzle of the GDL are reduced to a universal form so that their solutions depend on a single unifying parameter. These equations are solved numerically to obtain similar solutions for the various flow quantities, which are subsequently used to optimize the small-signal gain. The corresponding optimum values, such as reservoir pressure, reservoir temperature and 2D nozzle area ratio, have also been predicted and graphed for a wide range of laser gas compositions, with either H2O or He as the catalyst. The large number of graphs presented may be used to obtain the optimum values of the small-signal gain for a wide range of laser gas compositions without further computations.