67 results for Symbolic computation and algebraic computation


Relevance:

40.00%

Publisher:

Abstract:

As the building industry moves towards low impact buildings, research attention is being drawn to the reduction of carbon dioxide emissions and waste. From design and construction through operation to demolition, a wide range of building materials is used across the building lifecycle, involving significant energy consumption and waste generation. Building Information Modelling (BIM) is emerging as a tool that can support holistic design decision-making for reducing embodied carbon and waste across the building lifecycle. This study aims to establish a framework for assessing embodied carbon and waste underpinned by BIM technology. On the basis of a review of current research, the framework comprises a module for embodied carbon computation, a module for waste estimation, a knowledge-base of construction and demolition methods, a repository of building component information, and an inventory of construction materials' energy and carbon. Through both static 3D model visualisation and dynamic modelling supported by the framework, embodied energy (carbon), waste and associated costs can be analysed within the boundaries of cradle-to-gate, construction, operation, and demolition. The proposed holistic modelling framework makes it possible to analyse embodied carbon and waste, including associated costs, from different building lifecycle perspectives. It brings together existing, segmented embodied carbon and waste estimation into a unified model, so that interactions between parameters across the building lifecycle phases can be better understood, improving design-decision support for optimal low impact building development. The framework is expected to be developed further and tested on industrial projects in the near future.
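The core computation the framework's carbon module performs can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the material names, carbon factors, and component list below are hypothetical placeholders standing in for the framework's materials inventory and component repository.

```python
# Illustrative sketch (not the paper's framework): totalling cradle-to-gate
# embodied carbon from a component repository and a material carbon
# inventory. All names and factor values here are hypothetical.

# carbon inventory: kgCO2e per kg of material (illustrative values)
CARBON_FACTORS = {"concrete": 0.13, "steel": 1.46, "timber": 0.31}

# component repository: (component, material, mass in kg)
COMPONENTS = [
    ("slab", "concrete", 12000.0),
    ("frame", "steel", 3000.0),
    ("cladding", "timber", 800.0),
]

def embodied_carbon(components, factors):
    """Sum embodied carbon (kgCO2e) over all building components."""
    return sum(mass * factors[material] for _, material, mass in components)

total = embodied_carbon(COMPONENTS, CARBON_FACTORS)
```

A real BIM-backed module would draw quantities from the model and factors from a lifecycle inventory database rather than hard-coded dictionaries, but the aggregation step is essentially this sum.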

Relevance:

40.00%

Publisher:

Abstract:

DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that, by iterating on appropriately modified model-based problems, the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding-horizon optimal control computation in nonlinear model predictive control.
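The receding-horizon idea that DISOPE is applied to can be sketched in its simplest form. This is not the DISOPE algorithm itself, only a minimal receding-horizon loop for a scalar linear model with a one-step quadratic cost; the model parameters and weights are illustrative.

```python
# Minimal receding-horizon sketch (not DISOPE itself): at each step a
# one-step quadratic cost q*x_next**2 + r*u**2 is minimised for the
# scalar linear model x_next = a*x + b*u, only the first control move
# is applied, and the horizon recedes.
a, b = 0.9, 0.5          # model parameters (illustrative)
q, r = 1.0, 0.1          # state and control weights (illustrative)

def one_step_optimal_u(x):
    # argmin_u q*(a*x + b*u)**2 + r*u**2: set the derivative to zero
    return -q * a * b * x / (q * b * b + r)

x = 5.0                  # initial state
for _ in range(20):
    u = one_step_optimal_u(x)   # plan over the (length-1) horizon
    x = a * x + b * u           # apply the first move only

# the closed loop drives x towards the origin
```

In a DISOPE setting the model used inside `one_step_optimal_u` would deliberately differ from the plant that updates `x`, and the iteration on modified model-based problems would correct for that mismatch.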

Relevance:

40.00%

Publisher:

Abstract:

This paper investigates the impact of price consciousness, perceived risk, and ethical obligation on attitude and intention towards counterfeit products. Data were collected from a sample of 200 respondents via an online questionnaire. A conceptual model was derived and tested via structural equation modelling in the contexts of symbolic and experiential counterfeit products. Findings show differences in the factors (and the weight thereof) impacting attitude and purchase intention in the two product contexts. Specifically, ethical obligation and perceived risk are found to be significant predictors of attitude towards both symbolic and experiential counterfeit products, while price consciousness is found to predict only attitude towards experiential products, and not purchase intention in either counterfeit product context.

Relevance:

40.00%

Publisher:

Abstract:

Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of it has focussed on the important practical case where the data consist of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches, particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently explore the parameter space, combined with the exchange algorithm (Murray et al., 2006) for avoiding the calculation of the intractable normalising constant (a proof showing that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior due to the use of MCMC to simulate from the latent graphical model, since this cannot in general be done exactly. The supplementary appendix also describes the nature of the resulting approximation.
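The second methodology, rejection-style approximate Bayesian computation, is easy to sketch. The toy model below (estimating a Gaussian mean) is illustrative only and much simpler than the paper's Ising and exponential random graph applications; prior range, tolerance, and sample sizes are arbitrary choices.

```python
# Rejection-ABC sketch in the spirit of Pritchard et al. (1999), on a
# toy Gaussian-mean model (not the paper's Ising/ERGM setting): draw a
# parameter from the prior, simulate data under it, and accept the draw
# when the simulated summary statistic is close to the observed one.
import random

random.seed(0)
observed_mean = 1.0          # summary statistic of the "observed" data
n, eps = 50, 0.1             # sample size and tolerance (illustrative)

def simulate_mean(theta):
    """Simulate n observations under theta and return their mean."""
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

accepted = []
while len(accepted) < 200:
    theta = random.uniform(-5.0, 5.0)            # draw from the prior
    if abs(simulate_mean(theta) - observed_mean) < eps:
        accepted.append(theta)                   # keep close draws

posterior_mean = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior; shrinking `eps` tightens the approximation at the cost of a lower acceptance rate, which is the trade-off any ABC application must manage.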

Relevance:

40.00%

Publisher:

Abstract:

The internal variability and coupling between the stratosphere and troposphere in CCMVal‐2 chemistry‐climate models are evaluated through analysis of the annular mode patterns of variability. Computation of the annular modes in long data sets with secular trends requires refinement of the standard definition of the annular mode, and a more robust procedure that allows for slowly varying trends is established and verified. The spatial and temporal structure of the models’ annular modes is then compared with that of reanalyses. As a whole, the models capture the key features of observed intraseasonal variability, including the sharp vertical gradients in structure between stratosphere and troposphere, the asymmetries in the seasonal cycle between the Northern and Southern hemispheres, and the coupling between the polar stratospheric vortices and tropospheric midlatitude jets. It is also found that the annular mode variability changes little in time throughout simulations of the 21st century. There are, however, both common biases and significant differences in performance in the models. In the troposphere, the annular mode in models is generally too persistent, particularly in the Southern Hemisphere summer, a bias similar to that found in CMIP3 coupled climate models. In the stratosphere, the periods of peak variance and coupling with the troposphere are delayed by about a month in both hemispheres. The relationship between increased variability of the stratosphere and increased persistence in the troposphere suggests that some tropospheric biases may be related to stratospheric biases and that a well‐simulated stratosphere can improve simulation of tropospheric intraseasonal variability.
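The standard ingredients of an annular-mode computation can be sketched with synthetic data. This is not the paper's refined, trend-robust definition, only the conventional recipe it starts from: detrend each grid point, then take the leading EOF of the anomalies; all fields and parameters below are synthetic.

```python
# Sketch of the standard annular-mode ingredients (not the paper's
# refined procedure): remove a slowly varying trend at each grid point,
# then take the leading EOF of the anomalies via an SVD.
import numpy as np

rng = np.random.default_rng(0)
ntime, nspace = 500, 40
t = np.linspace(0.0, 1.0, ntime)

# synthetic field: secular trend + one dominant spatial mode + noise
pattern = np.sin(np.linspace(0.0, np.pi, nspace))
field = (np.outer(t, np.ones(nspace)) * 2.0            # secular trend
         + np.outer(rng.standard_normal(ntime), pattern)
         + 0.1 * rng.standard_normal((ntime, nspace)))

# remove a low-order polynomial trend at each grid point
coeffs = np.polynomial.polynomial.polyfit(t, field, 2)
anom = field - np.polynomial.polynomial.polyval(t, coeffs).T

# leading EOF = first right singular vector of the anomaly matrix
_, _, vt = np.linalg.svd(anom, full_matrices=False)
eof1 = vt[0]
```

With long simulations containing secular trends, the choice of detrending step is exactly where the naive recipe breaks down, which is what motivates the more robust definition the paper establishes.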

Relevance:

40.00%

Publisher:

Abstract:

We explore the influence of the choice of attenuation factor on Katz centrality indices for evolving communication networks. For given snapshots of a network observed over a period of time, recently developed communicability indices aim to identify the best broadcasters and listeners in the network. In this article, we look into the sensitivity of communicability indices to the attenuation factor constraint, in relation to the spectral radius (the largest eigenvalue) of the network at any point in time, and its computation in the case of large networks. We propose relaxed communicability measures in which the spectral radius bound on the attenuation factor is relaxed and the adjacency matrix is normalised in order to maintain the convergence of the measure. Using a vitality-based measure of both the standard and relaxed communicability indices, we look at ways of establishing the most important individuals for broadcasting and receiving messages in community bridging roles. We illustrate our findings with two examples of real-life networks: the MIT Reality Mining data set of daily communications between 106 individuals during one year, and a UK Twitter mentions network of direct messages on Twitter between 12.4k individuals during one week.
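The attenuation-factor constraint and the normalisation-based relaxation can be shown on a single small snapshot. This is a minimal sketch of the general idea, not the article's communicability measures; the toy graph is arbitrary.

```python
# Sketch of Katz centrality and the normalisation idea: the attenuation
# factor must stay below 1/spectral_radius for the Katz series to
# converge; dividing the adjacency matrix by its spectral radius
# relaxes that constraint to alpha < 1.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy undirected snapshot

rho = max(abs(np.linalg.eigvalsh(A)))       # spectral radius
alpha = 0.9 / rho                           # safely below the bound

# Katz centrality: x = (I - alpha*A)^{-1} * 1
n = A.shape[0]
katz = np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

# relaxed variant on the normalised adjacency matrix, any alpha < 1
katz_relaxed = np.linalg.solve(np.eye(n) - 0.9 * (A / rho), np.ones(n))
```

On a single snapshot the two variants coincide (here `0.9 * A / rho` equals `alpha * A`); the point of normalising is that across an evolving sequence of snapshots, whose spectral radii differ, one fixed `alpha < 1` remains valid for every snapshot.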

Relevance:

40.00%

Publisher:

Abstract:

The complete details of our calculation of the NLO QCD corrections to heavy flavor photo- and hadroproduction with longitudinally polarized initial states are presented. The main motivation for investigating these processes is the determination of the polarized gluon density at the COMPASS and RHIC experiments, respectively, in the near future. All methods used in the computation are extensively documented, providing a self-contained introduction to this type of calculation. Some of the tools employed may also be of general interest, e.g., the series expansion of hypergeometric functions. The relevant parton-level results are collected and plotted in the form of scaling functions. However, the simplification of the gluon-gluon virtual contributions has not yet been completed; thus NLO phenomenological predictions are given only for the case of photoproduction. The theoretical uncertainties of these predictions, in particular with respect to the heavy quark mass, are carefully considered. It is also shown that transverse momentum cuts can considerably enhance the measured production asymmetries. Finally, unpolarized heavy quark production is reviewed in order to derive conditions for a successful interpretation of future spin-dependent experimental data.
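The series expansion of hypergeometric functions mentioned as a tool of general interest can be illustrated with the Gauss series itself. This is a generic truncated-series sketch, not the expansion machinery used in the calculation.

```python
# Generic truncated Gauss series for 2F1(a, b; c; z), convergent for
# |z| < 1, built up via the Pochhammer-symbol recurrence (illustrative,
# not the paper's expansion machinery).
import math

def hyp2f1_series(a, b, c, z, nterms=200):
    """sum_n (a)_n (b)_n / (c)_n * z^n / n!, truncated at nterms."""
    term, total = 1.0, 1.0                     # n = 0 term is 1
    for n in range(nterms):
        # term_{n+1} = term_n * (a+n)(b+n) / ((c+n)(n+1)) * z
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

# sanity check against the closed form 2F1(1, 1; 2; z) = -ln(1 - z)/z
z = 0.5
approx = hyp2f1_series(1.0, 1.0, 2.0, z)
exact = -math.log(1.0 - z) / z
```

The NLO computations in question need expansions around the dimensional regularisation parameter rather than plain truncation, but the Pochhammer-symbol structure of the series is the common starting point.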

Relevance:

40.00%

Publisher:

Abstract:

Explanations of the marked individual differences in elementary school mathematical achievement and mathematical learning disability (MLD or dyscalculia) have involved domain-general factors (working memory, reasoning, processing speed and oral language) and numerical factors that include single-digit processing efficiency and multi-digit skills such as number system knowledge and estimation. This study of third graders (N = 258) finds that both domain-general and numerical factors contribute independently to explaining variation in three significant arithmetic skills: basic calculation fluency, written multi-digit computation, and arithmetic word problems. Estimation accuracy and number system knowledge show the strongest associations with every skill, and their contributions are independent both of each other and of the other factors. Different domain-general factors independently account for variation in each skill. Numeral comparison, a single-digit processing skill, uniquely accounts for variation in basic calculation. Subsamples of children with MLD (at or below the 10th percentile, n = 29) are compared with low achievement (LA, 11th to 25th percentiles, n = 42) and typical achievement (above the 25th percentile, n = 187). Examination of these groups, and of subsets with persistent difficulties, supports a multiple-deficits view of number difficulties: most children with number difficulties exhibit deficits in both domain-general and numerical factors. The only factor deficit common to all children with persistent MLD is in multi-digit skills. These findings indicate that many factors matter, but multi-digit skills matter most, in third grade mathematical achievement.
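The "contributes independently" logic is that of hierarchical regression: a factor's unique contribution is the gain in R² when it is added to a model already containing the other factors. The sketch below uses synthetic data with hypothetical factor names, not the study's data or effect sizes.

```python
# Hierarchical-regression sketch of "unique contribution" on synthetic
# data (illustrative only; variable names and effects are hypothetical,
# not the study's): the R^2 increase when a predictor joins the model
# measures its independent contribution.
import numpy as np

rng = np.random.default_rng(1)
n = 258
working_memory = rng.standard_normal(n)      # domain-general factor
estimation = rng.standard_normal(n)          # numerical factor
score = 0.3 * working_memory + 0.6 * estimation + rng.standard_normal(n)

def r_squared(predictors, y):
    """OLS R^2 for a model with an intercept and the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_reduced = r_squared([working_memory], score)
r2_full = r_squared([working_memory, estimation], score)
unique_estimation = r2_full - r2_reduced     # R^2 change
```

With correlated predictors, as in real achievement data, the R² change shrinks relative to the zero-order association, which is exactly why the study's "independent contribution" claims are stronger than simple correlations.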

Relevance:

40.00%

Publisher:

Abstract:

We present a dynamic causal model that can explain context-dependent changes in neural responses, in the rat barrel cortex, to electrical whisker stimulation at different frequencies. Neural responses were measured in terms of local field potentials. These were converted into current source density (CSD) data, and the time series of the CSD sink was extracted to provide a time series response train. The model structure consists of three layers (approximating the responses from the brain stem to the thalamus and then the barrel cortex), and the latter two layers contain nonlinearly coupled modules of linear second-order dynamic systems. The interaction of these modules forms a nonlinear regulatory system that determines the temporal structure of the neural response amplitude for the thalamic and cortical layers. The model is based on the measured population dynamics of neurons rather than the dynamics of a single neuron and was evaluated against CSD data from experiments with varying stimulation frequency (1–40 Hz), random pulse trains, and awake and anesthetized animals. The model parameters obtained by optimization for different physiological conditions (anesthetized or awake) were significantly different. Following Friston, Mechelli, Turner, and Price (2000), this work is part of a formal mathematical system currently being developed (Zheng et al., 2005) that links stimulation to the blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) signal through neural activity and hemodynamic variables. The importance of the model described here is that it can be used to invert the hemodynamic measurements of changes in blood flow to estimate the underlying neural activity.
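A single one of the linear second-order modules the model couples can be sketched as a damped oscillator driven by a pulse train. The parameters below are illustrative placeholders, not the paper's fitted values, and the forward-Euler integration is the simplest possible scheme.

```python
# Sketch of one linear second-order module of the kind the model
# nonlinearly couples (parameters illustrative, not the fitted values):
# a damped oscillator y'' + 2*zeta*w*y' + w^2*y = u(t), driven by a
# pulse train and integrated with forward Euler.
import math

w = 2.0 * math.pi * 5.0      # natural frequency (5 Hz), illustrative
zeta = 0.5                   # damping ratio, illustrative
dt, T = 1e-4, 1.0            # time step and duration (s)
freq = 10.0                  # stimulation frequency (Hz)

y, v = 0.0, 0.0              # position and velocity states
peak = 0.0
for i in range(int(T / dt)):
    t = i * dt
    # 1 ms input pulse at each stimulation onset
    u = 1.0 if (t % (1.0 / freq)) < 1e-3 else 0.0
    accel = u - 2.0 * zeta * w * v - w * w * y
    v += dt * accel
    y += dt * v
    peak = max(peak, abs(y))
```

In the full model, the output amplitude of such modules is modulated nonlinearly by the other modules' states, which is what produces the frequency-dependent response structure the paper fits.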

Relevance:

40.00%

Publisher:

Abstract:

In the present paper we study the approximation of functions with bounded mixed derivatives by sparse tensor product polynomials in positive-order tensor product Sobolev spaces. We introduce a new sparse polynomial approximation operator which exhibits optimal convergence properties in L2 and tensorized H1 simultaneously on a standard k-dimensional cube. In the special case k = 2 the suggested approximation operator is also optimal in L2 and tensorized H1 (without essential boundary conditions). This makes it possible to construct an optimal sparse p-version FEM with sparse piecewise continuous polynomial splines, reducing the number of unknowns from the O(p^2) needed for the full tensor product computation to the smaller count [formula not rendered] required for the suggested sparse technique, while preserving the same optimal convergence rate in terms of p. We apply this result to an elliptic differential equation and an elliptic integral equation with random loading and compute the covariances of the solutions with [formula not rendered] unknowns. Several numerical examples support the theoretical estimates.
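The kind of reduction a sparse tensor-product construction achieves can be illustrated by counting basis indices. The hyperbolic-cross index set below is a standard sparse construction used purely for illustration; the paper's operator may use a different index set.

```python
# Counting sketch for sparse versus full tensor products in two
# variables (hyperbolic-cross index set, illustrative only; the
# paper's operator may use a different set): the full grid takes all
# degree pairs (i, j) with i, j <= p, while the sparse set keeps only
# pairs whose degree product is bounded.
p = 64

full = (p + 1) ** 2                            # all pairs (i, j)
sparse = sum(1 for i in range(p + 1)
               for j in range(p + 1)
               if (i + 1) * (j + 1) <= p + 1)  # hyperbolic cross
```

The full count grows like p^2 while the hyperbolic-cross count grows like p*log(p), which is the shape of saving that makes sparse p-version FEM and the covariance computation tractable.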

Relevance:

40.00%

Publisher:

Abstract:

Mathematics in Defence 2011 Abstract. We review transreal arithmetic and present transcomplex arithmetic. These arithmetics have no exceptions. This leads to incremental improvements in computer hardware and software. For example, the range of real numbers encoded by floating-point bits is doubled when all of the Not-a-Number (NaN) states in IEEE 754 arithmetic are replaced with real numbers. The task of programming such systems is simplified and made safer by discarding the unordered relational operator, leaving only the operators less-than, equal-to, and greater-than. The advantages of using a transarithmetic in a computation, or transcomputation as we prefer to call it, may be had by making small changes to compilers and processor designs. However, radical change is possible by exploiting the reliability of transcomputations to make pipelined dataflow machines with a large number of cores. Our initial designs are for a machine with of the order of one million cores. Such a machine can complete the execution of multiple in-line programs on each clock tick.
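The unordered relational behaviour the abstract proposes to discard is easy to demonstrate: under IEEE 754, every ordered comparison involving NaN is false, so NaN is "unordered" with respect to every value, including itself. Python floats follow IEEE 754, so a few lines show it.

```python
# IEEE 754 makes NaN "unordered": every ordered comparison with NaN
# evaluates to false, which is the relational anomaly transreal
# arithmetic removes by giving every value a place in the order.
nan = float("nan")

print(nan < 1.0)    # False
print(nan > 1.0)    # False
print(nan <= nan)   # False
print(nan == nan)   # False: NaN is not even equal to itself
print(nan != nan)   # True

# consequence: code that sorts or compares data containing NaN must
# special-case it, which is the complexity the abstract argues against
```

Because all three of `<`, `==`, and `>` can be simultaneously false for a NaN operand, a fourth "unordered" relation is implicit in IEEE 754; a total order over all values is what lets a transarithmetic keep only less-than, equal-to, and greater-than.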