964 results for ENERGY-MODEL
Abstract:
A potential energy model is developed for turbulent entrainment in the absence of mean shear in a linearly stratified fluid. Relations are obtained between the entrainment distance D and the time t, and between the dimensionless entrainment rate E and the local Richardson number. An experiment was performed to test the model. The experimental results are in good agreement with the model, in which the dimensionless entrainment distance is given by D̄ = A_i S̄^(-1/4) f̄^(1/2) t̄^(1/8), where A_i is a proportionality coefficient, S̄ is the dimensionless stroke, f̄ is the dimensionless frequency of the grid oscillation, and t̄ is the dimensionless time.
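The scaling relation above is easy to evaluate numerically. A minimal sketch, with an illustrative coefficient A_i and placeholder input values (none taken from the experiment):

```python
# Sketch of the entrainment scaling D-bar = A_i * S^(-1/4) * f^(1/2) * t^(1/8),
# with all quantities dimensionless. A_i and the inputs below are illustrative
# placeholders, not values fitted in the paper.

def entrainment_distance(S, f, t, A_i=1.0):
    """Dimensionless entrainment distance for dimensionless stroke S,
    grid-oscillation frequency f, and elapsed time t."""
    return A_i * S ** -0.25 * f ** 0.5 * t ** 0.125

# The weak t^(1/8) dependence means doubling the elapsed time
# increases the entrainment distance by only 2^(1/8), about 9%.
d1 = entrainment_distance(S=2.0, f=4.0, t=16.0)
d2 = entrainment_distance(S=2.0, f=4.0, t=32.0)
```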
Abstract:
Size effects in the mechanical behavior of materials refer to the variation of mechanical behavior as the sample size changes from the macroscale to the micro-/nanoscale. At the micro-/nanoscale, since the sample has a relatively high specific surface area (SSA, the ratio of surface area to volume), the surface energy, although often neglected at the macroscale, becomes prominent in governing the mechanical behavior. In the present research, a continuum model accounting for the surface energy effect is developed by introducing the surface energy into the total potential energy. A corresponding finite element method is developed simultaneously. The model is used to analyze the axial equilibrium strain of a Cu nanowire in the external-load-free state. As another application of the model, using dimensional analysis, the size effects in uniform compression tests on microscale cylindrical specimens of Ni and Au single crystals are analyzed and compared with experiments in the literature. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
This dissertation consists of two parts. The first part presents an explicit procedure for applying multi-Regge theory to production processes. As an illustrative example, the case of three body final states is developed in detail, both with respect to kinematics and multi-Regge dynamics. Next, the experimental consistency of the multi-Regge hypothesis is tested in a specific high energy reaction; the hypothesis is shown to provide a good qualitative fit to the data. In addition, the results demonstrate a severe suppression of double Pomeranchon exchange, and show the coupling of two "Reggeons" to an external particle to be strongly damped as the particle's mass increases. Finally, with the use of two body Regge parameters, order of magnitude estimates of the multi-Regge cross section for various reactions are given.
The second part presents a diffraction model for high energy proton-proton scattering. This model, developed by Chou and Yang, assumes that high energy elastic scattering results from absorption of the incident wave into the many available inelastic channels, with the absorption proportional to the amount of interpenetrating hadronic matter. The assumption that the hadronic matter distribution is proportional to the charge distribution relates the scattering amplitude for pp scattering to the proton form factor. The Chou-Yang model with the empirical proton form factor as input is then applied to calculate a high energy, fixed momentum transfer limit for the scattering cross section. This limiting cross section exhibits the same "dip" or "break" structure indicated in present experiments, but falls significantly below the data in magnitude. Finally, possible spin dependence is introduced through a weak spin-orbit type term which gives rather good agreement with pp polarization data.
Abstract:
In this work we investigate whether a small fraction of quarks and gluons, which escaped hadronization and survived as a uniformly spread perfect fluid, can play the role of both dark matter and dark energy. This fluid, as developed in [1], is characterized by two main parameters: beta, related to the amount of quarks and gluons which act as dark matter; and gamma, acting as the cosmological constant. We explore the feasibility of this model at cosmological scales using data from type Ia Supernovae (SNeIa), Long Gamma-Ray Bursts (LGRB) and direct observational Hubble data. We find that: (i) in general, beta cannot be constrained by SNeIa data nor by LGRB or H(z) data; (ii) gamma can be constrained quite well by all three data sets, contributing ≈ 78% to the energy-matter content; (iii) when a strong prior on (only) baryonic matter is assumed, the two parameters of the model are constrained successfully. (C) 2014 The Authors. Published by Elsevier B.V.
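A rough sketch of the background expansion suggested by the abstract: the quark-gluon fluid contributes a matter-like term controlled by beta and a constant term gamma that mimics a cosmological constant. The functional form and parameter values below are assumptions for illustration only, not the model of Ref. [1]:

```python
# Hypothetical dimensionless expansion rate H^2(z)/H0^2 for a flat universe
# with baryons (omega_b), a matter-like dark component (beta) and a constant
# dark-energy term (gamma). Illustrative values chosen so E2(0) = 1.

def E2(z, omega_b=0.05, beta=0.25, gamma=0.70):
    """Matter-like terms dilute as (1+z)^3; gamma stays constant."""
    return (omega_b + beta) * (1 + z) ** 3 + gamma

# With gamma ~ 0.7 the constant term dominates today, consistent with the
# ~78% dark-energy contribution quoted in the abstract.
e0 = E2(0.0)
e1 = E2(1.0)
```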
Abstract:
The diversity of non-domestic buildings at the urban scale poses a number of difficulties for developing models for large-scale analysis of the stock. This research proposes a probabilistic, engineering-based, bottom-up model to address these issues. In a recent study we classified London's non-domestic buildings based on the service they provide, such as offices, retail premises, and schools, and proposed the creation of one probabilistic representational model per building type. This paper investigates techniques for the development of such models. The representational model is a statistical surrogate of a dynamic energy simulation (ES) model. We first identify the main parameters affecting energy consumption in a particular building sector/type by using sampling-based global sensitivity analysis methods, and then generate statistical surrogate models of the dynamic ES model within the dominant model parameters. Given a sample of actual energy consumption for that sector, we use the surrogate model to infer the distribution of model parameters by inverse analysis. The inferred distributions of input parameters are able to quantify the relative benefits of alternative energy saving measures on an entire building sector with requisite quantification of uncertainties. Secondary school buildings are used to illustrate the application of this probabilistic method. © 2012 Elsevier B.V. All rights reserved.
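The surrogate-plus-inverse-analysis idea can be sketched in a few lines: a cheap statistical model is fitted to runs of the expensive simulator, then observed consumption is used to infer plausible input parameters. The "ES model" below is a toy linear stand-in, not the paper's dynamic simulator, and all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def es_model(u_value):
    # Toy stand-in for a dynamic energy simulation:
    # annual consumption (kWh/m2) as a function of one dominant
    # parameter (here a hypothetical envelope U-value).
    return 50.0 + 80.0 * u_value

# 1) Fit a surrogate (a 1st-degree polynomial) from a few "ES" runs.
x = np.linspace(0.5, 3.0, 20)
coeffs = np.polyfit(x, es_model(x), 1)
surrogate = np.poly1d(coeffs)

# 2) Crude inverse analysis: keep candidate parameter values whose
#    surrogate prediction is close to some observed consumption figure.
measured = es_model(rng.uniform(1.0, 2.0, size=200))   # synthetic "metered" data
candidates = rng.uniform(0.5, 3.0, size=5000)
posterior = [u for u in candidates
             if np.any(np.abs(surrogate(u) - measured) < 2.0)]
```

The retained `posterior` sample concentrates on the parameter range that generated the observations, which is the essence of the inferred input distributions described above (a real implementation would use a formal Bayesian scheme rather than this rejection filter).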
Abstract:
We present experimental results on benchmark problems in 3D cubic lattice structures with the Miyazawa-Jernigan energy function for two local search procedures that utilise the pull-move set: (i) population-based local search (PLS), which traverses the energy landscape with greedy steps towards (potential) local minima followed by upward steps up to a certain level of the objective function; (ii) simulated annealing with a logarithmic cooling schedule (LSA). The parameter settings for PLS are derived from short LSA runs executed in pre-processing, and the procedure utilises tabu lists generated for each member of the population. In terms of the total number of energy function evaluations both methods perform equally well; however, PLS has the potential of being parallelised, with an expected speed-up in the region of the population size. Furthermore, both methods require a significantly smaller number of function evaluations when compared to Monte Carlo simulations with kink-jump moves. (C) 2009 Elsevier Ltd. All rights reserved.
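The logarithmic cooling schedule at the heart of LSA is simple to state: the temperature at step k decays as c / log(k + const). A minimal sketch on a toy integer landscape (the lattice protein model and pull-move neighbourhood are beyond this illustration):

```python
import math
import random

def lsa(energy, neighbor, x0, steps=20000, c=2.0, seed=1):
    """Simulated annealing with logarithmic cooling T_k = c / log(k + 2)."""
    rng = random.Random(seed)
    x, best = x0, x0
    for k in range(steps):
        t = c / math.log(k + 2)              # logarithmic cooling schedule
        y = neighbor(x, rng)
        dE = energy(y) - energy(x)
        # Metropolis acceptance: always downhill, uphill with prob exp(-dE/T).
        if dE <= 0 or rng.random() < math.exp(-dE / t):
            x = y
        if energy(x) < energy(best):
            best = x
    return best

# Toy landscape: minimise (x - 7)^2 over the integers 0..20.
e = lambda x: (x - 7) ** 2
step = lambda x, rng: max(0, min(20, x + rng.choice((-1, 1))))
xmin = lsa(e, step, x0=0)
```

Because the temperature decays only logarithmically, uphill moves remain possible for a long time, which is what gives LSA its convergence guarantees at the cost of slow cooling.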
Abstract:
A biological disparity energy model can estimate local depth information by using a population of V1 complex cells. Instead of applying an analytical model which explicitly involves cell parameters like spatial frequency, orientation, binocular phase and position difference, we developed a model which only involves the cells' responses, such that disparity can be extracted from a population code, using only a set of cells previously trained with random-dot stereograms of uniform disparity. Despite good results in smooth regions, the model needs complementary processing, notably at depth transitions. We therefore introduce a new model to extract disparity at keypoints such as edge junctions, line endings and points with large curvature. Responses of end-stopped cells serve to detect keypoints, and those of simple cells are used to detect the orientations of their underlying line and edge structures. Annotated keypoints are then used in the left-right matching process, with a hierarchical, multi-scale tree structure and a saliency map to segregate disparity. By combining both models we can (re)define depth transitions and regions where the disparity energy model is less accurate.
Abstract:
Disparity energy models (DEMs) estimate local depth information on the basis of V1 complex cells. Our recent DEM (Martins et al., 2011, ISSPIT, 261-266) employs a population code. Once the population's cells have been trained with random-dot stereograms, it is applied at all retinotopic positions in the visual field. Despite producing good results in textured regions, the model needs to be made more precise, especially at depth transitions.
Abstract:
Recent developments of high-end processors recognize temperature monitoring and tuning as one of the main challenges towards achieving higher performance given the growing power and temperature constraints. To address this challenge, one needs both suitable thermal energy abstraction and corresponding instrumentation. Our model is based on application-specific parameters such as power consumption, execution time, and asymptotic temperature as well as hardware-specific parameters such as half time for thermal rise or fall. As observed with our out-of-band instrumentation and monitoring infrastructure, the temperature changes follow a relatively slow capacitor-style charge-discharge process. Therefore, we use the lumped thermal model that initiates an exponential process whenever there is a change in processor’s power consumption. Initial experiments with two codes – Firestarter and Nekbone – validate our thermal energy model and demonstrate its use for analyzing and potentially improving the application-specific balance between temperature, power, and performance.
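The lumped thermal model described above is a first-order exponential: after a step change in power, the die temperature approaches its new asymptotic value with a time constant derived from the measured half time. A minimal sketch with illustrative values (not measurements from Firestarter or Nekbone):

```python
import math

def temperature(t, T0, T_inf, t_half):
    """Lumped (first-order) thermal model: after a power change at t = 0,
    T(t) = T_inf + (T0 - T_inf) * exp(-t / tau), where the time constant
    tau is recovered from the measured half time, t_half = tau * ln(2)."""
    tau = t_half / math.log(2.0)
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

# Capacitor-style charge: after one half time the gap to the asymptotic
# temperature is exactly halved (40 -> 60 on the way to 80 degrees C).
T_mid = temperature(t=30.0, T0=40.0, T_inf=80.0, t_half=30.0)
```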
Abstract:
We have extended the Boltzmann code CLASS and studied a specific scalar-tensor dark energy model: Induced Gravity.
Abstract:
Contrast sensitivity improves with the area of a sine-wave grating, but why? Here we assess this phenomenon against contemporary models involving spatial summation, probability summation, uncertainty, and stochastic noise. Using a two-interval forced-choice procedure we measured contrast sensitivity for circular patches of sine-wave gratings with various diameters that were blocked or interleaved across trials to produce low and high extrinsic uncertainty, respectively. Summation curves were steep initially, becoming shallower thereafter. For the smaller stimuli, sensitivity was slightly worse for the interleaved design than for the blocked design. Neither area nor blocking affected the slope of the psychometric function. We derived model predictions for noisy mechanisms and extrinsic uncertainty that was either low or high. The contrast transducer was either linear (c^1.0) or nonlinear (c^2.0), and pooling was either linear or a MAX operation. There was either no intrinsic uncertainty, or it was fixed or proportional to stimulus size. Of these 10 canonical models, only the nonlinear transducer with linear pooling (the noisy energy model) described the main forms of the data for both experimental designs. We also show how a cross-correlator can be modified to fit our results and provide a contemporary presentation of the relation between summation and the slope of the psychometric function.
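The noisy energy model's area effect follows from simple signal-detection arithmetic: with a squared (c^2.0) transducer, linear pooling over n mechanisms, and independent late noise growing as sqrt(n), sensitivity rises as the fourth root of area. A sketch of this reasoning, with illustrative parameter values:

```python
import math

def dprime(c, n, sigma=1.0):
    """Noisy energy model: n mechanisms each transduce contrast as c^2,
    responses pool linearly (sum = n * c^2), and independent noise pools
    as sigma * sqrt(n), giving d' = sqrt(n) * c^2 / sigma."""
    return n * c ** 2.0 / (sigma * math.sqrt(n))

def threshold(n, criterion=1.0, sigma=1.0):
    """Contrast at which d' reaches the criterion: proportional to n^(-1/4)."""
    return (criterion * sigma / math.sqrt(n)) ** 0.5

# Fourth-root summation: quadrupling the number of stimulated mechanisms
# (i.e., the area) lowers the contrast threshold by a factor of 4^(1/4).
ratio = threshold(4) / threshold(16)
```

The n^(-1/4) threshold decline reproduces the initially steep, then shallower summation curves only once intrinsic uncertainty or filter-stimulus mismatch is added, which is why the full model comparison in the paper is needed.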
Abstract:
This paper examines the implications of the EEC common energy policy for the UK energy sector as represented by a long-term programming model. The model suggests that the UK will be a substantial net exporter of energy in 1985 and will therefore make an important contribution towards the EEC's efforts to meet its import dependency target of 50% or less of gross inland consumption. Furthermore, the UK energy sector could operate within the 1985 EEC energy policy constraints with relatively low extra cost up to the year 2020 (the end of the period covered by the model). The main effect of the constraints would be to bring forward the production of synthetic gas and oil from coal.