921 results for RANDOM-ENERGY-MODEL


Relevance:

90.00%

Publisher:

Abstract:

Random walk models are often used to interpret experimental observations of the motion of biological cells and molecules. A key aim in applying a random walk model to mimic an in vitro experiment is to estimate the Fickian diffusivity (or Fickian diffusion coefficient), D. However, many in vivo experiments are complicated by the fact that the motion of cells and molecules is hindered by the presence of obstacles. Crowded transport processes have been modeled using repeated stochastic simulations in which a motile agent undergoes a random walk on a lattice that is populated by immobile obstacles. Early studies considered the most straightforward case, in which the motile agent and the obstacles are the same size. More recent studies considered stochastic random walk simulations describing the motion of an agent through an environment populated by obstacles of different shapes and sizes. Here, we build on previous simulation studies by analyzing a general class of lattice-based random walk models with agents and obstacles of various shapes and sizes. Our analysis provides exact calculations of the Fickian diffusivity, allowing us to draw conclusions about the role of the size, shape and density of the obstacles, as well as examining the role of the size and shape of the motile agent. Since our analysis is exact, we calculate D directly without the need for random walk simulations. In summary, we find that the shape, size and density of obstacles have a major influence on the exact Fickian diffusivity. Furthermore, our results indicate that the difference in diffusivity for symmetric and asymmetric obstacles is significant.
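The kind of repeated stochastic simulation that the exact analysis replaces can be sketched for the simplest case described above, a point agent among immobile point obstacles of the same size, with D read off the mean squared displacement (all parameters illustrative):

```python
import random

def estimate_diffusivity(size=50, obstacle_density=0.2, steps=2000,
                         walkers=200, seed=0):
    """Estimate the Fickian diffusivity D of a point agent performing a
    random walk on a periodic size x size square lattice populated by
    immobile point obstacles. In 2D, MSD ~ 4*D*t, so D = MSD / (4*steps).
    Moves onto an obstacle are attempted but aborted (time still advances)."""
    rng = random.Random(seed)
    obstacles = set()
    while len(obstacles) < int(obstacle_density * size * size):
        obstacles.add((rng.randrange(size), rng.randrange(size)))
    msd = 0.0
    for _ in range(walkers):
        while True:  # start each walker on a free site
            start = (rng.randrange(size), rng.randrange(size))
            if start not in obstacles:
                break
        x, y = start
        dx = dy = 0  # unwrapped displacement across periodic boundaries
        for _ in range(steps):
            sx, sy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            nx, ny = (x + sx) % size, (y + sy) % size
            if (nx, ny) not in obstacles:
                x, y, dx, dy = nx, ny, dx + sx, dy + sy
        msd += dx * dx + dy * dy
    return msd / walkers / (4.0 * steps)
```

With no obstacles the estimate recovers the free-lattice value D = 1/4 (in lattice units), and crowding lowers it, which is exactly the hindrance effect the exact calculations quantify without such sampling noise.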

Relevance:

90.00%

Publisher:

Abstract:

Potassium disilicate glass and melt have been investigated by using a new partial charge based potential model in which nonbridging oxygens are differentiated from bridging oxygens by their charges. The model reproduces the structural data pertaining to the coordination polyhedra around potassium and the various bond angle distributions excellently. The dynamics of the glass has been studied by using space and time correlation functions. It is found that K ions migrate by a diffusive mechanism in the melt and by hops below the glass transition temperature. They are also found to migrate largely through nonbridging oxygen-rich sites in the silicate matrix, thus providing support to the predictions of the modified random network model.
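A partial-charge pair potential of the kind described typically combines a Coulomb term with a Buckingham short-range term, so that bridging and nonbridging oxygens differ only in the charge they carry. A sketch with invented parameters (not the paper's fit):

```python
import math

# e^2 / (4*pi*eps0) in eV*Angstrom, for charges in units of e, r in Angstrom
COULOMB_EV_A = 14.3996

def pair_energy(r, q_i, q_j, a, rho, c):
    """Coulomb + Buckingham pair interaction commonly used for silicate MD:
    V(r) = q_i*q_j*e^2/(4*pi*eps0*r) + a*exp(-r/rho) - c/r^6.
    In a partial-charge model, bridging (BO) and nonbridging (NBO) oxygens
    simply carry different q values."""
    return COULOMB_EV_A * q_i * q_j / r + a * math.exp(-r / rho) - c / r**6

# Illustrative charges: a K+ ion is bound more deeply to the more negative
# nonbridging oxygen than to a bridging oxygen at the same separation,
# consistent with K migrating through NBO-rich sites.
v_nbo = pair_energy(2.7, q_i=0.7, q_j=-1.2, a=1500.0, rho=0.30, c=20.0)
v_bo = pair_energy(2.7, q_i=0.7, q_j=-0.9, a=1500.0, rho=0.30, c=20.0)
```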

Relevance:

90.00%

Publisher:

Abstract:

This paper attempts to gain an understanding of the effect of lamellar length scale on the mechanical properties of a two-phase metal-intermetallic eutectic structure. We first develop a molecular dynamics model for the in-situ grown eutectic interface, followed by a model of deformation of the Al-Al2Cu lamellar eutectic. Leveraging the insights obtained from the simulations on the behaviour of dislocations at different length scales of the eutectic, we present and explain experimental results on the Al-Al2Cu eutectic with various lamellar spacings. The physics behind the mechanism is further quantified with the help of an atomic-level energy model for different length scales and strains. An atomic-level energy partitioning of the lamellae and the interface regions reveals that energy accumulates in the lamella cores predominantly through dislocations, irrespective of the length scale. The energy of the interface, by contrast, accumulates predominantly through dislocations only when the length scale is small; the trend reverses once the length scale grows beyond a critical size of about 80 nm. (C) 2014 Author(s).

Relevance:

90.00%

Publisher:

Abstract:

In this paper we derive an approach for the effective utilization of thermodynamic data in phase-field simulations. The most widely used methodology for multi-component alloys follows the work of Eiken et al. (2006), in which an extrapolative scheme is used in conjunction with the TQ interface to derive the driving force for phase transformation; a correspondingly simple method, based on the formulation of a parabolic free-energy model incorporating all the thermodynamics, was laid out for binary alloys by Folch and Plapp (2005). In the following, we extend this latter approach to multi-component alloys in the framework of the grand-potential formalism. The coupling is applied to binary eutectic solidification in the Cr-Ni alloy and to two-phase solidification in the ternary eutectic alloy Al-Cr-Ni. A thermodynamic justification forms the basis of the formulation and places it in the context of the bigger picture of Integrated Computational Materials Engineering. (C) 2015 Elsevier Ltd. All rights reserved.
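The parabolic construction being extended can be sketched in generic single-component notation (the actual multi-component expressions carry component indices):

```latex
% Parabolic free energy of phase alpha, fitted to thermodynamic data
% around the equilibrium composition c_alpha^eq:
f_\alpha(c) = A_\alpha \left(c - c_\alpha^{\mathrm{eq}}\right)^2 + B_\alpha,
\qquad
\mu = \frac{\partial f_\alpha}{\partial c}
    = 2A_\alpha \left(c - c_\alpha^{\mathrm{eq}}\right)

% Inverting for c(mu) and Legendre-transforming gives the phase
% grand potential used in the grand-potential formalism:
\omega_\alpha(\mu) = f_\alpha - \mu c
    = -\frac{\mu^2}{4A_\alpha} - \mu\, c_\alpha^{\mathrm{eq}} + B_\alpha
```

The quadratic form makes the inversion c(μ) analytic, which is what makes the grand-potential coupling to thermodynamic databases computationally cheap.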

Relevance:

90.00%

Publisher:

Abstract:

[EN] The generation of spikes by neurons is energetically a costly process, and the evaluation of the metabolic energy required to maintain the signaling activity of neurons is a challenge of practical interest. Neuron models are frequently used to represent the dynamics of real neurons but hardly ever to evaluate the electrochemical energy required to maintain that dynamics. This paper discusses the interpretation of a Hodgkin-Huxley circuit as an energy model for real biological neurons and uses it to evaluate the consumption of metabolic energy in the transmission of information between neurons coupled by electrical synapses, i.e., gap junctions. We show that for a single postsynaptic neuron, maximum energy efficiency, measured in bits of mutual information per molecule of adenosine triphosphate (ATP) consumed, requires maximum energy consumption. For groups of parallel postsynaptic neurons, we determine values of the synaptic conductance at which the energy efficiency of the transmission presents clear maxima at relatively low values of metabolic energy consumption. Contrary to what might be expected, the best performance occurs at a low energy cost.
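The efficiency metric, bits of mutual information per ATP molecule, rests on the standard stoichiometry of the Na+/K+ pump, which extrudes 3 Na+ ions per ATP hydrolysed. A minimal accounting sketch, where the Na+ influx per spike is an assumed measured quantity:

```python
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs per ion charge

def atp_per_spike(na_charge):
    """ATP required to pump back the Na+ that entered during one spike.
    The Na+/K+-ATPase extrudes 3 Na+ per ATP hydrolysed, so the count is
    (number of Na+ ions transferred) / 3. na_charge: Na+ influx in coulombs."""
    return na_charge / ELEMENTARY_CHARGE / 3.0

def bits_per_atp(mutual_info_bits, na_charge):
    """Energy efficiency in the paper's units: bits of mutual information
    conveyed per ATP molecule consumed (illustrative accounting only)."""
    return mutual_info_bits / atp_per_spike(na_charge)
```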

Relevance:

90.00%

Publisher:

Abstract:

Attempts to model any present or future power grid face a huge challenge, because a power grid is a complex system with feedback and multi-agent behaviors, integrating generation, distribution, storage and consumption systems and using various control and automation computing systems to manage electricity flows. Our approach is to build upon an established, tested and proven model of the low-voltage electricity network by extending it to a generalized energy model. In order to address the crucial issue of energy efficiency, however, additional processes such as energy conversion and storage, and further energy carriers such as gas and heat besides the traditional electrical one, must be considered. A more powerful model is therefore required, provided with enhanced nodes, or conversion points, able to deal with multidimensional flows. This article addresses the issue of modeling a local multi-carrier energy network. The problem can be considered an extension of modeling a low-voltage distribution network located in some urban or rural geographic area. Instead of using an external power flow analysis package to do the power flow calculations, as is usual for electric networks, in this work we integrate a multi-agent algorithm to perform the task concurrently with the other simulation tasks, and not only for the electric fluid but also for a number of additional energy carriers. As the model is mainly focused on system operation, generation and load models are not developed.
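One common way to formalise such enhanced nodes, or conversion points, is an energy-hub-style coupling matrix that maps input carrier flows to output carrier flows through conversion efficiencies. A minimal sketch with illustrative CHP numbers (not taken from the article):

```python
def convert(inputs, coupling):
    """Energy-hub style conversion node: output_i = sum_j coupling[i][j] * input_j.
    Rows index output carriers, columns index input carriers; each entry is a
    conversion efficiency (with any dispatch factors folded in)."""
    return [sum(c * x for c, x in zip(row, inputs)) for row in coupling]

# Inputs: [electricity, gas] in kW. Outputs: [electricity, heat].
# Illustrative CHP unit: gas becomes 30% electricity and 55% heat;
# grid electricity passes straight through.
coupling = [
    [1.0, 0.30],  # electricity out
    [0.0, 0.55],  # heat out
]
elec_out, heat_out = convert([10.0, 20.0], coupling)
```

A multi-agent simulation can then treat every node as such a matrix and negotiate flows per carrier, which is the multidimensional generalization of a plain electric power flow.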

Relevance:

90.00%

Publisher:

Abstract:

Concentrating solar power is an important way of providing renewable energy. Model simulation approaches play a fundamental role in the development of this technology and, for this, accurate validation of the models is crucial. This work presents the validation of the heat loss model of the absorber tube of a parabolic trough plant, comparing the model's heat loss estimates with real measurements in a specialized testing laboratory. The study focuses on implementing in the model a physically meaningful and widely valid formulation of the absorber's total emissivity as a function of surface temperature. For this purpose, the spectral emissivity of several absorber samples is measured and, from these data, the absorber's total emissivity curve is obtained according to the Planck function. This physically meaningful formulation is used as an input parameter in the heat loss model, and a successful validation of the model is performed. Since measuring the spectral emissivity of the absorber surface may be complex and is sample-destructive, a new methodology for characterizing the absorber's emissivity is proposed. This methodology provides an estimation of the absorber's total emissivity, retaining its physically meaningful and widely valid formulation according to the Planck function, with no need for direct spectral measurements. The proposed method is also successfully validated, and the results are shown in the present paper.
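The Planck-weighted construction of the total emissivity can be sketched numerically: the spectral emissivity is averaged with the blackbody spectrum at the surface temperature as weight. Physical constants are standard SI values; the band limits, temperature and sample emissivity functions are illustrative assumptions:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam, t):
    """Blackbody spectral radiance B(lambda, T); lam in metres, t in kelvin."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * t))

def total_emissivity(eps, t, lam_min=1e-6, lam_max=50e-6, n=5000):
    """Total emissivity at temperature t: the spectral emissivity eps(lambda)
    averaged over [lam_min, lam_max] with the Planck function as the weight
    (midpoint-rule integration)."""
    num = den = 0.0
    dlam = (lam_max - lam_min) / n
    for i in range(n):
        lam = lam_min + (i + 0.5) * dlam
        b = planck(lam, t)
        num += eps(lam) * b * dlam
        den += b * dlam
    return num / den
```

A grey surface recovers its constant emissivity exactly, while a spectrally selective coating (high solar absorptance, low thermal-infrared emissivity) yields a low, temperature-dependent total emissivity, which is why the surface-temperature dependence matters for the heat loss model.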

Relevance:

90.00%

Publisher:

Abstract:

In this research we focus on energy-aware topology management of the Tyndall 25 mm and 10 mm nodes, to extend sensor network lifespan and optimise node power consumption. The two-tiered Tyndall Heterogeneous Automated Wireless Sensors (THAWS) tool is used to quickly create and configure application-specific sensor networks. To this end, we propose to implement a distributed route discovery algorithm and a practical energy-aware reaction model on the 25 mm nodes. Triggered by energy-warning events, the miniaturised Tyndall 10 mm data collector nodes adaptively and periodically change their association to 25 mm base station nodes, while the 25 mm nodes also change the interconnections between themselves, which results in a reconfiguration of the 25 mm node tier topology. The distributed routing protocol uses combined weight functions to balance the sensor network traffic. A system-level simulation is used to quantify the benefit of the route management framework in terms of system power savings, compared with other state-of-the-art approaches.

Relevance:

90.00%

Publisher:

Abstract:

Wireless sensor networks (WSN) are becoming widely adopted for many applications, including complicated tasks like building energy management. However, one major concern for WSN technologies is the short lifetime and high maintenance cost due to the limited battery energy. One of the solutions is to scavenge ambient energy, which is then rectified to power the WSN. The objective of this thesis was to investigate the feasibility of an ultra-low energy consumption power management system suitable for harvesting sub-mW photovoltaic and thermoelectric energy to power WSNs. To achieve this goal, energy harvesting system architectures have been analyzed. Detailed analysis of energy storage units (ESU) has led to an innovative ESU solution for the target applications. A battery-less, long-lifetime ESU and its associated power management circuitry, including a fast-charge circuit, self-start circuit, output voltage regulation circuit and a hybrid ESU using a combination of super-capacitor and thin film battery, were developed to achieve continuous operation of the energy harvester. Low start-up voltage DC/DC converters have been developed for 1 mW level thermoelectric energy harvesting. The novel method of altering the thermoelectric generator (TEG) configuration in order to match impedance has been verified in this work. Novel maximum power point tracking (MPPT) circuits, exploiting the fractional open circuit voltage method, were developed particularly to suit sub-1 mW photovoltaic energy harvesting applications. The MPPT energy model has been developed and verified against both SPICE simulation and implemented prototypes. Both the indoor light and thermoelectric energy harvesting methods proposed in this thesis have been implemented into prototype devices. The improved indoor light energy harvester prototype demonstrates 81% MPPT conversion efficiency with 0.5 mW input power. This important improvement makes light energy harvesting from small energy sources (i.e., a credit-card-sized solar panel under 500 lux indoor lighting conditions) a feasible approach. The 50 mm × 54 mm thermoelectric energy harvester prototype generates 0.95 mW when placed on a 60 °C heat source, with 28% conversion efficiency. Both prototypes can be used to continuously power WSNs for building energy management applications in a typical office building environment. In addition to the hardware development, a comprehensive system energy model has been developed. This system energy model not only can be used to predict the available and consumed energy based on real-world ambient conditions, but can also be employed to optimize the system design and configuration. This energy model has been verified by indoor photovoltaic energy harvesting system prototypes in long-term deployment experiments.
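The fractional open-circuit voltage method mentioned for the sub-1 mW MPPT circuits can be sketched against a toy single-diode PV curve: the maximum-power-point voltage of a silicon cell sits at roughly a fixed fraction k ≈ 0.71-0.78 of its open-circuit voltage, so the tracker only needs an occasional open-circuit measurement rather than continuous perturbation. All parameters below are illustrative, not the thesis prototypes':

```python
import math

def pv_current(v, i_sc=0.02, v_oc=2.0, n_vt=0.15):
    """Toy single-diode I-V curve for a small indoor panel (amps, volts).
    i0 is chosen so the current is exactly zero at v_oc."""
    i0 = i_sc / math.expm1(v_oc / n_vt)
    return max(i_sc - i0 * math.expm1(v / n_vt), 0.0)

def mpp_by_scan(v_oc=2.0, steps=2000):
    """Brute-force maximum power point, used here only as a reference."""
    best_v, best_p = 0.0, 0.0
    for i in range(1, steps):
        v = v_oc * i / steps
        p = v * pv_current(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

def fractional_voc_setpoint(v_oc, k=0.76):
    """Fractional V_oc MPPT: periodically measure the open-circuit voltage
    with the load disconnected, then operate at V = k * V_oc. The near-zero
    controller overhead is what suits sub-1 mW harvesting."""
    return k * v_oc

v_ref = fractional_voc_setpoint(2.0)
p_frac = v_ref * pv_current(v_ref)
_, p_max = mpp_by_scan()
```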

Relevance:

90.00%

Publisher:

Abstract:

Scepticism over stated preference surveys conducted online revolves around concerns over "professional respondents" who might rush through the questionnaire without sufficiently considering the information provided. To gain insight into the validity of this phenomenon and test the effect of response time on choice randomness, this study makes use of a recently conducted choice experiment survey on the ecological and amenity effects of an offshore windfarm in the UK. The positive relationship between self-rated and inferred attribute attendance and response time is taken as evidence of a link between response time and cognitive effort. Subsequently, the generalised multinomial logit model is employed to test the effect of response time on scale, which indicates the weight of the deterministic relative to the error component in the random utility model. Results show that longer response time increases scale, i.e. decreases choice randomness. This positive scale effect of response time is further found to be non-linear, wearing off at a point beyond which extreme response times decrease scale. While response time does not systematically affect welfare estimates, longer response times increase the precision of those estimates. These effects persist when self-reported choice certainty is controlled for. Implications of the results for online stated preference surveys and further research are discussed.
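The role of scale can be written compactly in a generalised-multinomial-logit-style random utility model (generic notation, not the paper's exact specification), with response time RT entering the scale parameter:

```latex
% Utility of alternative j for respondent n, with individual scale lambda_n:
U_{nj} = \lambda_n\, x_{nj}'\beta + \varepsilon_{nj},
\qquad
\lambda_n = \exp\!\left(\bar{\lambda} + \delta \cdot \mathrm{RT}_n\right)

% Resulting choice probabilities:
P_{nj} = \frac{\exp\!\left(\lambda_n\, x_{nj}'\beta\right)}
              {\sum_{k} \exp\!\left(\lambda_n\, x_{nk}'\beta\right)}
```

As λ → 0 the probabilities approach uniform random choice, and as λ → ∞ the choice becomes deterministic; a positive δ therefore encodes the finding that longer response time reduces choice randomness.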

Relevance:

90.00%

Publisher:

Abstract:

Many studies have shown that with increasing LET of ionizing radiation the RBE (relative biological effectiveness) for dsb (double-strand break) induction remains around 1.0, despite the increase in the RBE for cell killing. This has been attributed to an increase in the complexity of lesions, classified as dsb with current techniques, at multiply damaged sites. This study determines the molecular weight distributions of DNA from Chinese hamster V79 cells irradiated with X-rays or 110 keV/μm alpha-particles. Two running conditions for pulsed-field gel electrophoresis were chosen to give optimal separation of fragments in either the 225 kbp-5.7 Mbp range or the 0.3 kbp-225 kbp range. Taking the total fraction of DNA migrating into the gel as a measure of fragmentation, the RBE for dsb induction was less than 1.0 for both molecular weight regions studied. The total yields of dsb were 8.2 x 10^-9 dsb/Gy/bp for X-rays and 7.8 x 10^-9 dsb/Gy/bp for alpha-particles, measured using a random breakage model. Analysis of the RBE of alpha-particles versus molecular weight gave a different response: in the 0.4 Mbp-5.7 Mbp region the RBE was less than 1.0, while below 0.4 Mbp the RBE increased above 1.0. The frequency distributions of fragment sizes were found to differ from those predicted by a model assuming random breakage along the length of the DNA, and the differences were greater for alpha-particles than for X-rays. An excess of fragments induced by a single-hit mechanism was found in the 8-300 kbp region; for X-rays and alpha-particles these corresponded to an extra 0.8 x 10^-9 and 3.4 x 10^-9 dsb/bp/Gy, respectively. Thus for every alpha-particle track that induces a dsb there is a 44% probability of inducing a second break within 300 kbp, whereas for electron tracks the probability is 10%. This study shows that the distribution of damage from a high-LET alpha-particle track is significantly different from that observed with low-LET X-rays. In particular, it suggests that the fragmentation patterns of irradiated DNA may be related to the higher-order chromatin repeating structures found in intact cells.
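The random breakage model used here has a closed-form mass-weighted fragment-size distribution: if breaks form a Poisson process of density μ breaks/bp, fragment lengths are exponential and the fraction of total DNA mass in fragments shorter than k bp is 1 − e^(−μk)(1 + μk). A small sketch (generic parameters, not the paper's data) checks the formula against a Monte Carlo placement of breaks:

```python
import math
import random

def mass_fraction_below(mu, k):
    """Random-breakage prediction for the mass fraction of DNA found in
    fragments shorter than k bp, given break density mu (breaks/bp)."""
    return 1.0 - math.exp(-mu * k) * (1.0 + mu * k)

def simulate_mass_fraction(mu, k, genome=10_000_000, trials=300, seed=1):
    """Monte Carlo check: place breaks with exponential inter-break gaps
    along a finite genome and measure the mass in fragments shorter than k."""
    rng = random.Random(seed)
    below = total = 0.0
    for _ in range(trials):
        prev = 0.0
        pos = rng.expovariate(mu)
        while pos < genome:
            frag = pos - prev
            total += frag
            if frag < k:
                below += frag
            prev = pos
            pos += rng.expovariate(mu)
        frag = genome - prev  # terminal fragment
        total += frag
        if frag < k:
            below += frag
    return below / total
```

Comparing the measured fragment-mass distribution against this prediction is exactly how an excess of small fragments, the signature of clustered, single-track damage, is detected.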

Relevance:

90.00%

Publisher:

Abstract:

In studies of radiation-induced DNA fragmentation and repair, analytical models may provide rapid and easy-to-use methods to test simple hypotheses regarding the breakage and rejoining mechanisms involved. The random breakage model, according to which lesions are distributed uniformly and independently of each other along the DNA, has been the model most used to describe spatial distribution of radiation-induced DNA damage. Recently several mechanistic approaches have been proposed that model clustered damage to DNA. In general, such approaches focus on the study of initial radiation-induced DNA damage and repair, without considering the effects of additional (unwanted and unavoidable) fragmentation that may take place during the experimental procedures. While most approaches, including measurement of total DNA mass below a specified value, allow for the occurrence of background experimental damage by means of simple subtractive procedures, a more detailed analysis of DNA fragmentation necessitates a more accurate treatment. We have developed a new, relatively simple model of DNA breakage and the resulting rejoining kinetics of broken fragments. Initial radiation-induced DNA damage is simulated using a clustered breakage approach, with three free parameters: the number of independently located clusters, each containing several DNA double-strand breaks (DSBs), the average number of DSBs within a cluster (multiplicity of the cluster), and the maximum allowed radius within which DSBs belonging to the same cluster are distributed. Random breakage is simulated as a special case of the DSB clustering procedure. When the model is applied to the analysis of DNA fragmentation as measured with pulsed-field gel electrophoresis (PFGE), the hypothesis that DSBs in proximity rejoin at a different rate from that of sparse isolated breaks can be tested, since the kinetics of rejoining of fragments of varying size may be followed by means of computer simulations. 
The problem of how to account for background damage from experimental handling is also carefully considered. We have shown that the conventional procedure of subtracting the background damage from the experimental data may lead to erroneous conclusions during the analysis of both initial fragmentation and DSB rejoining. Despite its relative simplicity, the method presented allows both the quantitative and qualitative description of radiation-induced DNA fragmentation and subsequent rejoining of double-stranded DNA fragments. (C) 2004 by Radiation Research Society.
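The three-parameter clustered placement described, number of clusters, mean DSB multiplicity per cluster, and maximum cluster radius, can be sketched directly (illustrative sketch; the rejoining kinetics are omitted):

```python
import math
import random

def sample_poisson(rng, lam):
    """Poisson sampler via Knuth's method; adequate for small means."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def clustered_dsbs(n_clusters, multiplicity, radius, genome, rng):
    """Place DSB clusters: cluster centres uniform along the genome; each
    cluster holds a Poisson(multiplicity) number of DSBs scattered uniformly
    within +/- radius of its centre (positions falling off the genome are
    dropped). Returns sorted break positions in bp."""
    breaks = []
    for _ in range(n_clusters):
        centre = rng.uniform(0.0, genome)
        for _ in range(sample_poisson(rng, multiplicity)):
            pos = centre + rng.uniform(-radius, radius)
            if 0.0 <= pos <= genome:
                breaks.append(pos)
    return sorted(breaks)
```

Random breakage is recovered as the special case of many clusters with multiplicity near one, matching the abstract's remark that it is simulated as a special case of the clustering procedure.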

Relevance:

90.00%

Publisher:

Abstract:

Multiuser diversity (MUDiv) is one of the central concepts in multiuser (MU) systems. In particular, MUDiv allows for scheduling among users in order to eliminate the negative effects of unfavorable channel fading conditions of some users on the system performance. Scheduling, however, consumes energy (e.g., for making users' channel state information available to the scheduler). This extra usage of energy, which could otherwise be used for data transmission, can be very wasteful, especially if the number of users is large. In this paper, we answer the question of how much MUDiv is required for energy-limited MU systems. Focusing on uplink MU wireless systems, we develop MU scheduling algorithms which aim at maximizing the MUDiv gain. Toward this end, we introduce a new realistic energy model which accounts for scheduling energy and describes the distribution of the total energy between the scheduling and data transmission stages. Using the fact that this energy distribution can be controlled by varying the number of active users, we optimize this number by either i) minimizing the overall system bit error rate (BER) for a fixed total energy of all users in the system or ii) minimizing the total energy of all users for fixed BER requirements. We find that for a fixed number of available users, the achievable MUDiv gain can be improved by activating only a subset of users. Using asymptotic analysis and numerical simulations, we show that our approach benefits from MUDiv gains higher than those achievable by a generic greedy access algorithm, which is the optimal scheduling method for energy-unlimited systems. © 2010 IEEE.
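The trade-off being optimised, scheduling energy versus multiuser diversity gain, can be caricatured with a toy model: the diversity gain of serving the best of N i.i.d. Rayleigh-faded users (Exp(1) channel powers) is the harmonic number H_N, the mean of the maximum, while each active user costs a fixed amount of scheduling energy. This is an illustration only, not the paper's energy model:

```python
def best_active_users(total_energy, sched_cost, max_users):
    """Pick the number of active users N maximizing (energy left for data)
    * (diversity gain H_N), where activating each user costs sched_cost of
    the shared budget. Captures why a subset of users can beat all users."""
    best_n, best_val = 1, -1.0
    harmonic = 0.0
    for n in range(1, max_users + 1):
        harmonic += 1.0 / n           # H_n = 1 + 1/2 + ... + 1/n
        data_energy = total_energy - n * sched_cost
        if data_energy <= 0.0:
            break                     # budget fully eaten by scheduling
        val = data_energy * harmonic
        if val > best_val:
            best_n, best_val = n, val
    return best_n
```

With free scheduling the optimum is to poll everyone; with costly scheduling the slowly growing H_N is outweighed by the linear energy drain, so only a subset of users should be activated, mirroring the paper's finding.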

Relevance:

90.00%

Publisher:

Abstract:

Approximate execution is a viable technique for energy-constrained environments, provided that applications have the mechanisms to produce outputs of the highest possible quality within the given energy budget.
We introduce a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows users to express the relative importance of computations for the quality of the end result, as well as minimum quality requirements. The significance-aware runtime system uses an application-specific analytical energy model to identify the degree of concurrency and approximation that maximizes quality while meeting user-specified energy constraints. Evaluation on a dual-socket 8-core server shows that the proposed framework predicts the optimal configuration with high accuracy, enabling energy-constrained executions that result in significantly higher quality compared to loop perforation, a compiler approximation technique.
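A toy sketch of the significance-driven decision: run every computation at least approximately, then greedily upgrade the most significant ones to accurate execution while the energy budget allows. The actual runtime uses an application-specific analytical energy model; the names and costs here are invented:

```python
def plan_execution(tasks, budget):
    """tasks: list of (name, significance, accurate_cost, approx_cost).
    Reserve the cheap approximate cost for every task first (graceful
    quality loss rather than failure), then spend the remaining budget on
    upgrading tasks to accurate execution in order of significance.
    Returns {name: "accurate" | "approximate"}."""
    reserved = sum(t[3] for t in tasks)   # everyone at least runs approximately
    spare = budget - reserved
    plan = {}
    for name, _sig, accurate_cost, approx_cost in sorted(tasks, key=lambda t: -t[1]):
        upgrade = accurate_cost - approx_cost
        if upgrade <= spare:
            plan[name] = "accurate"
            spare -= upgrade
        else:
            plan[name] = "approximate"
    return plan

# Illustrative workload: "core" matters most, "edge" least.
tasks = [("halo", 0.9, 5.0, 1.0), ("core", 1.0, 4.0, 1.0), ("edge", 0.2, 5.0, 1.0)]
plan = plan_execution(tasks, budget=8.0)
```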