877 results for Axiomatic Models of Resource Allocation


Relevance: 100.00%

Abstract:

We show by numerical simulations that discretized versions of commonly studied continuum nonlinear growth equations (such as the Kardar-Parisi-Zhang equation and the Lai-Das Sarma-Villain equation) and related atomistic models of epitaxial growth have a generic instability in which isolated pillars (or grooves) on an otherwise flat interface grow in time when their height (or depth) exceeds a critical value. Depending on the details of the model, the instability found in the discretized version may or may not be present in the truly continuum growth equation, indicating that the behavior of discretized nonlinear growth equations may be very different from that of their continuum counterparts. This instability can be controlled either by the introduction of higher-order nonlinear terms with appropriate coefficients or by restricting the growth of pillars (or grooves) by other means. A number of such "controlled instability" models are studied by simulation. For appropriate choices of the parameters used for controlling the instability, these models exhibit intermittent behavior, characterized by multiexponent scaling of height fluctuations, over the time interval during which the instability is active. The behavior found in this regime is very similar to the "turbulent" behavior observed in recent simulations of several one- and two-dimensional atomistic models of epitaxial growth.
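
By way of illustration of the kind of discretization discussed above, the following is a minimal sketch (not the authors' code) of a one-dimensional KPZ equation integrated with a simple Euler scheme and seeded with an isolated pillar on a flat interface; tracking the maximum height over time is the sort of diagnostic that distinguishes a decaying pillar from an unstable, growing one. All parameter values (nu, lam, dt, D, pillar height) are illustrative assumptions.

    import numpy as np

    # Minimal Euler discretization of the 1D KPZ equation
    #   dh/dt = nu * d2h/dx2 + (lam/2) * (dh/dx)^2 + noise
    # Parameter values are illustrative, not taken from the paper.
    L, nu, lam, dt, dx, D = 256, 1.0, 3.0, 0.01, 1.0, 0.1
    rng = np.random.default_rng(0)
    h = np.zeros(L)
    h[L // 2] = 5.0                     # seed an isolated "pillar" on a flat interface

    for step in range(2000):
        lap  = (np.roll(h, 1) - 2 * h + np.roll(h, -1)) / dx**2
        grad = (np.roll(h, -1) - np.roll(h, 1)) / (2 * dx)
        h += dt * (nu * lap + 0.5 * lam * grad**2) \
             + np.sqrt(2 * D * dt) * rng.standard_normal(L)
        if step % 500 == 0:
            print(step, h.max())        # does the pillar relax or keep growing?

Repeating such a run for different initial pillar heights (and for different discretizations of the gradient term) is the basic numerical experiment behind the instability described in the abstract.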

Relevance: 100.00%

Abstract:

A systematic assessment of the submodels of the conditional moment closure (CMC) formalism for the autoignition problem is carried out using direct numerical simulation (DNS) data. An initially non-premixed n-heptane/air system, subjected to a three-dimensional, homogeneous, isotropic, and decaying turbulence, is considered. Two kinetic schemes, (1) a one-step and (2) a reduced four-step reaction mechanism, are considered for the chemistry. An alternative formulation is developed for closure of the mean chemical source term, based on the condition that the instantaneous fluctuation of excess temperature is small. With this model, it is shown that the CMC equations describe the autoignition process all the way up to near the equilibrium limit. The effect of second-order terms (namely, the conditional variance of temperature excess, σ², and the conditional correlations of species, q_ij) in the modeling is examined. Comparison with DNS data shows that σ² has little effect on the predicted conditional mean temperature evolution if the average conditional scalar dissipation rate is properly modeled. Using DNS data, a correction factor is introduced in the modeling of nonlinear terms to include the effect of species fluctuations. Computations including such a correction factor show that the species conditional correlations q_ij have little effect on model predictions with a one-step reaction, but the q_ij involving intermediate species are found to be crucial when the four-step reduced kinetics is considered. The "most reactive mixture fraction" is found to vary with time when the four-step kinetics is considered. First-order CMC results are found to be qualitatively wrong if the conditional mean scalar dissipation rate is not modeled properly. The autoignition delay time predicted by the CMC model agrees very well with DNS results and shows a trend similar to experimental data over a range of initial temperatures.
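
To make the first-order closure idea concrete, the sketch below conditions synthetic "DNS-like" samples on mixture fraction and compares the exact conditional mean of a toy one-step reaction rate with the rate evaluated at the conditional mean temperature. The binning, the Arrhenius parameters, and the synthetic temperature field are all illustrative assumptions, not the authors' data or implementation.

    import numpy as np

    # First-order CMC idea on synthetic samples: condition on mixture fraction Z,
    # then approximate the conditional mean source term by evaluating the rate at
    # the conditional mean temperature Q_T, instead of averaging the instantaneous rate.
    rng = np.random.default_rng(1)
    Z = rng.uniform(0.0, 1.0, 100000)                              # mixture fraction samples
    T = 800.0 + 1200.0 * Z * (1 - Z) + 20.0 * rng.standard_normal(Z.size)  # toy temperature field

    def arrhenius(T, A=1e9, Ta=15000.0):
        return A * np.exp(-Ta / T)                                 # toy one-step rate

    bins = np.linspace(0.0, 1.0, 41)
    idx = np.digitize(Z, bins) - 1
    Q_T = np.array([T[idx == k].mean() for k in range(len(bins) - 1)])        # conditional mean T
    w_exact = np.array([arrhenius(T[idx == k]).mean() for k in range(len(bins) - 1)])
    w_first_order = arrhenius(Q_T)                                 # first-order closure: w(<T|Z>)

    print("max relative closure error:", np.abs(w_first_order / w_exact - 1).max())

The closure error grows with the conditional temperature variance, which is why the abstract emphasises second-order terms and the modeling of the conditional scalar dissipation rate.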

Relevance: 100.00%

Abstract:

Neural network models of associative memory exhibit a large number of spurious attractors of the network dynamics which are not correlated with any memory state. These spurious attractors, analogous to "glassy" local minima of the energy or free energy of a system of particles, degrade the performance of the network by trapping trajectories starting from states that are not close to one of the memory states. Different methods for reducing the adverse effects of spurious attractors are examined with emphasis on the role of synaptic asymmetry. (C) 2002 Elsevier Science B.V. All rights reserved.
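
The setting referred to is the standard associative-memory model; the following is a minimal sketch, assuming a Hopfield-type network with symmetric Hebbian couplings, of how an asynchronous trajectory started far from the stored patterns can get trapped in a spurious attractor. Network size, number of patterns, and the update schedule are illustrative choices, not the paper's.

    import numpy as np

    # Hopfield associative memory with Hebbian (symmetric) couplings.
    # Starting the asynchronous dynamics far from any stored pattern frequently
    # lands in a spurious attractor (e.g. a mixture state). Sizes are illustrative.
    rng = np.random.default_rng(2)
    N, P = 200, 10
    patterns = rng.choice([-1, 1], size=(P, N))
    J = (patterns.T @ patterns) / N
    np.fill_diagonal(J, 0.0)                    # no self-coupling

    s = rng.choice([-1, 1], size=N)             # random initial state
    for sweep in range(50):
        for i in rng.permutation(N):            # asynchronous single-spin updates
            s[i] = 1 if J[i] @ s >= 0 else -1

    overlaps = patterns @ s / N
    print("largest overlap with a stored pattern:", np.abs(overlaps).max())
    # A largest overlap well below 1 indicates the trajectory was trapped by a
    # spurious attractor rather than by one of the memory states.

Making the couplings asymmetric (e.g. diluting or clipping J asymmetrically) and repeating the experiment is the kind of comparison the abstract refers to when discussing the role of synaptic asymmetry.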

Relevance: 100.00%

Abstract:

Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes on a region in Euclidean space, e.g., the unit square. After deployment, the nodes self-organise into a mesh topology. In a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this paper, we analyse the performance of this approximation. We show that nodes with a certain hop distance from a fixed anchor node lie within a certain annulus with probability approaching unity as the number of nodes n → ∞. We take a uniform, i.i.d. deployment of n nodes on a unit square, and consider the geometric graph on these nodes with radius r(n) = c√(ln n / n). We show that, for a given hop distance h of a node from a fixed anchor on the unit square, the Euclidean distance lies within [(1−ε)(h−1)r(n), hr(n)], for ε > 0, with probability approaching unity as n → ∞. This result shows that a node with hop distance h from the anchor is more likely to lie within this annulus centred at the anchor location, of width roughly r(n), than close to a circle whose radius is exactly proportional to h. We show that if the radius r of the geometric graph is fixed, the convergence of the probability is exponentially fast. Similar results hold for a randomised lattice deployment. We provide simulation results that illustrate the theory, and serve to show how large n needs to be for the asymptotics to be useful.
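
A small simulation in the spirit of the setting just described can be sketched as follows: n uniform points on the unit square, a geometric graph with radius r(n) = c√(ln n / n), breadth-first hop counts from an anchor, and a check of how often the Euclidean distance falls in the annulus [(1−ε)(h−1)r(n), h·r(n)]. The constant c, the value of n, and ε are illustrative choices, not the paper's.

    import numpy as np
    from collections import deque

    # Random geometric graph on n uniform points in the unit square,
    # r(n) = c * sqrt(ln n / n); BFS gives hop counts from an anchor.
    rng = np.random.default_rng(3)
    n, c, eps = 2000, 2.0, 0.1
    r = c * np.sqrt(np.log(n) / n)
    pts = rng.uniform(0.0, 1.0, size=(n, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = (d <= r) & ~np.eye(n, dtype=bool)

    anchor = 0
    hops = np.full(n, -1); hops[anchor] = 0
    q = deque([anchor])
    while q:                               # breadth-first search for hop distances
        u = q.popleft()
        for v in np.nonzero(adj[u])[0]:
            if hops[v] < 0:
                hops[v] = hops[u] + 1
                q.append(v)

    reached = hops > 0
    inside = ((1 - eps) * (hops[reached] - 1) * r <= d[anchor, reached]) & \
             (d[anchor, reached] <= hops[reached] * r)
    print("fraction of reached nodes inside their annulus:", inside.mean())

Rerunning this for increasing n gives a feel for how quickly the probability in the theorem approaches unity.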

Relevance: 100.00%

Abstract:

Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes; and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
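
The abstract does not spell out its localization algorithms here; the following is a generic sketch of hop-count-based localization in the same spirit: estimate the range to each anchor as hop count times an average per-hop distance, then solve a least-squares multilateration. The anchor layout, hop counts, and per-hop distance are assumed values for illustration only.

    import numpy as np
    from scipy.optimize import least_squares

    # Generic hop-count-based localization sketch (not the paper's algorithm):
    # estimate the range to each anchor as hop_count * per_hop_distance, then
    # find the position minimizing the squared range residuals.
    anchors = np.array([[0.1, 0.1], [0.9, 0.2], [0.5, 0.9]])   # assumed anchor positions
    hop_counts = np.array([4, 6, 3])                            # hop distances to the anchors
    per_hop = 0.12                                              # assumed average hop length
    range_est = hop_counts * per_hop

    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - range_est

    x_hat = least_squares(residuals, x0=np.array([0.5, 0.5])).x
    print("estimated node position:", x_hat)

The density-function approximation and the belief-propagation refinement described in the abstract can be seen as replacing the crude point estimate hop_count * per_hop above with a full distribution over the node-to-anchor distance.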

Relevance: 100.00%

Abstract:

Since a universally accepted dynamo model of grand minima does not exist at the present time, we concentrate on the physical processes that may lie behind grand minima. After summarizing the relevant observational data, we make the point that, while the usual sources of irregularities in solar cycles may be sufficient to cause a grand minimum, the solar dynamo has to operate somewhat differently from normal to bring the Sun out of a grand minimum. We then consider three possible sources of irregularities in the solar dynamo: (i) nonlinear effects; (ii) fluctuations in the poloidal field generation process; (iii) fluctuations in the meridional circulation. We conclude that (i) is unlikely to be the cause behind grand minima, but a combination of (ii) and (iii) may cause them. If fluctuations make the poloidal field fall much below the average or make the meridional circulation significantly weaker, then the Sun may be pushed into a grand minimum.

Relevance: 100.00%

Abstract:

The study extends the first-order reliability method (FORM) and inverse FORM to update reliability models for existing, statically loaded structures based on measured responses. Solutions based on Bayes' theorem, Markov chain Monte Carlo simulations, and inverse reliability analysis are developed. The case of linear systems with Gaussian uncertainties and linear performance functions is shown to be exactly solvable. FORM- and inverse-reliability-based methods are subsequently developed to deal with more general problems. The proposed procedures are implemented by combining Matlab-based reliability modules with finite element models residing in the Abaqus software. Numerical illustrations on linear and nonlinear frames are presented. (c) 2012 Elsevier Ltd. All rights reserved.
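
The exactly solvable case mentioned above (a linear performance function of jointly Gaussian variables) has a closed-form reliability index, since g(X) = aᵀX + b is itself Gaussian with β = E[g]/std[g] and P_f = Φ(−β). The sketch below evaluates this; the coefficient values are illustrative, not taken from the paper.

    import numpy as np
    from scipy.stats import norm

    # Exactly solvable reliability case: linear performance function
    # g(X) = a^T X + b with X ~ N(mu, Sigma). Then beta = E[g]/std[g]
    # and P_f = Phi(-beta). Numbers are illustrative.
    a = np.array([1.0, -2.0, 0.5])
    b = 4.0
    mu = np.array([1.0, 0.5, 2.0])
    Sigma = np.diag([0.2**2, 0.3**2, 0.5**2])

    mean_g = a @ mu + b
    std_g = np.sqrt(a @ Sigma @ a)
    beta = mean_g / std_g
    print("reliability index beta:", beta)
    print("failure probability P_f = Phi(-beta):", norm.cdf(-beta))

Conditioning mu and Sigma on measured responses (which remain Gaussian for a linear system) and recomputing beta is the exact update against which the FORM- and MCMC-based procedures can be benchmarked.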

Relevance: 100.00%

Abstract:

In animal populations, the constraints of energy and time can cause intraspecific variation in foraging behaviour. The proximate developmental mediators of such variation are often the mechanisms underlying perception and associative learning. Here, experience-dependent changes in foraging behaviour and their consequences were investigated in an urban population of free-ranging dogs, Canis familiaris, by continually challenging them with the task of food extraction from specially crafted packets. Typically, males and pregnant/lactating (PL) females extracted food using the sophisticated 'gap widening' technique, whereas non-pregnant/non-lactating (NPNL) females typically used the relatively underdeveloped 'rip opening' technique. In contrast to most males and PL females (and a few NPNL females), which repeatedly used the gap widening technique and improved their performance in food extraction with experience, most NPNL females (and a few males and PL females) used the two extraction techniques non-preferentially and did not improve over successive trials. Furthermore, the ability of dogs to extract food using the sophisticated technique was positively related to their ability to improve their performance with experience. Collectively, these findings demonstrate that factors such as sex and physiological state can cause differences among individuals in the likelihood of learning new information and hence in the rate of resource acquisition and monopolization.

Relevance: 100.00%

Abstract:

This paper presents a comparative evaluation of the average and switching models of a dc-dc boost converter from the point of view of real-time simulation. Both models are used to simulate the converter in real time on a Field Programmable Gate Array (FPGA) platform. The converter is considered to operate over a wide range of operating conditions and may transition between continuous conduction mode (CCM) and discontinuous conduction mode (DCM). While the average model is known to be computationally efficient from the perspective of off-line simulation, it is shown here to consume more logic resources than the switching model for real-time simulation of the dc-dc converter. Further, evaluation of the boundary condition between CCM and DCM is found to be the main reason for the increased resource consumption of the average model.
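
The two models being compared can be sketched as fixed-step state updates of the inductor current and capacitor voltage: the switching model uses the instantaneous switch state, the average model uses the duty ratio, and clamping the inductor current at zero is a simple way to capture DCM. Component values, step size, and the clamping approach below are illustrative assumptions, not the paper's FPGA implementation.

    # Fixed-step models of a dc-dc boost converter (illustrative values only).
    Vin, L, C, R, D = 12.0, 100e-6, 220e-6, 20.0, 0.5
    fsw, dt = 50e3, 1e-7
    iL_sw = vC_sw = 0.0      # switching-model states
    iL_av = vC_av = 0.0      # average-model states

    for k in range(200000):
        t = k * dt
        s = 1.0 if (t * fsw) % 1.0 < D else 0.0          # instantaneous switch state
        # Switching model: diode blocks when iL would go negative (crude DCM handling)
        iL_sw += dt * (Vin - (1.0 - s) * vC_sw) / L
        iL_sw = max(iL_sw, 0.0)
        vC_sw += dt * ((1.0 - s) * iL_sw - vC_sw / R) / C
        # Average model: replace the switch state by the duty ratio D
        iL_av += dt * (Vin - (1.0 - D) * vC_av) / L
        iL_av = max(iL_av, 0.0)
        vC_av += dt * ((1.0 - D) * iL_av - vC_av / R) / C

    print("switching-model output voltage:", round(vC_sw, 2))
    print("average-model output voltage:  ", round(vC_av, 2))

A proper average model additionally has to detect the CCM/DCM boundary and switch between two sets of averaged equations; it is exactly this boundary evaluation that the paper identifies as the main source of extra logic resources on the FPGA.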

Relevance: 100.00%

Abstract:

Retransmission protocols such as HDLC and TCP are designed to ensure reliable communication over noisy channels (i.e., channels that can corrupt messages). Thakkar et al. [15] have recently presented an algorithmic verification technique for deterministic streaming string transducer (DSST) models of such protocols. The verification problem is posed as equivalence checking between the specification and protocol DSSTs. In this paper, we argue that more general models need to be obtained using non-deterministic streaming string transducers (NSSTs). However, equivalence checking is undecidable for NSSTs. We present two classes in which the models belong to a sub-class of NSSTs for which equivalence checking is decidable. (C) 2015 Elsevier B.V. All rights reserved.
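
For readers unfamiliar with the transducer model, the toy sketch below (not one of the protocol models from the paper) illustrates the register semantics of a deterministic streaming string transducer: a single left-to-right pass over the input, a fixed set of string registers updated copylessly at each symbol, and an output assembled from the registers at the end.

    # Toy deterministic streaming string transducer (DSST): one pass over the
    # input, string registers updated copylessly, output built at the end.
    # This instance maps u#v to v#u; it illustrates DSST registers only.
    def dsst_swap_around_hash(word):
        X, Y = "", ""          # string registers
        seen_hash = False
        for ch in word:
            if ch == "#" and not seen_hash:
                seen_hash = True
            elif not seen_hash:
                X = X + ch     # before the '#': append to register X
            else:
                Y = Y + ch     # after the '#': append to register Y
        return (Y + "#" + X) if seen_hash else X

    print(dsst_swap_around_hash("data#ack"))   # -> "ack#data"

An NSST allows several transitions per input symbol, so a single input may map to a set of outputs, which is what makes equivalence checking undecidable in general.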

Relevance: 100.00%

Abstract:

In metropolitan cities, public transportation plays a vital role in the mobility of people, and new routes have to be introduced frequently because of the fast growth of the city in terms of population and size. Whenever a new route is introduced or the frequency of buses is increased, the non-revenue kilometers covered by the buses increase, because depots and route starting/ending points are at different places. These non-revenue kilometers, or dead kilometers, depend on the distance between the depot and the route starting/ending point. Dead kilometers not only cause a revenue loss but also increase the operating cost because of the extra kilometers covered by buses. Reducing dead kilometers is therefore necessary for the economic growth of the public transportation system. In this study, attention is focused on minimizing dead kilometers by optimizing the allocation of buses to depots based on the shortest distance between a depot and the route starting/ending points. Depot capacity and the time period of operation are also considered during the allocation of buses, to ensure parking safety and proper maintenance of buses. A mathematical model incorporating these parameters, a mixed integer program, is developed and applied to the routes presently operated by the Bangalore Metropolitan Transport Corporation (BMTC) to obtain an optimal allocation of buses to depots. A database of dead kilometers for all schedules in BMTC is generated from the Form-4 (trip sheet) of each schedule to analyze depot-wise and division-wise dead kilometers. The study also suggests alternative locations where depots could be located to reduce dead kilometers. Copyright (C) 2015 John Wiley & Sons, Ltd.
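
A minimal sketch of the kind of mixed integer program described above: binary variables assign each route (schedule) to a depot, the objective sums dead kilometers weighted by daily trips, and depot capacities bound the assignments. The toy data and the use of the PuLP library are illustrative assumptions, not the study's formulation or solver.

    import pulp

    # Toy mixed integer program for depot allocation (illustrative data).
    # x[r, d] = 1 if route r is assigned to depot d; the objective is total
    # dead kilometres (depot-to-route-terminal distance times daily trips).
    routes = ["R1", "R2", "R3"]
    depots = ["D1", "D2"]
    dead_km = {("R1", "D1"): 2.0, ("R1", "D2"): 5.0,
               ("R2", "D1"): 4.0, ("R2", "D2"): 1.5,
               ("R3", "D1"): 3.0, ("R3", "D2"): 3.5}
    trips = {"R1": 10, "R2": 8, "R3": 12}
    capacity = {"D1": 2, "D2": 2}             # max routes a depot can host

    prob = pulp.LpProblem("dead_km_minimisation", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("assign", list(dead_km), cat="Binary")
    prob += pulp.lpSum(dead_km[r, d] * trips[r] * x[r, d] for (r, d) in dead_km)
    for r in routes:                          # every route gets exactly one depot
        prob += pulp.lpSum(x[r, d] for d in depots) == 1
    for d in depots:                          # depot capacity constraint
        prob += pulp.lpSum(x[r, d] for r in routes) <= capacity[d]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({(r, d): int(x[r, d].value()) for (r, d) in dead_km})

The full model in the study additionally accounts for the time period of operation and for parking/maintenance requirements, which would appear as further constraints on the same assignment variables.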

Relevance: 100.00%

Abstract:

This paper intends to provide an overview of the rich legacy of models and theories that have emerged in the last fifty years of the relatively young discipline of design research, and identifies some of the major areas of further research. It addresses the following questions: What are the major theories and models of design? How are design theory and model defined, and what is their purpose? What are the criteria they must satisfy to be considered a design theory or model? How should a theory or model of design be evaluated or validated? What are the major directions for further research?

Relevance: 100.00%

Abstract:

OBJECTIVE: To examine the role of androgens on birth weight in genetic models of altered androgen signalling. SETTING: Cambridge Disorders of Sex Development (DSD) database and the Swedish national screening programme for congenital adrenal hyperplasia (CAH). PATIENTS: (1) 29 girls with XY karyotype and mutation positive complete androgen insensitivity syndrome (CAIS); (2) 43 girls and 30 boys with genotype confirmed CAH. MAIN OUTCOME MEASURES: Birth weight, birth weight-for-gestational-age (birth weight standard deviation score (SDS)) calculated by comparison with national references. RESULTS: Mean birth weight SDS in CAIS XY infants was higher than the reference for girls (mean, 95% CI: 0.4, 0.1 to 0.7; p=0.02) and was similar to the national reference for boys (0.1, -0.2 to 0.4). Birth weight SDS in CAH girls was similar to the national reference for girls (0.0, -0.2 to 0.2) and did not vary by severity of gene mutation. Birth weight SDS in CAH boys was also similar to the national reference for boys (0.2, -0.2 to 0.6). CONCLUSION: CAIS XY infants have a birth weight distribution similar to normal male infants and birth weight is not increased in infants with CAH. Alterations in androgen signalling have little impact on birth weight. Sex dimorphism in birth size is unrelated to prenatal androgen exposure.