849 results for CELLULAR SCALING RULES


Relevance:

100.00%

Publisher:

Abstract:

Current advanced cloud infrastructure management solutions allow scheduling actions for dynamically changing the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, which is able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the Service Level Agreements (SLAs) for controlling the scaling of distributed services by combining data-analysis mechanisms with application benchmarking using multiple VM configurations. By processing data sets generated from multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical SLA parameters. By combining this set of predictor metrics with a heuristic for selecting the appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used for controlling the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems, and show how dynamically generated SLAs can be successfully used for controlling the management of distributed-service scaling.
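
As a concrete illustration of how such an inferred rule might look, here is a minimal Python sketch of a threshold-based scale-in/scale-out decision; the metric names and thresholds are hypothetical stand-ins for the benchmark-derived predictor metrics described in the abstract, not the paper's actual rule.

```python
# Minimal sketch of an inferred scaling rule for one service tier.
# Metric names and thresholds are illustrative assumptions; in the paper
# they would be inferred from benchmark-generated data sets.

def scaling_decision(current_vms, cpu_util, queue_len,
                     cpu_high=0.75, cpu_low=0.30, queue_high=50):
    """Return the desired VM count for one service tier.

    cpu_util  -- average CPU utilisation across the tier's VMs (0..1)
    queue_len -- average request-queue length, a predictor of SLA latency
    """
    if cpu_util > cpu_high or queue_len > queue_high:
        return current_vms + 1          # scale out along the chosen path
    if cpu_util < cpu_low and current_vms > 1:
        return current_vms - 1          # scale in to free resources
    return current_vms                  # SLA predicted to hold; no change

print(scaling_decision(current_vms=3, cpu_util=0.82, queue_len=12))  # -> 4
```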

Relevance:

100.00%

Publisher:

Abstract:

The human brain is often considered to be the most cognitively capable among mammalian brains and to be much larger than expected for a mammal of our body size. Although the number of neurons is generally assumed to be a determinant of computational power, and despite the widespread quotes that the human brain contains 100 billion neurons and ten times more glial cells, the absolute number of neurons and glial cells in the human brain remains unknown. Here we determine these numbers by using the isotropic fractionator and compare them with the expected values for a human-sized primate. We find that the adult male human brain contains on average 86.1 ± 8.1 billion NeuN-positive cells ("neurons") and 84.6 ± 9.8 billion NeuN-negative ("nonneuronal") cells. With only 19% of all neurons located in the cerebral cortex, greater cortical size (representing 82% of total brain mass) in humans compared with other primates does not reflect an increased relative number of cortical neurons. The ratios between glial cells and neurons in the human brain structures are similar to those found in other primates, and their numbers of cells match those expected for a primate of human proportions. These findings challenge the common view that humans stand out from other primates in their brain composition and indicate that, with regard to numbers of neuronal and nonneuronal cells, the human brain is an isometrically scaled-up primate brain. J. Comp. Neurol. 513:532-541, 2009. (c) 2009 Wiley-Liss, Inc.
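
The headline figures can be sanity-checked with a couple of lines of arithmetic (values taken directly from the abstract):

```python
# Back-of-envelope check of the abstract's figures.
neurons = 86.1e9        # NeuN-positive cells
nonneuronal = 84.6e9    # NeuN-negative cells

glia_neuron_ratio = nonneuronal / neurons
cortical_neurons = 0.19 * neurons   # 19% of all neurons are cortical

print(f"glia:neuron ratio ~ {glia_neuron_ratio:.2f}")  # ~0.98, i.e. ~1:1, not 10:1
print(f"cortical neurons  ~ {cortical_neurons / 1e9:.1f} billion")  # ~16.4 billion
```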

Relevance:

80.00%

Publisher:

Abstract:

Data are reported on the background and performance of the K6 screening scale for serious mental illness (SMI) in the World Health Organization (WHO) World Mental Health (WMH) surveys. The K6 is a six-item scale developed to provide a brief valid screen for Diagnostic and Statistical Manual of Mental Disorders 4th edition (DSM-IV) SMI based on the criteria in the US ADAMHA Reorganization Act. Although methodological studies have documented good K6 validity in a number of countries, optimal scoring rules have never been proposed. Such rules are presented here based on analysis of K6 data in nationally or regionally representative WMH surveys in 14 countries (combined N = 41,770 respondents). Twelve-month prevalence of DSM-IV SMI was assessed with the fully-structured WHO Composite International Diagnostic Interview. Nested logistic regression analysis was used to generate estimates of the predicted probability of SMI for each respondent from K6 scores, taking into consideration the possibility of variable concordance as a function of respondent age, gender, education, and country. Concordance, assessed by calculating the area under the receiver operating characteristic curve, was generally substantial (median 0.83; range 0.76-0.89; inter-quartile range 0.81-0.85). Based on this result, optimal scaling rules are presented for use by investigators working with the K6 scale in the countries studied. Copyright (c) 2010 John Wiley & Sons, Ltd.
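
To illustrate the scoring approach, here is a small sketch on synthetic data: logistic regression of a simulated SMI indicator on K6 scores, with concordance measured by the area under the ROC curve. The data-generating curve and sample size are invented, and the WMH analysis additionally nested age, gender, education and country effects, which are omitted here.

```python
# Sketch of the scoring idea on synthetic data (not the WMH data):
# predict P(SMI) from the K6 total score with logistic regression and
# assess concordance via the area under the ROC curve.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
k6 = rng.integers(0, 25, size=2000)            # K6 total scores, range 0-24
p_true = 1 / (1 + np.exp(-(0.35 * k6 - 6)))    # assumed true risk curve
smi = rng.random(2000) < p_true                # simulated DSM-IV SMI status

model = LogisticRegression().fit(k6.reshape(-1, 1), smi)
p_hat = model.predict_proba(k6.reshape(-1, 1))[:, 1]
print(f"AUC = {roc_auc_score(smi, p_hat):.2f}")  # the paper reports 0.76-0.89
```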

Relevance:

80.00%

Publisher:

Abstract:

Computer-aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the electron device physics; however, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus an empirical active device model is generally described by an extrinsic linear part which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task for circuit designers is evaluating the ultimate potential of a device for specific applications. In fact, once the technology has been selected, the designer chooses the best device for the particular application and the best devices for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of different-in-size devices, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the device characterization phase) and their reconstruction (in the identification or even simulation phase) are two of the most important aspects of empirical modelling. This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should possibly maintain the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, the alternatives available in the literature are either complicated technology-dependent scaling rules or computationally inefficient distributed models. This thesis shows how the above-mentioned problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, which is identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability properties than conventional empirical models.
Concerning the issue of the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by using, in the sampled voltage domain, typical methods of time-domain sampling theory.
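
As an illustration of the table look-up idea, the sketch below continuously approximates a drain-current characteristic from a finite grid of bias points; the grid data are invented, and a generic interpolant stands in for the sampling-theory reconstruction kernel developed in the thesis.

```python
# Sketch of a table look-up nonlinearity: continuous approximation of a
# drain-current characteristic i_DS(v_GS, v_DS) from a finite grid of
# measured bias points. Grid values below are made up for illustration.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

v_gs = np.linspace(-1.5, 0.0, 7)      # gate-source voltage samples (V)
v_ds = np.linspace(0.0, 8.0, 9)       # drain-source voltage samples (V)
VG, VD = np.meshgrid(v_gs, v_ds, indexing="ij")
i_ds = 0.1 * (VG + 1.5) ** 2 * np.tanh(VD)   # stand-in for measured currents (A)

# A simple interpolant stands in here for the sampling-theory kernel
# used in the thesis for reconstruction between grid points.
lut = RegularGridInterpolator((v_gs, v_ds), i_ds, method="linear")
print(lut([[-0.7, 3.2]]))   # current at an off-grid bias point
```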

Relevance:

80.00%

Publisher:

Abstract:

Cloud computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges for maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of allocating resources as well as that of scaling the infrastructure. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications, and for enabling automated cloud-infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications and then using these relations to build scaling rules that a CMS can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All of the presented research was implemented and tested using enterprise distributed applications.
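
For flavour, here is a toy greedy heuristic for the multi-objective VM-allocation step; the objectives (wasted capacity and host-activation cost) and their weights are illustrative assumptions, not the dissertation's algorithm.

```python
# Toy multi-objective VM placement: put each VM on the host that minimises
# a weighted sum of leftover capacity (bin-packing waste) and the cost of
# powering on a previously idle host. Weights are illustrative.

def allocate(vm_demands, host_caps, w_waste=1.0, w_power=0.5):
    """vm_demands: CPU demand per VM; host_caps: CPU capacity per host.
    Returns {vm_index: host_index}, or None if some VM cannot be placed."""
    free = list(host_caps)
    active = [False] * len(host_caps)
    placement = {}
    for vm, demand in enumerate(vm_demands):
        best, best_cost = None, float("inf")
        for h, cap in enumerate(free):
            if cap < demand:
                continue                        # host cannot fit this VM
            waste = cap - demand                # leftover-capacity objective
            power = 0.0 if active[h] else 1.0   # host-activation objective
            cost = w_waste * waste + w_power * power
            if cost < best_cost:
                best, best_cost = h, cost
        if best is None:
            return None                         # no feasible placement
        placement[vm] = best
        free[best] -= demand
        active[best] = True
    return placement

print(allocate([2, 4, 3], [8, 4]))  # -> {0: 1, 1: 0, 2: 0}
```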

Relevance:

80.00%

Publisher:

Abstract:

Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource-management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means for specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimal amount of computing and network resources for ensuring that the performance requirements of all of his or her applications are met, and in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized while the performance of tenants' applications is maximized. Motivated by the complexities associated with the management and scaling of distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces, and we show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We further provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
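
A compact sketch of the genetic-algorithm idea is shown below; the genome encoding (one host index per VM), the weighted-sum fitness and the operator choices are assumptions for illustration rather than the thesis's actual design.

```python
# Toy genetic algorithm for VM-to-host allocation. A genome assigns each VM
# to a host; fitness penalises overloaded hosts and rewards packing VMs
# onto few hosts (a weighted-sum stand-in for multi-objective optimisation).
import random

VM_CPU = [2, 4, 3, 1, 5, 2]   # made-up VM demands
HOST_CAP = [10, 10, 10]       # made-up host capacities

def fitness(genome):
    load = [0] * len(HOST_CAP)
    for vm, host in enumerate(genome):
        load[host] += VM_CPU[vm]
    overload = sum(max(0, l - c) for l, c in zip(load, HOST_CAP))
    hosts_on = sum(1 for l in load if l > 0)
    return -(10 * overload + hosts_on)   # higher is better

def evolve(pop_size=40, generations=200, p_mut=0.1):
    pop = [[random.randrange(len(HOST_CAP)) for _ in VM_CPU]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(VM_CPU))    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:               # point mutation
                child[random.randrange(len(VM_CPU))] = \
                    random.randrange(len(HOST_CAP))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))   # e.g. a no-overload allocation on two hosts
```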

Relevance:

30.00%

Publisher:

Abstract:

We investigate the sensitivity of the composite cellular automaton of H. Fuks [Phys. Rev. E 55, R2081 (1997)] to noise and assess the density classification performance of the resulting probabilistic cellular automaton (PCA) numerically. We conclude that the composite PCA performs the density classification task reliably only up to very small levels of noise. In particular, it cannot outperform the noisy Gacs-Kurdyumov-Levin automaton, an imperfect classifier, for any level of noise. While the original composite CA is nonergodic, analyses of relaxation times indicate that its noisy version is an ergodic automaton, with the relaxation times decaying algebraically over an extended range of parameters with an exponent very close (possibly equal) to the mean-field value.
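
For reference, the noiseless composite scheme of Fukś can be sketched in a few lines: run the traffic rule (ECA 184) and then the majority rule (ECA 232) on a ring, with step counts following Fukś's prescription. Adding the noise studied above would mean flipping each updated cell with some small probability.

```python
# Sketch of Fuks's composite density classifier on a ring of N cells:
# ~N/2 steps of ECA 184 (traffic) followed by ~N/2 steps of ECA 232
# (majority); the lattice settles to all-0s or all-1s according to the
# initial majority.
import random

def step(cells, rule):
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

N = 149
cells = [random.randint(0, 1) for _ in range(N)]
majority = sum(cells) > N // 2
for _ in range((N - 2) // 2):
    cells = step(cells, 184)
for _ in range((N - 1) // 2):
    cells = step(cells, 232)
print(all(c == 1 for c in cells) == majority)   # True: classified correctly
```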

Relevance:

30.00%

Publisher:

Abstract:

Stavskaya's model is a one-dimensional probabilistic cellular automaton (PCA) introduced at the end of the 1960s as an example of a model displaying a nonequilibrium phase transition. Although its absorbing-state phase transition is well understood nowadays, the model had never received a full numerical treatment to investigate its critical behavior. In this Brief Report we characterize the critical behavior of Stavskaya's PCA by means of Monte Carlo simulations and finite-size scaling analysis. The critical exponents of the model are calculated and indicate that its phase transition belongs to the directed percolation universality class of critical behavior, as would be expected on the basis of the directed percolation conjecture. We also explicitly establish the relationship of the model with the Domany-Kinzel PCA on its directed-site-percolation line, a connection that seems to have gone unnoticed in the literature so far.
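
A minimal Monte Carlo sketch follows, assuming the common formulation of Stavskaya's PCA: each cell deterministically takes the minimum of itself and its right neighbour, and is then set to 1 with noise probability alpha, so the all-1s configuration is absorbing. The parameters are illustrative, not the paper's.

```python
# Monte Carlo sketch of Stavskaya's PCA on a ring (assumed formulation):
# density of 0s ("active" sites) is the order parameter; it vanishes when
# the noise alpha exceeds the critical value.
import random

def stationary_density(alpha, n=1000, steps=2000):
    cells = [0] * n                      # start fully active
    for _ in range(steps):
        cells = [1 if random.random() < alpha
                 else min(cells[i], cells[(i + 1) % n])
                 for i in range(n)]
    return cells.count(0) / n            # stationary density of active sites

for alpha in (0.20, 0.28, 0.36):
    print(alpha, stationary_density(alpha))  # drops to ~0 past the transition
```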

Relevance:

30.00%

Publisher:

Abstract:

The spread of an infectious disease in a population involves interactions leading to an epidemic outbreak through a network of contacts. Extending Watts and Strogatz (1998), who showed that short-distance connections create a small-world effect, a model combining short- and long-distance probabilistic, regularly updated contacts helps account for spatial heterogeneity. The method is based on cellular automata. The presence of long-distance connections accelerates the small-world effect, as if the world shrank in proportion to their total number.
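
The "shrinking world" effect can be reproduced with a toy susceptible-infected spread on a ring augmented with random long-distance links; all parameters below are illustrative, not the paper's.

```python
# Toy SI spread on a ring with k nearest neighbours plus random shortcuts
# (Watts-Strogatz style). More long-distance links -> faster saturation.
import random

def si_time_to_saturation(n=500, k=2, long_links=0, p_infect=1.0):
    neigh = {i: {(i + d) % n for d in range(-k, k + 1) if d} for i in range(n)}
    for _ in range(long_links):                    # add random long links
        a, b = random.sample(range(n), 2)
        neigh[a].add(b); neigh[b].add(a)
    infected = {0}
    t = 0
    while len(infected) < n:
        new = {j for i in infected for j in neigh[i]
               if j not in infected and random.random() < p_infect}
        infected |= new
        t += 1
    return t

for links in (0, 10, 50):
    print(links, si_time_to_saturation(long_links=links))  # time shrinks
```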

Relevance:

30.00%

Publisher:

Abstract:

Analyses of species-diversity patterns of remote islands have been crucial to the development of biogeographic theory, yet little is known about corresponding patterns in functional traits on islands and how, for example, they may be affected by the introduction of exotic species. We collated trait data for spiders and beetles and used a functional diversity index (FRic) to test for nonrandomness in the contribution of endemic, other native (also combined as indigenous), and exotic species to functional-trait space across the nine islands of the Azores. In general, for both taxa and for each distributional category, functional diversity increases with species richness, which, in turn, scales with island area. Null simulations support the hypothesis that each distributional group contributes to functional diversity in proportion to its species richness. Exotic spiders have added novel trait space to a greater degree than have exotic beetles, likely indicating greater impact of the reduction of immigration filters and/or differential historical losses of indigenous species. Analyses of species occurring in native-forest remnants provide limited indications of the operation of habitat filtering of exotics for three islands, but only for beetles. Although the general linear (not saturating) pattern of trait-space increase with richness of exotics suggests an ongoing process of functional enrichment and accommodation, further work is urgently needed to determine how estimates of extinction debt of indigenous species should be adjusted in the light of these findings.
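
FRic-style indices are typically computed as the volume of the convex hull spanned by species in trait space; a minimal sketch on made-up trait data:

```python
# Sketch of a FRic-style computation: functional richness as the convex-hull
# volume of species positions in trait space (three invented traits).
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
indigenous = rng.normal(size=(20, 3))        # trait vectors of 20 species
exotic = rng.normal(loc=1.5, size=(5, 3))    # exotics shifted in trait space

fric_native = ConvexHull(indigenous).volume
fric_all = ConvexHull(np.vstack([indigenous, exotic])).volume
print(f"novel trait space added by exotics: {fric_all - fric_native:.2f}")
```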

Relevance:

30.00%

Publisher:

Abstract:

Biologicals have been used for decades in biopharmaceutical topical preparations, and because cellular therapies are now routinely used in the clinic, they have gained significant attention. Different derivatives are possible from different cell and tissue sources, making the selection of cell types and the establishment of consistent cell banks crucial steps in the initial whole-cell bioprocessing. Various cell and tissue types have been used in the treatment of skin wounds, including autologous and allogeneic skin cells, platelets, placenta and amniotic extracts from either human or animal sources. Experience with progenitor cells shows that they may provide an interesting cell choice owing to the ease of scaling up and their known capacity for wound healing without scarring. Using defined animal cell lines to develop cell-free derivatives may provide initial starting material for pharmaceutical formulations that helps with overall stability. Cell lines derived from ovine tissue (skin, muscle, connective tissue) can be developed in short periods of time, and the consistency of these cell lines was monitored by cellular life-span, protein concentrations, stability and activity. Each cell line had long culture periods of up to 37-41 passages, and protein measures for each cell line at passages 2-15 showed only a 1.4-fold maximal difference. Growth-stimulation activity towards two target skin cell lines (GM01717 and CRL-1221; from 40-year-old human males) at concentrations up to 6 μg/ml showed 2-3-fold (single extracts) and 3-7-fold (co-cultured extracts) increases. Proteins from co-culture remained stable for up to one year in pharmaceutical preparations, as shown by separation on SDS-PAGE gels. Pharmaceutical cell-free preparations were used for veterinary and human wounds and burns. Cell lines and cell-free extracts can show remarkable consistency and stability for the preparation of biopharmaceutical creams, especially when cells are co-cultured, and have positive effects on tissue repair.

Relevance:

30.00%

Publisher:

Abstract:

"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye) Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural and mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected of the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by the systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations observed in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, relying on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one. The outcome is remarkable, as the resulting topologies share properties of both regular and random networks and display similarities with the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways this model is close to cellular automata, although it is not expected to perform any task; instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place on Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of the previous GRN model, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
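
A minimal sketch of the baseline Kauffman model discussed above (N genes, K = 2 random inputs each, random truth tables, classical synchronous update) is given below; replacing the synchronous step with an asynchronous or cascading one is the kind of modification the thesis proposes.

```python
# Sketch of a Kauffman random Boolean network with synchronous update.
# The trajectory of a finite deterministic system must eventually revisit
# a state; the cycle it falls into is an attractor.
import random

N, K = 10, 2
inputs = [random.sample(range(N), K) for _ in range(N)]           # wiring
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    # Every gene reads its K inputs and looks up its next value.
    return tuple(tables[g][(state[inputs[g][0]] << 1) | state[inputs[g][1]]]
                 for g in range(N))

state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
t = 0
while state not in seen:        # iterate until a state repeats
    seen[state] = t
    state = step(state)
    t += 1
print("attractor length:", t - seen[state])
```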

Relevance:

30.00%

Publisher:

Abstract:

Morphogen gradients specify cell fate as a function of cellular position. Experiments in Drosophila embryos have shown that the Bicoid (Bcd) gradient is precise and exhibits some degree of scaling. We present experimental results on the precision of Bcd target genes in embryos with a single, double or quadruple dose of bicoid, demonstrating that precision is highest at mid-embryo and is position dependent rather than gene dependent. This confirms that the major contribution to precision is achieved already during Bcd gradient formation. Modeling this dynamic process, we investigate precision for inter-embryo fluctuations in different parameters affecting gradient formation. Within our modeling framework, the observed precision can only be achieved by a transient Bcd profile. Studying different extensions of our modeling framework reveals that scaling is generally position dependent and decreases toward the posterior pole. Our measurements confirm this trend, indicating almost perfect scaling except for the anterior-most expression domains, which overcompensate for fluctuations in embryo length.
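
A standard way to see why a dosage series probes the gradient is the steady-state exponential approximation (a simplification: the study itself argues that the relevant profile is transient). With amplitude c_0, decay length lambda, and a target-gene boundary at threshold concentration c_theta:

```latex
c(x) = c_0\, e^{-x/\lambda}, \qquad
x_\theta = \lambda \ln\frac{c_0}{c_\theta}, \qquad
\left.\Delta x_\theta\right|_{c_0 \to 2 c_0} = \lambda \ln 2 \approx 0.69\,\lambda
```

Absent compensation, each doubling of the bicoid dose should therefore shift every boundary posteriorly by the same amount, lambda ln 2, which is what makes boundary positions in single-, double- and quadruple-dose embryos informative about precision and scaling.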

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we study the behavioural aspects of agents interacting in queueing systems, using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and providers' decisions on the formation of queues. In a first setting, we consider customers with a certain degree of risk aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the waiting time at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotonic relationship between the degree of risk aversion and overall performance. Indeed, a population of customers with an intermediate degree of risk aversion generally incurs a higher average waiting time than a population of risk-neutral or highly risk-averse agents. Next, we incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that customer behaviour and provider decisions exhibit strong path dependence. Furthermore, we show that the providers' decisions cause the weighted average waiting time to converge to the market's reference waiting time. Finally, a laboratory experiment in which subjects played the role of a service provider allowed us to conclude that capacity installation and dismantling delays significantly affect subjects' performance and decisions. In particular, the provider's decisions are influenced by its order backlog, its currently available service capacity, and the capacity adjustment decisions it has already taken but not yet implemented. - Queueing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queueing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queueing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. Although this work has been useful to improve the efficiency of many queueing systems, or to design new processes in social and physical systems, it has only provided us with a limited ability to explain the behaviour observed in many real queues. In this dissertation we differ from this traditional research by analysing how the agents involved in the system make decisions instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010).
We focus on studying behavioural aspects in queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results. In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of the sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information; in contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk aversion who take uncertainty into account. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility; in this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk aversion and system performance: customers with an intermediate degree of risk aversion typically incur higher sojourn times and, in particular, rarely achieve the Nash equilibrium, whereas risk-neutral customers have the highest probability of achieving it. Chapter 3 considers a service system similar to the previous one, but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are in accounting for their previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation.
The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, take decisions in a laboratory setting regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010). This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns; these groups are labelled gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated with the decisions taken one period earlier. Subsequently we formulate a heuristic to model the decision rule used by the subjects in the laboratory. We find that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical literature on queueing systems, which focuses on optimising performance measures and analysing equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity-adjustment process in queueing systems. This framework is still in its early stages and accordingly there is large potential for further work spanning several research topics. Interesting extensions include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information on which to base their decisions (e.g. service price, quality, customers' profile); analysing different decision rules; and studying other characteristics which determine the profile of customers and managers.
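
The adaptive-expectations mechanism at the heart of Chapters 2 and 3 fits in a few lines; the weights and numbers below are illustrative:

```python
# Minimal sketch of the adaptive-expectations update described above:
# the expected sojourn time at each facility blends memory with new
# information; theta > 0.5 is "conservative", theta < 0.5 "reactive".

def update_expectation(memory, new_info, theta=0.7):
    """Adaptive expectations: weighted blend of old expectation and
    latest observation (own experience or best neighbour's)."""
    return theta * memory + (1 - theta) * new_info

expectations = {"A": 12.0, "B": 9.0}   # expected sojourn times per facility
observed = {"A": 8.0, "B": 14.0}       # latest experience / neighbour info
expectations = {f: update_expectation(expectations[f], observed[f])
                for f in expectations}
choice = min(expectations, key=expectations.get)   # join the fastest facility
print(expectations, "->", choice)
```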

Relevance:

30.00%

Publisher:

Abstract:

Conservation laws in physics are numerical invariants of the dynamics of a system. In cellular automata (CA), a similar concept has already been defined and studied. To each local pattern of cell states a real value is associated, interpreted as the "energy" (or "mass", or ...) of that pattern. The overall "energy" of a configuration is simply the sum of the energies of the local patterns appearing at different positions in the configuration. We have a conservation law for that energy if the total energy of each configuration remains constant during the evolution of the CA. For a given conservation law, it is desirable to find microscopic explanations for the dynamics of the conserved energy in terms of flows of energy from one region toward another. Often the energy values are non-negative integers, interpreted as the number of "particles" distributed over a configuration. In such cases, it is conjectured that one can always provide a microscopic explanation for the conservation laws by prescribing rules for the local movement of the particles. The one-dimensional case has already been solved by Fukś and Pivato. We extend this to two-dimensional cellular automata with radius-0.5 neighborhood on the square lattice. We then consider conservation laws in which the energy values are chosen from a commutative group or semigroup. In this case, the class of all conservation laws for a CA forms a partially ordered hierarchy. We study the structure of this hierarchy and prove some basic facts about it. Although the local properties of this hierarchy (at least in the group-valued case) are tractable, its global properties turn out to be algorithmically inaccessible. In particular, we prove that it is undecidable whether this hierarchy is trivial (i.e., whether the CA has any non-trivial conservation law at all) or unbounded. We point out some interconnections between the structure of this hierarchy and the dynamical properties of the CA. We show that positively expansive CA do not have non-trivial conservation laws. We also investigate a curious relationship between conservation laws and invariant Gibbs measures in reversible and surjective CA. Gibbs measures are known to coincide with the equilibrium states of a lattice system defined in terms of a Hamiltonian. For reversible cellular automata, each conserved quantity may play the role of a Hamiltonian and provides a Gibbs measure (or a set of Gibbs measures, in case of phase multiplicity) that is invariant. Conversely, every invariant Gibbs measure provides a conservation law for the CA. For surjective CA, the former statement also follows (in a slightly different form) from the variational characterization of Gibbs measures. For one-dimensional surjective CA, we show that each invariant Gibbs measure provides a conservation law. We also prove that surjective CA almost surely preserve the average information content per cell with respect to any probability measure.
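
The canonical concrete example of such a conservation law, which a short simulation can verify, is the "traffic" automaton ECA 184: its number of 1s ("particles") is invariant, and the microscopic explanation is that each particle moves one cell to the right exactly when the cell ahead is empty. A sketch (not tied to the thesis's formal framework):

```python
# Verify a particle-number conservation law in a CA: the traffic rule
# (ECA 184) preserves the number of 1s on a ring. A cell becomes 1 iff a
# particle arrives from the left (left=1, self=0) or a particle is blocked
# in place (self=1, right=1).
import random

def step184(cells):
    n = len(cells)
    return [1 if (cells[(i - 1) % n], cells[i]) == (1, 0)      # particle arrives
            or (cells[i], cells[(i + 1) % n]) == (1, 1)        # particle blocked
            else 0
            for i in range(n)]

cells = [random.randint(0, 1) for _ in range(30)]
before = sum(cells)
for _ in range(100):
    cells = step184(cells)
    assert sum(cells) == before    # the "energy" (particle count) is invariant
print("particles:", before)
```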