883 results for balanced-budget rules
Abstract:
How can we ensure that knowledge embedded in a program is applied effectively? Traditionally the answer to this question has been sought in different problem solving paradigms and in different approaches to encoding and indexing knowledge. Each of these is useful with a certain variety of problem, but they all share a common problem: they become ineffective in the face of a sufficiently large knowledge base. How then can we make it possible for a system to continue to function in the face of a very large number of plausibly useful chunks of knowledge? In response to this question we propose a framework for viewing issues of knowledge indexing and retrieval, a framework that includes what appears to be a useful perspective on the concept of a strategy. We view strategies as a means of controlling invocation in situations where traditional selection mechanisms become ineffective. We examine ways to effect such control, and describe meta-rules, a means of specifying strategies which offers a number of advantages. We consider at some length how and when it is useful to reason about control, and explore the advantages meta-rules offer for doing this.
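The abstract treats strategies as control over rule invocation but gives no concrete syntax; the following is a minimal sketch, with invented rule and context formats, of how a meta-rule can reorder a conflict set before an object-level rule fires:

    # Minimal sketch of a meta-rule as a reorderer over candidate
    # object-level rules. The rule format and names are hypothetical,
    # invented for illustration; the paper is not tied to this syntax.

    def prefer_specific_rules(candidates, context):
        """Meta-rule: prefer object-level rules whose conditions mention
        the current goal, pushing generic rules to the back."""
        return sorted(candidates,
                      key=lambda rule: context["goal"] in rule["conditions"],
                      reverse=True)

    def invoke(rules, meta_rules, context):
        """Select applicable rules, let each meta-rule reorder or prune
        the conflict set, then fire the first survivor."""
        candidates = [r for r in rules
                      if all(c in context["facts"] for c in r["conditions"])]
        for meta in meta_rules:
            candidates = meta(candidates, context)
        return candidates[0]["action"] if candidates else None

    rules = [
        {"conditions": ["infection"], "action": "order-broad-culture"},
        {"conditions": ["infection", "meningitis"], "action": "order-csf-culture"},
    ]
    context = {"facts": ["infection", "meningitis"], "goal": "meningitis"}
    print(invoke(rules, [prefer_specific_rules], context))  # order-csf-culture

The point of the sketch is the separation of levels: the meta-rule reasons about which rules to invoke, rather than about the domain itself.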
Abstract:
The rules which epitomise good writing may on occasions be broken, deliberately and with what the writers judge to be good purpose. This can well occur when students or staff set out to engage effectively with their personal and professional development, through personal reflection on and in experiences. They may do this in what has been called “stream of consciousness” writing, which is deliberately compiled in a manner at variance with the general rules for best practice. The rationale for such an unusual decision, namely to engage in what is frankly disorderly writing, is set out briefly in this chapter. Its characteristics are summarised, in implicit contrast with more conventional styles of writing. Examples are included of claims for the effectiveness of this style when used for developmental purposes by students and staff; and reference is made to the publications of some of those who have endorsed this approach.
Abstract:
M. Galea and Q. Shen. Fuzzy rules from ant-inspired computation. Proceedings of the 13th International Conference on Fuzzy Systems, pages 1691-1696, 2004.
Abstract:
M. Galea and Q. Shen. FRANTIC - A system for inducing accurate and comprehensible fuzzy rules. Proceedings of the 2004 UK Workshop on Computational Intelligence, pages 136-143.
Abstract:
M. Galea and Q. Shen. Linguistic hedges for ant-generated rules. Proceedings of the 15th International Conference on Fuzzy Systems, pages 9105-9112, 2006.
Abstract:
M. Galea, Q. Shen and V. Singh. Encouraging Complementary Fuzzy Rules within Iterative Rule Learning. Proceedings of the 2005 UK Workshop on Computational Intelligence, pages 15-22.
Abstract:
M. Galea and Q. Shen. Simultaneous ant colony optimisation algorithms for learning linguistic fuzzy rules. A. Abraham, C. Grosan and V. Ramos (Eds.), Swarm Intelligence in Data Mining, pages 75-99.
Abstract:
It is a neural network truth universally acknowledged, that the signal transmitted to a target node must be equal to the product of the path signal times a weight. Analysis of catastrophic forgetting by distributed codes leads to the unexpected conclusion that this universal synaptic transmission rule may not be optimal in certain neural networks. The distributed outstar, a network designed to support stable codes with fast or slow learning, generalizes the outstar network for spatial pattern learning. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field, of arbitrarily many nodes, where the activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level. Weight changes at a node are apportioned according to the distributed pattern of converging signals. Three types of synaptic transmission, a product rule, a capacity rule, and a threshold rule, are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur. Only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the optimal unit of long-term memory in such a system is a subtractive threshold, rather than a multiplicative weight.
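As a schematic illustration only (the one-line forms below are simplified stand-ins, not the distributed outstar's exact learning equations), the contrast between the universal multiplicative rule and the subtractive threshold the abstract argues for can be written as:

    # Schematic contrast between two of the transmission rules named in
    # the abstract; illustrative stand-ins, not the paper's equations.

    def product_rule(signal, weight):
        # classical transmission: path signal times a multiplicative weight
        return signal * weight

    def threshold_rule(signal, tau):
        # conjectured optimal unit of long-term memory: transmit only the
        # part of the path signal exceeding a learned subtractive threshold
        return max(signal - tau, 0.0)

With a maximally compressed (winner-take-all) source field the rules behave equivalently; per the abstract, it is with distributed source activity that only the threshold form avoids catastrophic forgetting.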
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency becomes a major design constraint. Dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics that are specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate into this flow academic tools and methodologies. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically between moving from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints.
A good approximation to the global optimum of the energy-constrained solution is obtained. Uniform Cost Search (UCS) is a search algorithm for traversing a weighted tree or graph. We have used UCS to search the AIG network for a specific node order in which to apply the reordering rules. After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts available in ABC. Reductions of 23% in power and 15% in delay are achieved with minimal overhead, compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
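The abstract does not spell out the annealing loop; the following is a generic sketch of the accept/reject skeleton it describes, with a hypothetical penalty-style cost function standing in for the thesis's actual delay-constrained power estimate and AIG reordering moves:

    import math
    import random

    def anneal(initial, neighbour, cost, t0=1.0, cooling=0.95, steps=2000):
        """Generic simulated annealing: always accept improving moves,
        accept worsening moves with probability exp(-delta/T).
        `neighbour` stands in for one AIG reordering move and `cost`
        for a delay-constrained power estimate; both are placeholders."""
        current, best = initial, initial
        t = t0
        for _ in range(steps):
            candidate = neighbour(current)
            delta = cost(candidate) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-12)):
                current = candidate
                if cost(current) < cost(best):
                    best = current
            t *= cooling
        return best

    # One hypothetical way to encode "optimise power under a delay
    # constraint" is a penalised cost:
    #   cost(s) = power(s) + penalty * max(0, delay(s) - delay_budget)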
Abstract:
Smoking is an expensive habit. Smoking households spend, on average, more than US$1000 annually on cigarettes. When a family member quits, in addition to the former smoker's improved long-term health, families benefit because savings from reduced cigarette expenditures can be allocated to other goods. For households in which some members continue to smoke, smoking expenditures crowd out other purchases, which may affect other household members, as well as the smoker. We empirically analyse how expenditures on tobacco crowd out consumption of other goods, estimating the patterns of substitution and complementarity between tobacco products and other categories of household expenditure. We use the Consumer Expenditure Survey data for the years 1995-2001, which we complement with regional price data and state cigarette prices. We estimate a consumer demand system that includes several main expenditure categories (cigarettes, food, alcohol, housing, apparel, transportation, medical care) and controls for socioeconomic variables and other sources of observable heterogeneity. Descriptive data indicate that, comparing smokers to nonsmokers, smokers spend less on housing. Results from the demand system indicate that as the price of cigarettes rises, households increase the quantity of food purchased, and, in some samples, reduce the quantity of apparel and housing purchased.
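The abstract does not name the demand system's functional form; purely as an illustration of what such a system estimates, a standard Almost Ideal Demand System would model each category's budget share as

    w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln(x / P),

where w_i is the expenditure share of category i (cigarettes, food, housing, and so on), p_j are category prices, x is total expenditure, and P is a price index. Substitution and crowd-out patterns of the kind reported above are read off the cross-price coefficients \gamma_{ij}.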
Abstract:
Successfully predicting the frequency dispersion of electronic hyperpolarizabilities is an unresolved challenge in materials science and electronic structure theory. We show that the generalized Thomas-Kuhn sum rules, combined with linear absorption data and measured hyperpolarizability at one or two frequencies, may be used to predict the entire frequency-dependent electronic hyperpolarizability spectrum. This treatment includes two- and three-level contributions that arise from the lowest two or three excited electronic state manifolds, enabling us to describe the unusual observed frequency dispersion of the dynamic hyperpolarizability in high oscillator strength M-PZn chromophores, where (porphinato)zinc(II) (PZn) and metal(II)polypyridyl (M) units are connected via an ethyne unit that aligns the high oscillator strength transition dipoles of these components in a head-to-tail arrangement. We show that some of these structures can possess very similar linear absorption spectra yet manifest dramatically different frequency-dependent hyperpolarizabilities, because of three-level contributions that result from excited-state-to-excited-state transition dipoles among charge polarized states. Importantly, this approach provides a quantitative scheme to use linear optical absorption spectra and very limited individual hyperpolarizability measurements to predict the entire frequency-dependent nonlinear optical response. Copyright © 2010 American Chemical Society.
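For orientation (stated here from the general sum-rule literature, not taken from this paper), the generalized Thomas-Reiche-Kuhn sum rules constrain transition energies and dipole matrix elements via

    \sum_{n=0}^{\infty} \left[ E_n - \tfrac{1}{2}(E_m + E_p) \right] x_{mn} x_{np} = \frac{\hbar^2 N}{2 m_e} \, \delta_{mp},

where x_{mn} are transition dipole matrix elements between states m and n, E_n are state energies, N is the number of electrons, and m_e the electron mass. Truncating the state sum to the lowest two or three excited manifolds gives the two- and three-level contributions invoked in the abstract.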
Abstract:
This paper analyzes a class of common-component allocation rules, termed no-holdback (NHB) rules, in continuous-review assemble-to-order (ATO) systems with positive lead times. The inventory of each component is replenished following an independent base-stock policy. In contrast to the usually assumed first-come-first-served (FCFS) component allocation rule in the literature, an NHB rule allocates a component to a product demand only if it will yield immediate fulfillment of that demand. We identify metrics as well as cost and product structures under which NHB rules outperform all other component allocation rules. For systems with certain product structures, we obtain key performance expressions and compare them to those under FCFS. For general product structures, we present performance bounds and approximations. Finally, we discuss the applicability of these results to more general ATO systems. © 2010 INFORMS.
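As a minimal sketch of the allocation logic only, not of the paper's stochastic model: under an NHB rule a component is committed to a demand only when every component that product needs is on hand, so nothing is reserved for a demand that cannot ship immediately (component names and quantities below are invented):

    # Minimal sketch of no-holdback (NHB) allocation in an
    # assemble-to-order system; inventory and bills of materials are
    # invented examples.

    def nhb_allocate(bom, on_hand):
        """Fulfil the demand only if every required component is
        available; otherwise commit nothing (no components held back)."""
        if all(on_hand.get(comp, 0) >= qty for comp, qty in bom.items()):
            for comp, qty in bom.items():
                on_hand[comp] -= qty
            return True    # demand fulfilled immediately
        return False       # demand backlogged; nothing reserved

    on_hand = {"frame": 1, "motor": 0}
    print(nhb_allocate({"frame": 1, "motor": 1}, on_hand))  # False: motor short
    print(on_hand)  # {'frame': 1, 'motor': 0} -- the frame was not reserved

FCFS, by contrast, would reserve the available frame for the oldest queued demand even though that demand cannot yet ship, which is the behavioural difference the paper analyzes.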
Abstract:
The main conclusion of this dissertation is that global H2 production within young ocean crust (<10 Mya) is higher than currently recognized, in part because current estimates of H2 production accompanying the serpentinization of peridotite may be too low (Chapter 2) and in part because a number of abiogenic H2-producing processes have heretofore gone unquantified (Chapter 3). The importance of free H2 to a range of geochemical processes makes the quantitative understanding of H2 production advanced in this dissertation pertinent to an array of open research questions across the geosciences (e.g. the origin and evolution of life and the oxidation of the Earth’s atmosphere and oceans).
The first component of this dissertation (Chapter 2) examines H2 produced within young ocean crust [e.g. near the mid-ocean ridge (MOR)] by serpentinization. In the presence of water, olivine-rich rocks (peridotites) undergo serpentinization (hydration) at temperatures of up to ~500°C but only produce H2 at temperatures up to ~350°C. A simple analytical model is presented that mechanistically ties the process to seafloor spreading and explicitly accounts for the importance of temperature in H2 formation. The model suggests that H2 production increases with the rate of seafloor spreading and the net thickness of serpentinized peridotite (S-P) in a column of lithosphere. The model is applied globally to the MOR using conservative estimates for the net thickness of lithospheric S-P, our least certain model input. Despite the large uncertainties surrounding the amount of serpentinized peridotite within oceanic crust, conservative model parameters suggest a magnitude of H2 production (~10^12 moles H2/y) that is larger than the most widely cited previous estimates (~10^11, although previous estimates range from 10^10 to 10^12 moles H2/y). Certain model relationships are also consistent with what has been established through field studies, for example that the highest H2 fluxes (moles H2/km^2 seafloor) are produced near slower-spreading ridges (<20 mm/y). Other modeled relationships are new and represent testable predictions. Principal among these is that about half of the H2 produced globally is produced off-axis beneath faster-spreading seafloor (>20 mm/y), a region where only one measurement of H2 has been made thus far and which is ripe for future investigation.
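The abstract states only that modeled production scales with spreading rate and net S-P thickness; a dimensional sketch consistent with that statement (the decomposition and symbols here are illustrative assumptions, not the dissertation's equations) is

    F_{H_2} \approx L \, u \, h_{SP} \, \rho \, y,

with L the global ridge length, u the spreading rate, h_{SP} the net S-P thickness in a lithospheric column, \rho the rock density, and y the moles of H2 released per kilogram of peridotite serpentinized below ~350°C.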
In the second part of this dissertation (Chapter 3), I construct the first budget for free H2 in young ocean crust that quantifies and compares all currently recognized H2 sources and H2 sinks. First global estimates are proposed for budget components where no previous estimate could be located, provided the literature on that component was not too sparse to do so. Results suggest that the nine known H2 sources, listed in order of quantitative importance, are: crystallization (6x10^12 moles H2/y, or 61% of total H2 production), serpentinization (2x10^12 moles H2/y, or 21%), magmatic degassing (7x10^11 moles H2/y, or 7%), lava-seawater interaction (5x10^11 moles H2/y, or 5%), low-temperature alteration of basalt (5x10^11 moles H2/y, or 5%), high-temperature alteration of basalt (3x10^10 moles H2/y, or <1%), catalysis (3x10^8 moles H2/y, <<1%), radiolysis (2x10^8 moles H2/y, <<1%), and pyrite formation (3x10^6 moles H2/y, <<1%). Next we consider two well-known H2 sinks, H2 lost to the ocean and H2 occluded within rock minerals; our analysis suggests that both are of similar size (each ~6x10^11 moles H2/y). Budgeting results suggest a large difference between H2 sources (total production = 1x10^13 moles H2/y) and H2 sinks (total losses = 1x10^12 moles H2/y). Assuming this large difference represents H2 consumed by microbes (total consumption = 9x10^12 moles H2/y), we explore rates of primary production by the chemosynthetic, sub-seafloor biosphere. Although the numbers presented require further examination and future modifications, the analysis suggests that the sub-seafloor H2 budget is similar to the sub-seafloor CH4 budget in the sense that globally significant quantities of both of these reduced gases are produced beneath the seafloor but never escape the seafloor due to microbial consumption.
The third and final component of this dissertation (Chapter 4) explores the self-organization of barchan sand dune fields. In nature, barchan dunes typically exist as members of larger dune fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work, and from field observations: dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on the upstream sides of dunes farther downwind; when dunes become sufficiently large, small dunes are born on their downwind sides ("calving"); and when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first-order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.
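The three interaction rules enumerated above map naturally onto a toy agent-based update; the sketch below uses invented thresholds and flux laws and is not the dissertation's calibrated model:

    import random

    class Dune:
        def __init__(self, volume, x):
            self.volume = volume  # sand volume, a proxy for dune size
            self.x = x            # downwind position

    def step(dunes, influx, calve_at=10.0, leak=0.05, collide_dist=0.1):
        """One toy update applying the three rules: flux exchange,
        calving of small dunes from large ones, and merging on collision."""
        dunes.sort(key=lambda d: d.x)
        carried = influx                  # flux entering the upwind boundary
        for d in dunes:                   # rule 1: exchange sand via leaked flux
            d.volume += carried           # capture flux arriving from upwind
            carried = leak * d.volume     # leak flux from the downwind side
            d.volume -= carried
        for d in list(dunes):             # rule 2: calving by large dunes
            if d.volume > calve_at:
                shed = 0.2 * d.volume
                d.volume -= shed
                dunes.append(Dune(shed, d.x + random.uniform(0.5, 1.5)))
        dunes.sort(key=lambda d: d.x)     # rule 3: merge colliding dunes
        merged = dunes[:1]
        for d in dunes[1:]:
            if d.x - merged[-1].x < collide_dist:
                merged[-1].volume += d.volume
            else:
                merged.append(d)
        return merged

Iterating step() over an initial population lets field-scale statistics (patchiness, size distributions) emerge from the pairwise rules, which is the mode of explanation the chapter pursues.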
Abstract:
"This volume contains the proceedings of a meeting held at Montpellier from December 1st to December 5th 1986 .sponsored by the Centre national de la recherche scientifique ."--Preface.