927 results for Sum rules


Relevance: 20.00%

Publisher:

Abstract:

It is a neural network truth universally acknowledged, that the signal transmitted to a target node must be equal to the product of the path signal times a weight. Analysis of catastrophic forgetting by distributed codes leads to the unexpected conclusion that this universal synaptic transmission rule may not be optimal in certain neural networks. The distributed outstar, a network designed to support stable codes with fast or slow learning, generalizes the outstar network for spatial pattern learning. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, where the activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level. Weight changes at a node are apportioned according to the distributed pattern of converging signals. Three types of synaptic transmission rule, a product rule, a capacity rule, and a threshold rule, are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur; only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the optimal unit of long-term memory in such a system is a subtractive threshold, rather than a multiplicative weight.
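As a purely illustrative contrast between the multiplicative and subtractive alternatives discussed above, the sketch below shows how a product rule and a threshold rule might compute the signals a distributed source field sends to a target node. The function names, the numbers, and the omission of the capacity rule are assumptions, not the paper's definitions.

```python
import numpy as np

def product_rule(source_activity, weights):
    """Multiplicative transmission: each path carries signal * weight."""
    return source_activity * weights

def threshold_rule(source_activity, thresholds):
    """Subtractive transmission: each path carries the signal in excess
    of a learned threshold, clipped at zero."""
    return np.maximum(source_activity - thresholds, 0.0)

# Hypothetical example: a distributed source pattern feeding one target node.
source = np.array([0.6, 0.3, 0.1])      # distributed source field activity
weights = np.array([0.5, 0.5, 0.5])     # multiplicative weights (product rule)
thresholds = np.array([0.2, 0.2, 0.2])  # subtractive thresholds (threshold rule)

total_product = product_rule(source, weights).sum()
total_threshold = threshold_rule(source, thresholds).sum()
print(total_product, total_threshold)
```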

Relevance: 20.00%

Publisher:

Abstract:

Choosing the right or the best option is often a demanding and challenging task for the user (e.g., a customer in an online retailer) when there are many available alternatives. In fact, the user rarely knows which offering will provide the highest value. To reduce the complexity of the choice process, automated recommender systems generate personalized recommendations. These recommendations take into account the preferences collected from the user in an explicit (e.g., letting users express their opinion about items) or implicit (e.g., studying some behavioral features) way. Such systems are widespread; research indicates that they increase customer satisfaction and lead to higher sales. Preference handling is one of the core issues in the design of every recommender system. This kind of system often aims at guiding users in a personalized way to interesting or useful options in a large space of possible options. Therefore, it is important for them to capture and model the user's preferences as accurately as possible. In this thesis, we develop a comparative preference-based user model to represent the user's preferences in conversational recommender systems. This type of user model allows the recommender system to capture several preference nuances from the user's feedback. We show that, when applied to conversational recommender systems, the comparative preference-based model is able to guide the user towards the best option while the system is interacting with her. We empirically test and validate the suitability and the practical computational aspects of the comparative preference-based user model and the related preference relations by comparing them to a sum-of-weights-based user model and the related preference relations.

Product configuration, scheduling a meeting, and the construction of autonomous agents are among several artificial intelligence tasks that involve a process of constrained optimization, that is, optimization of behavior or options subject to given constraints with regard to a set of preferences. When solving a constrained optimization problem, pruning techniques, such as the branch and bound technique, aim to direct the search towards the best assignments, thus allowing the bounding functions to prune more branches in the search tree. Several constrained optimization problems may exhibit dominance relations. These dominance relations can be particularly useful in constrained optimization problems, as they can motivate new ways (rules) of pruning non-optimal solutions. Such pruning methods can achieve dramatic reductions in the search space while looking for optimal solutions. A number of constrained optimization problems can model the user's preferences using comparative preferences. In this thesis, we develop a set of pruning rules used in the branch and bound technique to efficiently solve this kind of optimization problem. More specifically, we show how to generate newly defined pruning rules from a dominance algorithm that refers to a set of comparative preferences. These rules include pruning approaches (and combinations of them) which can drastically prune the search space. They mainly reduce the number of (expensive) pairwise comparisons performed during the search while guiding constrained optimization algorithms to find optimal solutions. Our experimental results show that the pruning rules that we have developed and their different combinations have varying impact on the performance of the branch and bound technique.
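As a generic illustration of dominance-based pruning within branch and bound (not the algorithm developed in the thesis), the sketch below prunes child assignments that are dominated by a sibling, in addition to the usual bound-based pruning. The cost, bound, and dominance callables are placeholders for whatever a concrete problem would supply.

```python
def branch_and_bound(domains, cost, bound, dominates):
    """Generic depth-first branch and bound over variable assignments.

    domains   : list of candidate values per variable
    cost      : complete assignment -> numeric cost (lower is better)
    bound     : partial assignment -> optimistic lower bound on its completions
    dominates : (partial_a, partial_b) -> True if a dominates b (pruning rule)
    """
    best_assignment, best_cost = None, float("inf")
    frontier = [[]]  # partial assignments, explored depth-first
    while frontier:
        partial = frontier.pop()
        if len(partial) == len(domains):            # complete assignment
            c = cost(partial)
            if c < best_cost:
                best_assignment, best_cost = partial, c
            continue
        if bound(partial) >= best_cost:             # bound-based pruning
            continue
        children = [partial + [v] for v in domains[len(partial)]]
        # Dominance-based pruning: drop children dominated by a sibling.
        kept = [c for c in children
                if not any(dominates(o, c) for o in children if o is not c)]
        frontier.extend(kept)
    return best_assignment, best_cost
```

In the thesis's setting, `dominates` would be supplied by a dominance algorithm over comparative preferences; here it is just an arbitrary callable.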

Relevance: 20.00%

Publisher:

Abstract:

The abundance of many commercially important fish stocks is declining, and this has led to widespread concern about the performance of traditional approaches to fisheries management. Quantitative models are used for obtaining estimates of population abundance, and management advice is based on annual harvest levels (total allowable catch, TAC), where only a certain amount of catch is allowed from specific fish stocks. However, these models are data-intensive and less useful when stocks have limited historical information. This study examined whether empirical stock indicators can be used to manage fisheries. The relationship between indicators and the underlying stock abundance is not direct and hence can be affected by disturbances that may account for both transient and persistent effects. Methods from Statistical Process Control (SPC) theory, such as Cumulative Sum (CUSUM) control charts, are useful in classifying these effects and hence can be used to trigger a management response only when a significant impact on the stock biomass occurs. This thesis explores how empirical indicators, along with CUSUM, can be used for the monitoring, assessment and management of fish stocks.

I begin my thesis by exploring various age-based catch indicators, to identify those which are potentially useful in tracking the state of fish stocks. The sensitivity and response of these indicators towards changes in Spawning Stock Biomass (SSB) showed that indicators based on age groups that are fully selected to the fishing gear, or Large Fish Indicators (LFIs), are the most useful and robust across the range of scenarios considered. The Decision-Interval (DI-CUSUM) and Self-Starting (SS-CUSUM) forms are the two types of control chart used in this study. In contrast to the DI-CUSUM, the SS-CUSUM can be initiated without specifying a target reference point ('control mean') to detect out-of-control (significant impact) situations. The sensitivity and specificity of the SS-CUSUM showed that its performance is robust when LFIs are used.

Once an out-of-control situation is detected, the next step is to determine how much shift has occurred in the underlying stock biomass. If an estimate of this shift is available, it can be used to update the TAC by incorporating it into Harvest Control Rules (HCRs). Various methods from Engineering Process Control (EPC) theory were tested to determine which method can measure the shift size in stock biomass with the highest accuracy. Results showed that methods based on Grubbs' harmonic rule gave reliable shift-size estimates. The accuracy of these estimates can be improved by monitoring a combined indicator metric of stock-recruitment and LFI, because this may account for impacts independent of fishing. The procedure of integrating both SPC and EPC is known as Statistical Process Adjustment (SPA). An HCR based on SPA was designed for the DI-CUSUM, and the scheme was successful in bringing out-of-control fish stocks back to their in-control state. The HCR was also tested using the SS-CUSUM in the context of data-poor fish stocks. Results showed that the scheme will be useful for sustaining the initial in-control state of the fish stock until more observations become available for quantitative assessments.
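For concreteness, here is a minimal sketch of a two-sided decision-interval CUSUM of the kind referred to above, applied to a made-up large-fish-indicator series. The target mean, allowance k, decision interval h, and the data are illustrative assumptions, not values from the thesis.

```python
def cusum(observations, target, k, h):
    """Two-sided decision-interval CUSUM.

    Accumulates deviations from `target` beyond an allowance `k` and
    signals when either cumulative sum exceeds the decision interval `h`.
    Returns the index of the first out-of-control signal, or None.
    """
    s_hi = s_lo = 0.0
    for i, x in enumerate(observations):
        s_hi = max(0.0, s_hi + (x - target) - k)   # tracks upward shifts
        s_lo = max(0.0, s_lo + (target - x) - k)   # tracks downward shifts
        if s_hi > h or s_lo > h:
            return i
    return None

# Illustrative use on a hypothetical LFI series (made-up numbers).
lfi = [0.42, 0.40, 0.41, 0.39, 0.33, 0.31, 0.30, 0.29]
print(cusum(lfi, target=0.40, k=0.02, h=0.08))  # signals at index 5
```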

Relevance: 20.00%

Publisher:

Abstract:

This paper analyzes a class of common-component allocation rules, termed no-holdback (NHB) rules, in continuous-review assemble-to-order (ATO) systems with positive lead times. The inventory of each component is replenished following an independent base-stock policy. In contrast to the usually assumed first-come-first-served (FCFS) component allocation rule in the literature, an NHB rule allocates a component to a product demand only if it will yield immediate fulfillment of that demand. We identify metrics as well as cost and product structures under which NHB rules outperform all other component allocation rules. For systems with certain product structures, we obtain key performance expressions and compare them to those under FCFS. For general product structures, we present performance bounds and approximations. Finally, we discuss the applicability of these results to more general ATO systems. © 2010 INFORMS.
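As a toy illustration, not taken from the paper, of the difference between FCFS and a no-holdback rule: the sketch below commits components to a demand only when every component the demand requires is on hand. The data structures and function name are assumptions.

```python
def nhb_allocate(demand_bom, inventory):
    """No-holdback allocation: fill the demand only if every required
    component is available; otherwise commit nothing and backlog it.

    demand_bom : dict component -> quantity required by this product demand
    inventory  : dict component -> on-hand quantity (reduced on fulfillment)
    Returns True if the demand was fulfilled immediately.
    """
    if all(inventory.get(c, 0) >= q for c, q in demand_bom.items()):
        for c, q in demand_bom.items():
            inventory[c] -= q
        return True
    return False  # under FCFS, the available components would be reserved instead

stock = {"A": 1, "B": 0}
print(nhb_allocate({"A": 1, "B": 1}, stock))  # False: B is out of stock, A is not held back
print(stock)                                  # {'A': 1, 'B': 0}
```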

Relevance: 20.00%

Publisher:

Abstract:

"This volume contains the proceedings of a meeting held at Montpellier from December 1st to December 5th 1986 .sponsored by the Centre national de la recherche scientifique ."--Preface.

Relevance: 20.00%

Publisher:

Abstract:

Rule testing in transport scheduling is a complex and potentially costly business problem. This paper proposes an automated method for the rule-based testing of business rules, using the Extensible Markup Language (XML) for rule representation and transportation. A compiled approach to rule execution is also proposed for performance-critical scheduling systems.
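As a minimal illustration of representing a scheduling business rule in XML and compiling it for fast repeated execution, the sketch below is hypothetical; the element names, condition grammar, and compilation strategy are assumptions, not the paper's rule language.

```python
import xml.etree.ElementTree as ET

RULE_XML = """
<rule id="max-shift-length">
  <description>A driver's shift must not exceed 9 hours.</description>
  <condition field="shift_hours" operator="le" value="9"/>
</rule>
"""

_OPS = {"le": "<=", "lt": "<", "ge": ">=", "gt": ">", "eq": "=="}

def compile_rule(xml_text):
    """Compile an XML rule into a Python predicate over a schedule record."""
    cond = ET.fromstring(xml_text).find("condition")
    expr = f"record[{cond.get('field')!r}] {_OPS[cond.get('operator')]} {cond.get('value')}"
    code = compile(expr, "<rule>", "eval")          # compiled once, evaluated many times
    return lambda record: eval(code, {}, {"record": record})

check = compile_rule(RULE_XML)
print(check({"shift_hours": 8}))   # True
print(check({"shift_hours": 10}))  # False
```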

Relevance: 20.00%

Publisher:

Abstract:

To predict where a catalytic reaction should occur is a fundamental issue scientifically. Technologically, it is also important because it can facilitate the catalyst's design. However, to date, the understanding of this issue is rather limited. In this work, two types of reactions, CH4 → CH3 + H and CO → C + O, on two transition metal surfaces were chosen as model systems aiming to address in general where a catalytic reaction should occur. The dissociations CH4 → CH3 + H and CO → C + O and their reverse reactions on flat, stepped, and kinked Rh and Pd surfaces were studied in detail. We find the following: First, for the CH4 → CH3 + H reaction, the dissociation barrier is reduced by approximately 0.3 eV on steps and kinks as compared to that on flat surfaces. On the other hand, there is essentially no difference in barrier for the association reaction of CH3 + H on the flat surfaces and the defects. Second, for the CO → C + O reaction, the dissociation barrier decreases dramatically (more than 0.8 eV on Rh and Pd) on steps and kinks as compared to that on flat surfaces. In contrast to the CH3 + H reaction, the C + O association reaction also preferentially occurs on steps and kinks. We also present a detailed analysis of the reaction barriers in which each barrier is decomposed quantitatively into a local electronic effect and a geometrical effect. Our DFT calculations show that surface defects such as steps and kinks can largely facilitate bond breaking, while whether the surface defects could promote bond formation depends on the individual reaction as well as the particular metal. The physical origin of these trends is identified and discussed. On the basis of our results, we arrive at some simple rules with respect to where a reaction should occur: (i) defects such as steps are always favored for dissociation reactions as compared to flat surfaces; and (ii) the reaction site of the association reactions is largely related to the magnitude of the bonding-competition effect, which is determined by the reactant and metal valency. Reactions with high-valency reactants are more likely to occur on defects (more structure-sensitive), as compared to reactions with low-valency reactants. Moreover, the reactions on late transition metals are more likely to proceed on defects than those on the early transition metals.
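The barrier decomposition referred to above can be written schematically as below; the notation is a generic assumption for illustration rather than the paper's own formalism, with the barrier difference between a defect and a flat surface split into an electronic and a geometrical contribution.

```latex
\Delta E_a^{\mathrm{defect}} - \Delta E_a^{\mathrm{flat}}
  = \underbrace{\Delta E_{\mathrm{elec}}}_{\text{local electronic effect}}
  + \underbrace{\Delta E_{\mathrm{geom}}}_{\text{geometrical effect}}
```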

Relevance: 20.00%

Publisher:

Abstract:

Natural landscape boundaries between vegetation communities are dynamically influenced by the selective grazing of herbivores. Here we show how this may be an emergent property of very simple animal decisions, without the need for any sophisticated choice rules, using a model based on biased diffusion. Animal grazing intensity is coupled with plant competition, resulting in reaction-diffusion dynamics from which stable boundaries spontaneously emerge. In the model, animals affect their resources by both consumption and trampling. It is assumed that forage consists of two heterogeneously distributed competing resource species, one of which (grass) is preferred by the animals over the other (heather). The solutions to the resulting system of differential equations are presented for three cases: (a) optimal foraging, (b) random-walk foraging, and (c) taxis-diffusion. Optimal and random foraging gave unrealistic results, but taxis-diffusion accorded well with field observations. Persistent boundaries between patches of near-monoculture vegetation were predicted, with these boundaries drifting in response to overall grazing pressure (grass advancing with increased grazing, and vice versa). The reaction-taxis-diffusion model provides the first mathematical explanation for such vegetation mosaic dynamics, and the parameters of the model are open to experimental testing.
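The sketch below gives a toy, one-dimensional version of the kind of coupled grazing and plant-competition dynamics described above, with grazer density crudely biased towards grass-rich cells. The equations and parameters are illustrative assumptions and deliberately simpler than the paper's taxis-diffusion formulation.

```python
import numpy as np

def step(grass, heather, pressure=0.2, pref=4.0,
         r_g=1.0, r_h=0.8, d=0.05, dt=0.01, dx=1.0):
    """One explicit Euler step of a toy 1-D model: two competing plants,
    plant diffusion, and grazers whose local density is biased towards
    grass-rich cells (a crude stand-in for taxis-biased movement)."""
    lap = lambda u: (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    bias = np.exp(pref * grass)
    grazers = pressure * bias / bias.mean()          # selective grazing field
    dgrass = r_g * grass * (1 - grass - heather) - grazers * grass + d * lap(grass)
    dheather = r_h * heather * (1 - grass - heather) + d * lap(heather)
    return grass + dt * dgrass, heather + dt * dheather

# Iterate from a noisy mixed sward; raising `pressure` shifts the balance
# between the two plants in this toy model.
rng = np.random.default_rng(0)
grass = 0.4 + 0.1 * rng.random(200)
heather = 0.4 + 0.1 * rng.random(200)
for _ in range(5000):
    grass, heather = step(grass, heather)
```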

Relevance: 20.00%

Publisher:

Abstract:

In previous papers, we have presented a logic-based framework based on fusion rules for merging structured news reports. Structured news reports are XML documents where the text entries are restricted to individual words or simple phrases, such as names and domain-specific terminology, and numbers and units. We assume structured news reports do not require natural language processing. Fusion rules are a form of scripting language that defines how structured news reports should be merged. The antecedent of a fusion rule is a call to investigate the information in the structured news reports and the background knowledge, and the consequent of a fusion rule is a formula specifying an action to be undertaken to form a merged report. It is expected that a set of fusion rules is defined for any given application. In this paper, we extend the approach to handle probability values, degrees of belief, or necessity measures associated with text entries in the news reports. We present the formal definition for each of these types of uncertainty and explain how they can be handled using fusion rules. We also discuss methods for detecting inconsistencies among the sources.
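As a toy illustration, not the paper's fusion-rule syntax or semantics, the sketch below merges two hypothetical structured news reports whose text entries carry probability values, keeping the conflicting entry with the highest probability. The tags, the rule, and the combination policy are assumptions.

```python
import xml.etree.ElementTree as ET

REPORT_A = '<report><casualties value="10" prob="0.7"/></report>'
REPORT_B = '<report><casualties value="12" prob="0.9"/></report>'

def fuse_casualties(reports):
    """Toy fusion rule: if the reports disagree on a text entry, keep the
    entry with the highest associated probability in the merged report."""
    entries = [ET.fromstring(r).find("casualties") for r in reports]
    best = max(entries, key=lambda e: float(e.get("prob")))
    merged = ET.Element("merged_report")
    merged.append(best)
    return ET.tostring(merged, encoding="unicode")

print(fuse_casualties([REPORT_A, REPORT_B]))
# <merged_report><casualties value="12" prob="0.9" /></merged_report>
```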