989 results for abstract Markov policies


Relevance:

20.00%

Publisher:

Abstract:

Based on the assumptions of the Markov model, and using 1975 MSS and 1985 and 1997 TM satellite remote-sensing data for the Changbai Mountain Nature Reserve, the supervised computer classification of the imagery (13 classes in total) was processed with the aid of remote-sensing image-processing software and GIS software, and the applicability of the Markov model was analysed and tested. Landscape change in the Changbai Mountain Nature Reserve was found to have no after-effect (to be memoryless) and therefore to satisfy the conditions of a Markov model. From the 1985-1997 transition probabilities, a transition probability matrix with a 10-year step (1985-1995) was computed; the areas of each landscape type in 1985 projected from 1975 were compared with the actual 1985 areas, giving χ² > χ²_0.05(12). The 1975-1985 and 1985-1997 transition matrices were then used to compute the areas of each landscape type in 1995 and 2047 respectively, again giving χ² > χ²_0.05(12), and analysis of the transition probability matrices of the two periods gave χ² > χ²_0.05(144). This shows that the Markov transition processes of the two periods are not identical but belong to two different Markov processes. The contribution of each landscape type's transition pattern to the χ² value indicates its importance for landscape dynamics; the analysis shows the important contributors to be broad-leaved Korean pine forest (52.00%) > aspen-birch forest (24.66%) > spruce-fir forest (11.42%) > larch forest (2.43%), so the transition patterns of these four landscape types play an important role in the landscape dynamics of the Changbai Mountain Nature Reserve, with broad-leaved Korean pine forest playing the largest role. The applicability of the Markov model to long-term prediction of landscape change in the Changbai Mountain Nature Reserve is also discussed.
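The two operations the abstract describes, projecting landscape-type areas with a transition matrix and comparing projected against observed areas with a χ² statistic, can be sketched in a few lines. The 3x3 matrix and area vectors below are invented placeholders, not the study's 13-class data; this is an illustration of the method, not the study's code.

```python
# Minimal sketch: Markov projection of landscape areas plus a chi-square
# goodness-of-fit comparison (placeholder data, not the study's 13 classes).
import numpy as np
from scipy.stats import chi2

P = np.array([[0.90, 0.07, 0.03],      # row-stochastic transition matrix:
              [0.05, 0.85, 0.10],      # P[i, j] = fraction of type i that
              [0.02, 0.08, 0.90]])     # becomes type j over one time step

area_1975 = np.array([5200.0, 2400.0, 1100.0])    # observed areas (hypothetical, ha)
area_1985_obs = np.array([4900.0, 2550.0, 1250.0])

area_1985_pred = area_1975 @ P          # Markov projection of 1975 areas to 1985

# Chi-square statistic between predicted and observed areas
stat = np.sum((area_1985_obs - area_1985_pred) ** 2 / area_1985_pred)
dof = len(area_1975) - 1
print(f"chi2 = {stat:.2f}, critical value chi2_0.05({dof}) = {chi2.ppf(0.95, dof):.2f}")
```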

Relevance:

20.00%

Publisher:

Abstract:

Compliant control is a standard method for performing fine manipulation tasks, like grasping and assembly, but it requires estimation of the state of contact between the robot arm and the objects involved. Here we present a method to learn a model of the movement from measured data. The method requires little or no prior knowledge and the resulting model explicitly estimates the state of contact. The current state of contact is viewed as the hidden state variable of a discrete HMM. The control-dependent transition probabilities between states are modeled as parametrized functions of the measurement. We show that their parameters can be estimated from measurements concurrently with the estimation of the parameters of the movement in each state of contact. The learning algorithm is a variant of the EM procedure. The E step is computed exactly; solving the M step exactly would require solving a set of coupled nonlinear algebraic equations in the parameters. Instead, gradient ascent is used to produce an increase in likelihood.
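The distinctive step here is the M step: with control-dependent transitions there is no closed-form update, so the likelihood is only increased by gradient ascent. The sketch below illustrates that step for a two-state model with a logistic parametrization of the transitions; the parametrization, variable names, and step size are assumptions for illustration, not the paper's model, and the exact E step (e.g. forward-backward) is assumed to have already produced the pairwise posteriors `xi`.

```python
# Sketch of a gradient-ascent M step for control-dependent transition
# probabilities in a 2-state HMM, under the assumed parametrization
#   p(z_t = 1 | z_{t-1} = i, u_t) = sigmoid(w[i] * u_t + b[i]).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def m_step_gradient(xi, u, w, b, lr=0.05, iters=100):
    """xi: (T-1, 2, 2) pairwise posteriors p(z_{t-1}=i, z_t=j | data) from the E step.
    u: (T-1,) control inputs. Returns updated parameter vectors (w, b), shape (2,)."""
    for _ in range(iters):
        grad_w = np.zeros(2)
        grad_b = np.zeros(2)
        for i in range(2):
            s = sigmoid(w[i] * u + b[i])          # prob. of moving from state i to state 1
            # derivative of the expected complete-data log-likelihood w.r.t. the logit
            coeff = xi[:, i, 1] * (1.0 - s) - xi[:, i, 0] * s
            grad_w[i] = np.sum(coeff * u)
            grad_b[i] = np.sum(coeff)
        w = w + lr * grad_w                       # gradient ascent: increase likelihood
        b = b + lr * grad_b
    return w, b
```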

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel algorithm for learning in a class of stochastic Markov decision processes (MDPs) with continuous state and action spaces that trades speed for accuracy. A transform of the stochastic MDP into a deterministic one is presented which captures the essence of the original dynamics, in a sense made precise. In this transformed MDP, the calculation of values is greatly simplified. The online algorithm estimates the model of the transformed MDP and simultaneously does policy search against it. Bounds on the error of this approximation are proven, and experimental results in a bicycle riding domain are presented. The algorithm learns near optimal policies in orders of magnitude fewer interactions with the stochastic MDP, using less domain knowledge. All code used in the experiments is available on the project's web site.
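As a rough illustration of the workflow only, not the paper's specific transform (which the abstract describes only in general terms), the sketch below replaces a toy stochastic dynamics by its expected, noise-free counterpart and runs a simple hill-climbing policy search against that deterministic surrogate. The dynamics, policy class, and reward are invented placeholders.

```python
# Hedged sketch: deterministic surrogate of a stochastic MDP + policy search.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_step(s, a):
    """Toy 1-D dynamics with additive noise (placeholder environment)."""
    return 0.9 * s + a + rng.normal(scale=0.2)

def deterministic_step(s, a):
    """Surrogate: the noise has zero mean, so use the noise-free expected next state."""
    return 0.9 * s + a

def rollout_return(theta, step, s0=1.0, horizon=50):
    s, total = s0, 0.0
    for _ in range(horizon):
        a = -theta * s                        # linear state-feedback policy
        s = step(s, a)
        total += -(s ** 2 + 0.1 * a ** 2)     # quadratic cost expressed as reward
    return total

# Hill-climbing policy search against the deterministic surrogate.
theta, best = 0.0, rollout_return(0.0, deterministic_step)
for _ in range(200):
    cand = theta + rng.normal(scale=0.05)
    val = rollout_return(cand, deterministic_step)
    if val > best:
        theta, best = cand, val

print(f"policy gain learned on the surrogate: {theta:.3f}")
print(f"return on the original stochastic MDP: {rollout_return(theta, stochastic_step):.2f}")
```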

Relevance:

20.00%

Publisher:

Abstract:

This report studies when and why two Hidden Markov Models (HMMs) may represent the same stochastic process. HMMs are characterized in terms of equivalence classes whose elements represent identical stochastic processes. This characterization yields polynomial time algorithms to detect equivalent HMMs. We also find fast algorithms to reduce HMMs to essentially unique and minimal canonical representations. The reduction to a canonical form leads to the definition of 'Generalized Markov Models' which are essentially HMMs without the positivity constraint on their parameters. We discuss how this generalization can yield more parsimonious representations of stochastic processes at the cost of the probabilistic interpretation of the model parameters.
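The report's polynomial-time algorithms are not reproduced in this listing, but the underlying criterion is easy to state: two HMMs represent the same process only if they assign identical probabilities to every finite output sequence. The brute-force checker below is an illustration of that criterion, not the report's algorithm; it compares string probabilities, computed by the forward recursion, for all sequences up to a chosen length.

```python
# Brute-force inequivalence check for two discrete HMMs over a small alphabet.
import itertools
import numpy as np

def string_prob(pi, A, B, obs):
    """P(obs) under an HMM with initial dist pi, transitions A, emissions B
    (B[i, k] = p(symbol k | state i)), via the forward recursion."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def agree_up_to(hmm1, hmm2, n_symbols, max_len, tol=1e-10):
    """Return (True, None) if the HMMs agree on all strings up to max_len,
    else (False, witness_string)."""
    for L in range(1, max_len + 1):
        for obs in itertools.product(range(n_symbols), repeat=L):
            if abs(string_prob(*hmm1, obs) - string_prob(*hmm2, obs)) > tol:
                return False, obs
    return True, None

# Usage: relabelling the hidden states yields an equivalent HMM.
pi1 = np.array([1.0, 0.0])
A1 = np.array([[0.5, 0.5], [0.5, 0.5]])
B1 = np.array([[0.9, 0.1], [0.2, 0.8]])
perm = np.array([[0, 1], [1, 0]])
hmm2 = (pi1 @ perm, perm.T @ A1 @ perm, perm.T @ B1)
print(agree_up_to((pi1, A1, B1), hmm2, n_symbols=2, max_len=5))   # (True, None)
```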

Relevance:

20.00%

Publisher:

Abstract:

Security policies are increasingly being implemented by organisations. These policies are enforced by mapping them to device configurations, a task typically performed manually by network administrators, and the development and management of such enforcement policies is difficult and error-prone. This thesis describes the development and evaluation of an off-line firewall policy parser and validation tool. The tool gives the system administrator a textual interface to the vendor-specific low-level languages they trust and are familiar with, backed by an off-line compiler. It was created using the Microsoft C#.NET language and the Microsoft Visual Studio Integrated Development Environment (IDE), which provided an object-oriented environment for building a flexible and extensible system as well as simple Web and Windows prototyping facilities for the GUI front-end applications used in testing and evaluation. A CLI was provided for more experienced users, but the tool was also designed to be easily integrated into GUI-based applications for non-expert users. The system was evaluated from a custom-built GUI application that can create test firewall rule sets containing synthetic rules, supply a variety of experimental conditions, and record various performance metrics. The validation tool was designed with a pragmatic outlook on the needs of the network administrator. Modularity was important because of the fast-changing nature of the network device languages being processed, and an object-oriented approach was taken for maximum changeability and extensibility, yielding a flexible tool that can serve different types of users: system administrators want low-level, CLI-based tools that they can trust and drive easily from scripting languages, while inexperienced users may prefer a more abstract, high-level GUI or wizard with an easier learning curve. Built around these ideas, the tool proved to be a usable and complementary addition to the many network policy-based systems currently available. It has a flexible design and comprehensive functionality, in contrast to tools that span multiple vendor languages but implement only a shallow range of options for each, and it complements existing systems such as policy compliance tools and abstract policy analysis systems. Its validation algorithms were evaluated for both completeness and performance, and the tool was found to process large firewall policies correctly in just a few seconds. A framework for a policy-based management system, with which the tool would integrate, is also proposed; it is based around a vendor-independent XML-based repository of device configurations, which could be used to bring together existing policy management and analysis systems.
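The abstract does not spell out the validation algorithms, so the following is only a generic illustration of the kind of check such a tool performs: flagging rules that are shadowed by earlier rules. The `Rule` representation and its field set are invented simplifications of real vendor rule languages.

```python
# Illustration (not the thesis's implementation) of detecting "shadowed"
# firewall rules: rules that can never match because an earlier rule with a
# different action already covers all of their traffic.
from dataclasses import dataclass

@dataclass
class Rule:
    action: str      # "permit" or "deny"
    proto: str       # "tcp", "udp" or "*"
    src: str         # source address or "*"
    dst: str         # destination address or "*"

def field_covers(a, b):
    return a == "*" or a == b

def covers(r1, r2):
    """True if every packet matched by r2 is also matched by r1."""
    return all(field_covers(x, y) for x, y in
               zip((r1.proto, r1.src, r1.dst), (r2.proto, r2.src, r2.dst)))

def shadowed_rules(rules):
    """Indices of rules fully covered by an earlier rule with a different action."""
    return [j for j, r in enumerate(rules)
            if any(covers(rules[i], r) and rules[i].action != r.action
                   for i in range(j))]

policy = [Rule("deny", "tcp", "*", "10.0.0.5"),
          Rule("permit", "tcp", "192.168.1.1", "10.0.0.5")]   # shadowed by rule 0
print(shadowed_rules(policy))   # -> [1]
```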

Relevance:

20.00%

Publisher:

Abstract:

F. Smith and Q. Shen. Fault identification through the combination of symbolic conflict recognition and Markov Chain-aided belief revision. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 34(5):649-663, 2004.

Relevance:

20.00%

Publisher:

Abstract:

Journal of Energy and Natural Resources Law, 24(4), pp. 574-606. RAE2008

Relevance:

20.00%

Publisher:

Abstract:

Griffiths, L. and O'Malley, T. (2007). Media Literacy in Wales: A Critical Review of Industry and Education Policies. Cyfrwng, 4, pp. 7-23. RAE2008

Relevance:

20.00%

Publisher:

Abstract:

For many librarians, institutional repositories (IRs) promised significant change for academic libraries. We envisioned enlarging collection development scope to include locally produced scholarship and an expansion of library services to embrace scholarly publication and distribution. However, at the University of Rochester, as at many other institutions, this transformational technology was introduced in the conservative, controlled manner associated with stereotypical librarian culture, and so these expected changes never materialized. In this case study, we focus on the creation of our institutional repository (a potentially disruptive technology) and how its success was hampered by our organizational culture, manifested as a lengthy and complicated set of policies. In the following pages, we briefly describe our repository project, talk about our original policies, look at the ways those policies impeded our project, and discuss the disruption of those policies and the benefits in user uptake that resulted.

Relevance:

20.00%

Publisher:

Abstract:

The ML programming language restricts type polymorphism to occur only in the "let-in" construct and requires every occurrence of a formal parameter of a function (a lambda abstraction) to have the same type. Milner in 1978 refers to this restriction (which was adopted to help ML achieve automatic type inference) as a serious limitation. We show that this restriction can be relaxed enough to allow universal polymorphic abstraction without losing automatic type inference. This extension is equivalent to the rank-2 fragment of system F. We precisely characterize the additional program phrases (lambda terms) that can be typed with this extension and we describe typing anomalies both before and after the extension. We discuss how macros may be used to gain some of the power of rank-3 types without losing automatic type inference. We also discuss user-interface problems in how to inform the programmer of the possible types a program phrase may have.
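A standard example, not necessarily one used in the paper, makes the restriction concrete: a lambda-bound variable must be used at a single type, so the following term is untypable in ML, yet it has a rank-2 type in the extension described.

```latex
% lambda-bound f is applied at two different types, which ML forbids;
% with rank-2 types the term can be given the type shown, whose
% quantifier sits to the left of an arrow (hence rank 2).
\[
  \lambda f.\ (f\ 1,\ f\ \mathsf{true})
  \;:\; (\forall\alpha.\ \alpha\to\alpha) \to \mathsf{int}\times\mathsf{bool}
\]
% In ML the same effect requires the let-in construct:
%   let f = fun x -> x in (f 1, f true)
```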

Relevance:

20.00%

Publisher:

Abstract:

There has been considerable work done in the study of Web reference streams: sequences of requests for Web objects. In particular, many studies have looked at the locality properties of such streams, because of the impact of locality on the design and performance of caching and prefetching systems. However, a general framework for understanding why reference streams exhibit given locality properties has not yet emerged. In this work we take a first step in this direction, based on viewing the Web as a set of reference streams that are transformed by Web components (clients, servers, and intermediaries). We propose a graph-based framework for describing this collection of streams and components. We identify three basic stream transformations that occur at nodes of the graph: aggregation, disaggregation and filtering, and we show how these transformations can be used to abstract the effects of different Web components on their associated reference streams. This view allows a structured approach to the analysis of why reference streams show given properties at different points in the Web. Applying this approach to the study of locality requires good metrics for locality. These metrics must meet three criteria: 1) they must accurately capture temporal locality; 2) they must be independent of trace artifacts such as trace length; and 3) they must not involve manual procedures or model-based assumptions. We describe two metrics meeting these criteria that each capture a different kind of temporal locality in reference streams. The popularity component of temporal locality is captured by entropy, while the correlation component is captured by interreference coefficient of variation. We argue that these metrics are more natural and more useful than previously proposed metrics for temporal locality. We use this framework to analyze a diverse set of Web reference traces. We find that this framework can shed light on how and why locality properties vary across different locations in the Web topology. For example, we find that filtering and aggregation have opposing effects on the popularity component of the temporal locality, which helps to explain why multilevel caching can be effective in the Web. Furthermore, we find that all transformations tend to diminish the correlation component of temporal locality, which has implications for the utility of different cache replacement policies at different points in the Web.
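Both metrics can be computed directly from a trace. The sketch below, on a toy stream of object identifiers, uses the Shannon entropy of the empirical popularity distribution for the popularity component and the coefficient of variation of pooled interreference distances for the correlation component; the variable names and normalization details are illustrative assumptions, and the paper should be consulted for the exact definitions.

```python
# Sketch of the two temporal-locality metrics on a toy reference stream.
import numpy as np
from collections import Counter, defaultdict

stream = ["a", "b", "a", "a", "c", "b", "a", "d", "a", "b"]

# Popularity component: entropy of the empirical popularity distribution.
counts = np.array(list(Counter(stream).values()), dtype=float)
p = counts / counts.sum()
entropy = -np.sum(p * np.log2(p))

# Correlation component: coefficient of variation of interreference distances,
# pooled over all objects referenced more than once.
positions = defaultdict(list)
for t, obj in enumerate(stream):
    positions[obj].append(t)
gaps = np.array([g for pos in positions.values() if len(pos) > 1
                 for g in np.diff(pos)], dtype=float)
cv = gaps.std() / gaps.mean()

print(f"entropy = {entropy:.3f} bits, interreference CV = {cv:.3f}")
```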

Relevance:

20.00%

Publisher:

Abstract:

Interdomain routing on the Internet is performed using route preference policies specified independently and arbitrarily by each Autonomous System (AS) in the network. These policies are used in the Border Gateway Protocol (BGP) by each AS when selecting next-hop choices for routes to each destination. Conflicts between policies used by different ASes can lead to routing instabilities that, potentially, cannot be resolved no matter how long BGP is run. The Stable Paths Problem (SPP) is an abstract graph-theoretic model of the problem of selecting next-hop routes to a destination. A stable solution to the problem is a set of next-hop choices, one for each AS, that is compatible with the policies of each AS: in a stable solution each AS has selected its best next hop given that the next-hop choices of all its neighbors are fixed. BGP can be viewed as a distributed algorithm for solving SPP. In this report we consider the stable paths problem, as well as a family of restricted variants of the stable paths problem, which we call F stable paths problems. We show that two very simple variants of the stable paths problem are also NP-complete. In addition we show that for networks with a DAG topology there is an efficient centralized algorithm to solve the stable paths problem, and that BGP always converges efficiently to a stable solution on such networks.
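The stability condition itself is simple to check for a given assignment: each node's chosen path must be its most preferred permitted path among those obtained by extending a neighbor's currently assigned path. The checker below is written against that definition as an illustration, not code from the report; the example instance is the classic two-node DISAGREE-style gadget, which has two distinct stable solutions.

```python
# Checker for the SPP stability condition. ranked[v] lists v's permitted
# paths to destination 0, best first; an assignment maps each node to the
# path it currently selects (the destination selects the trivial path (0,)).
def is_stable(ranked, assignment, neighbors):
    for v, paths in ranked.items():
        # Paths v can actually use, given what its neighbors currently select.
        available = [p for p in paths
                     if len(p) > 1 and p[1] in neighbors[v]
                     and assignment.get(p[1]) == p[1:]]
        best = available[0] if available else ()
        if assignment.get(v, ()) != best:
            return False
    return True

# Two nodes, each preferring the path through the other over its direct link.
neighbors = {1: {0, 2}, 2: {0, 1}}
ranked = {1: [(1, 2, 0), (1, 0)],
          2: [(2, 1, 0), (2, 0)]}
stable   = {0: (0,), 1: (1, 0),    2: (2, 1, 0)}
unstable = {0: (0,), 1: (1, 2, 0), 2: (2, 1, 0)}
print(is_stable(ranked, stable, neighbors))    # True: a stable solution
print(is_stable(ranked, unstable, neighbors))  # False: 1's path is inconsistent with 2's choice
```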

Relevance:

20.00%

Publisher:

Abstract:

Transport protocols are an integral part of the inter-process communication (IPC) service used by application processes to communicate over the network infrastructure. With almost 30 years of research on transport, one would have hoped that we have a good handle on the problem. Unfortunately, that is not true. As the Internet continues to grow, new network technologies and new applications continue to emerge putting transport protocols in a never-ending flux as they are continuously adapted for these new environments. In this work, we propose a clean-slate transport architecture that renders all possible transport solutions as simply combinations of policies instantiated on a single common structure. We identify a minimal set of mechanisms that once instantiated with the appropriate policies allows any transport solution to be realized. Given our proposed architecture, we contend that there are no more transport protocols to design—only policies to specify. We implement our transport architecture in a declarative language, Network Datalog (NDlog), making the specification of different transport policies easy, compact, reusable, dynamically configurable and potentially verifiable. In NDlog, transport state is represented as database relations, state is updated/queried using database operations, and transport policies are specified using declarative rules. We identify limitations with NDlog that could potentially threaten the correctness of our specification. We propose several language extensions to NDlog that would significantly improve the programmability of transport policies.
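NDlog itself is not reproduced here, but the idea of keeping transport state as relations and expressing a policy as a declarative rule can be mimicked in a few lines of ordinary code. The sketch below is plain Python, not NDlog syntax, and the relation names are invented; it derives a hypothetical `retransmit` relation from an `unacked` relation, an `ack` set, and a retransmission timeout, in the spirit of a declarative retransmission policy.

```python
# Python analogue of a declarative transport rule over relational state.
sent  = {(1, 0.00), (2, 0.10), (3, 0.40)}   # unacked(seq, time_sent)
acked = {1}                                  # ack(seq)
RTO, now = 0.5, 0.75

# "retransmit(Seq) :- unacked(Seq, T), not ack(Seq), now - T > RTO."
retransmit = {seq for (seq, t) in sent if seq not in acked and now - t > RTO}
print(sorted(retransmit))   # -> [2]  (seq 3 has not yet timed out)
```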