990 results for "policy simulation"


Relevance: 30.00%

Abstract:

The Border Gateway Protocol (BGP) is the current inter-domain routing protocol used to exchange reachability information between Autonomous Systems (ASes) in the Internet. BGP supports policy-based routing, which allows each AS to independently define a set of local policies on which routes it accepts and advertises from/to other networks, as well as on which route it prefers when more than one route becomes available. However, independently chosen local policies may cause global conflicts, which result in protocol divergence. In this paper, we propose a new algorithm, called the Adaptive Policy Management Scheme (APMS), to resolve policy conflicts in a distributed manner. Akin to distributed feedback control systems, each AS independently classifies the state of the network as either conflict-free or potentially conflicting by observing its local history only (namely, route flaps). Based on the degree of measured conflicts, each AS dynamically adjusts its own path preferences, increasing its preference for observably stable paths over flapping paths. APMS also includes a mechanism to distinguish route flaps due to topology changes, so as not to confuse them with those due to policy conflicts. A correctness and convergence analysis of APMS based on the sub-stability property of chosen paths is presented. APMS was implemented in the SSF network simulator, and simulation results for different performance metrics are presented. The metrics capture the dynamic performance (in terms of instantaneous throughput, delay, etc.) of APMS and other competing solutions, thus exposing often-neglected aspects of performance.
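The flap-driven preference adjustment described above can be sketched in a few lines. This is a minimal illustration of the idea only, assuming a hypothetical `APMSAgent` class, a made-up flap threshold, and a simple numeric preference scale; the paper's actual update rules are not reproduced here.

```python
from collections import defaultdict

class APMSAgent:
    """Sketch of the per-AS adjustment idea: count route flaps per path
    and, past a threshold, demote flapping paths in the local
    preference ranking so stable paths win. Values are illustrative."""

    def __init__(self, flap_threshold=3):
        self.flap_threshold = flap_threshold
        self.flap_counts = defaultdict(int)   # path -> observed flaps
        self.preferences = {}                 # path -> numeric preference

    def observe_flap(self, path):
        self.flap_counts[path] += 1

    def adjust_preferences(self):
        # Demote paths whose flap count suggests a policy conflict;
        # stable paths keep their relative preference.
        for path, flaps in self.flap_counts.items():
            if flaps >= self.flap_threshold:
                self.preferences[path] = self.preferences.get(path, 100) - flaps

    def best_path(self):
        return max(self.preferences, key=self.preferences.get) if self.preferences else None
```

With two equally preferred paths, repeated flaps on one of them shift the choice to the observably stable alternative.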

Relevance: 30.00%

Abstract:

Master's dissertation, Water and Coastal Management, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2010

Relevance: 30.00%

Abstract:

This meta-analytic study sought to determine if cross-national curricula are aligned with burgeoning digital learning environments in order to help policy makers develop curriculum that incorporates 21st-century skills instruction. The study juxtaposed cross-national curricula in Ontario (Canada), Australia, and Finland against Jenkins’s (2009) framework of 11 crucial 21st-century skills that include: play, performance, simulation, appropriation, multitasking, distributed cognition, collective intelligence, judgment, transmedia navigation, networking, and negotiation. Results from qualitative data collection and analysis revealed that Finland implements all of Jenkins’s 21st-century skills. Recommendations are made to implement sound 21st-century skills in other jurisdictions.

Relevance: 30.00%

Abstract:

Statistical tests in vector autoregressive (VAR) models are typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques. After documenting that such methods can be very misleading even with fairly large samples, especially when the number of lags or the number of equations is not small, we propose a general simulation-based technique that allows one to control completely the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests [Dufour (2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special cases. The technique developed is applied to quarterly and monthly VAR models of the U.S. economy, comprising income, money, interest rates and prices, over the period 1965-1996.
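The simulation-based testing idea can be illustrated with a standard Monte Carlo p-value. The helper and the toy statistic below are illustrative assumptions, not the paper's procedure: maximized Monte Carlo tests [Dufour (2002)] additionally maximize this p-value over nuisance parameters to obtain exactness in models such as VARs.

```python
import random
import statistics

def mc_p_value(observed_stat, simulate_stat, n_sim=999, seed=0):
    """Monte Carlo p-value with the (1 + #exceedances) / (N + 1)
    correction, which yields an exact test at conventional levels
    when the null distribution is fully specified."""
    rng = random.Random(seed)
    exceed = sum(1 for _ in range(n_sim) if simulate_stat(rng) >= observed_stat)
    return (1 + exceed) / (n_sim + 1)

# Toy statistic: |mean| of 50 i.i.d. N(0, 1) draws under the null.
def sim_stat(rng, n=50):
    return abs(statistics.fmean(rng.gauss(0, 1) for _ in range(n)))
```

An observed statistic far out in the simulated null distribution yields the smallest attainable p-value, 1/(N+1); an unremarkable one yields a large p-value.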

Relevance: 30.00%

Abstract:

Cooperative caching in mobile ad hoc networks aims at improving the efficiency of information access by reducing access latency and bandwidth usage. The cache replacement policy plays a vital role in improving the performance of a cache in a mobile node, since the node has limited memory. In this paper we propose a new key-based cache replacement policy, called E-LRU, for cooperative caching in ad hoc networks. The proposed replacement scheme considers the time interval between recent references, size, and consistency as the key factors for replacement. A simulation study shows that the proposed replacement policy can significantly improve cache performance in terms of cache hit ratio and query delay.
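As a rough illustration of a key-based replacement policy of this kind, the sketch below combines inter-reference time, object size, and a staleness flag into a single eviction score. The class name and the multiplicative weighting are hypothetical choices, not the E-LRU specification.

```python
class ELRUCache:
    """Illustrative key-based replacement: evict the item with the
    largest score, where a longer inter-reference gap, a larger size,
    and staleness all make an item a better eviction candidate."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.items = {}   # key -> {"size", "refs" (last two), "stale"}

    def _score(self, meta, now):
        refs = meta["refs"]
        gap = refs[-1] - refs[-2] if len(refs) > 1 else now - refs[-1]
        return gap * meta["size"] * (2.0 if meta["stale"] else 1.0)

    def put(self, key, size, now, stale=False):
        while self.used + size > self.capacity and self.items:
            victim = max(self.items, key=lambda k: self._score(self.items[k], now))
            self.used -= self.items.pop(victim)["size"]
        self.items[key] = {"size": size, "refs": [now], "stale": stale}
        self.used += size

    def get(self, key, now):
        meta = self.items.get(key)
        if meta:
            meta["refs"] = meta["refs"][-1:] + [now]   # keep last two refs
        return meta
```

A recently re-referenced item survives eviction, while an item with a long gap since its last reference is replaced first.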

Relevance: 30.00%

Abstract:

In many real-world contexts individuals find themselves in situations where they have to decide between behavioural options that serve a collective purpose and behaviours that satisfy their private interests while ignoring the collective. In some cases the underlying social dilemma (Dawes, 1980) is solved and we observe collective action (Olson, 1965). In others, social mobilisation is unsuccessful. The central topic of social dilemma research is the identification and understanding of the mechanisms which lead to the observed cooperation and therefore resolve the social dilemma. It is the purpose of this thesis to contribute to this research field for the case of public good dilemmas. To do so, existing work that is relevant to this problem domain is reviewed and a set of mandatory requirements is derived which guide theory and method development in the thesis. In particular, the thesis focuses on dynamic processes of social mobilisation which can foster or inhibit collective action. The basic understanding is that success or failure of the required process of social mobilisation is determined by the heterogeneous individual preferences of the members of a providing group, the social structure in which the acting individuals are embedded, and the embedding of the individuals in economic, political, biophysical, or other external contexts. To account for these aspects and for the dynamics involved, the methodical approach of the thesis is computer simulation, in particular agent-based modelling and simulation of social systems. Particularly conducive are agent models which ground the simulation of human behaviour in suitable psychological theories of action. The thesis develops the action theory HAPPenInGS (Heterogeneous Agents Providing Public Goods) and demonstrates its embedding into different agent-based simulations.
The thesis substantiates the particular added value of this methodical approach: starting out from a theory of individual behaviour, the emergence of collective patterns of behaviour becomes observable in simulations. In addition, the underlying collective dynamics may be scrutinised and assessed by scenario analysis. The results of such experiments reveal insights into processes of social mobilisation which go beyond classical empirical approaches, and in particular yield policy recommendations on promising intervention measures.
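The kind of mobilisation dynamics studied here can be hinted at with a classic threshold model — a deliberate simplification for illustration, not an implementation of HAPPenInGS. Each agent contributes to the public good once the observed share of contributors reaches its private threshold; iterating reveals whether a cooperation cascade takes off or mobilisation stalls.

```python
def simulate_mobilisation(thresholds, steps=20):
    """Threshold-model sketch of social mobilisation: heterogeneous
    agents contribute once the contributing share meets their private
    threshold. Returns the final share of contributors."""
    # Agents with threshold 0 are unconditional cooperators.
    contributing = [t <= 0.0 for t in thresholds]
    for _ in range(steps):
        share = sum(contributing) / len(thresholds)
        contributing = [t <= share for t in thresholds]
    return sum(contributing) / len(thresholds)
```

A graded threshold distribution lets a single unconditional cooperator trigger full mobilisation, while a homogeneous population with strictly positive thresholds never starts — the same heterogeneity effect the thesis emphasises.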

Relevance: 30.00%

Abstract:

Europe has responded to the crisis with strengthened budgetary and macroeconomic surveillance, the creation of the European Stability Mechanism, liquidity provisioning by resilient economies and the European Central Bank, and a process towards a banking union. However, a monetary union requires some form of budget for fiscal stabilisation in case of shocks, and as a backstop to the banking union. This paper compares four quantitatively different schemes of fiscal stabilisation and proposes a new scheme based on GDP-indexed bonds. The options considered are: (i) a federal budget with unemployment and corporate taxes shifted to euro-area level; (ii) a support scheme based on deviations from potential output; (iii) an insurance scheme via which governments would issue bonds indexed to GDP; and (iv) a scheme in which access to jointly guaranteed borrowing is combined with gradual withdrawal of fiscal sovereignty. Our comparison is based on strong assumptions. We carry out a preliminary, limited simulation of how the debt-to-GDP ratio would have developed between 2008 and 2014 under the four schemes for Greece, Ireland, Portugal, Spain and an ‘average’ country. The schemes have varying implications in each case for debt sustainability.
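The debt-to-GDP simulations mentioned above rest on the textbook debt-dynamics identity, which can be sketched as follows. This is the standard recursion only, not the paper's model; the intuition for GDP-indexed bonds is that letting the interest rate co-move with growth dampens the ratio's drift.

```python
def debt_path(d0, rates, growth, primary_balances):
    """Stylised debt-to-GDP dynamics: d' = d * (1 + r) / (1 + g) - pb,
    where r is the effective interest rate, g nominal GDP growth and
    pb the primary balance as a share of GDP."""
    d = d0
    path = [d0]
    for r, g, pb in zip(rates, growth, primary_balances):
        d = d * (1 + r) / (1 + g) - pb
        path.append(d)
    return path
```

With r equal to g and a balanced primary budget the ratio is flat; with r above g and no primary surplus it snowballs upward.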

Relevance: 30.00%

Abstract:

Changes in mature forest cover amount, composition, and configuration can be of significant consequence to wildlife populations. The response of wildlife to forest patterns is of concern to forest managers because it lies at the heart of such competing approaches to forest planning as aggregated vs. dispersed harvest block layouts. In this study, we developed a species assessment framework to evaluate the outcomes of forest management scenarios on biodiversity conservation objectives. Scenarios were assessed in the context of a broad range of forest structures and patterns that would be expected to occur under natural disturbance and succession processes. Spatial habitat models were used to predict the effects of varying degrees of mature forest cover amount, composition, and configuration on habitat occupancy for a set of 13 focal songbird species. We used a spatially explicit harvest scheduling program to model forest management options and simulate future forest conditions resulting from alternative forest management scenarios, and used a process-based fire-simulation model to simulate future forest conditions resulting from natural wildfire disturbance. Spatial pattern signatures were derived for both habitat occupancy and forest conditions, and these were placed in the context of the simulated range of natural variation. Strategic policy analyses were set in the context of current Ontario forest management policies. This included use of sequential time-restricted harvest blocks (created for Woodland caribou (Rangifer tarandus) conservation) and delayed harvest areas (created for American marten (Martes americana atrata) conservation). This approach increased the realism of the analysis, but reduced the generality of interpretations. 
We found that forest management options that create linear strips of old forest deviated the most from simulated natural patterns and had the greatest negative effects on habitat occupancy, whereas policy options that specify deferment and timing of harvest for large blocks helped ensure the stable presence of an intact mature forest matrix over time. The management scenario that focused on maintaining compositional targets best supported biodiversity objectives by providing the composition patterns required by the 13 focal species, but this scenario may be improved by adding some broad-scale spatial objectives to better maintain large blocks of interior forest habitat through time.

Relevance: 30.00%

Abstract:

The pressures of modern manufacturing require that the quality-cost benefits are determined when evaluating new procedures or alternative operating policies. Traditionally, cost reports and other quality metrics have been used for this purpose. However, the interactions between the main quality cost drivers cannot be understood at the superficial level and the effect that a new process or an alternative operating policy may have on quality costs is difficult to determine. An alternative to the traditional costing methods is simulation. The current work uses simulation to evaluate quality costs in an automotive stamping plant where the quality control is determined by operator inspection of their own work. Self-inspection quality control provides instantaneous feedback of quality problems, allowing for quick rectification. However, the difficult nature of surface finish inspection of automotive panels can create inspection and control errors. A simulation model was developed to investigate the cost effects of inspection and control errors and it was found that inspection error had a significant effect in increasing total quality cost, with the magnitude of this increase dependent on the level of control. Further, the simulation found that the lowest cost quality control policy was that which allowed a number of defective panels to accumulate before resetting the press-line.
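The effect of inspection error on quality cost can be sketched with a toy Monte Carlo of self-inspection. The probabilities and cost figures below are hypothetical placeholders, not the stamping plant's data; the point is only that missed defects escape to a much dearer external failure cost, which is why inspection error inflates total quality cost.

```python
import random

def quality_cost(n_panels, p_defect, p_miss, p_false_reject,
                 rework_cost, escape_cost, seed=0):
    """Toy self-inspection model: each panel is defective with
    probability p_defect; the inspector misses a defect with
    probability p_miss (it escapes at escape_cost) and needlessly
    rejects a good panel with probability p_false_reject."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_panels):
        defective = rng.random() < p_defect
        if defective:
            # A missed defect reaches the customer; a caught one is reworked.
            total += escape_cost if rng.random() < p_miss else rework_cost
        elif rng.random() < p_false_reject:
            total += rework_cost   # good panel needlessly reworked
    return total
```

Holding everything else fixed, raising the miss probability can only raise the total cost, since each escape costs at least as much as a rework.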

Relevance: 30.00%

Abstract:

The next generation of wireless networks is envisioned as a convergence of heterogeneous radio access networks. Since technologies are becoming more collaborative, a possible integration between IEEE 802.16-based networks and previous generations of telecommunication systems (2G, ..., 3G) must be considered. A novel quality-function-based vertical handoff (VHO) algorithm, built on the proposed velocity and average received-power estimation algorithms, is discussed in this paper. Short-time Fourier analysis of the received signal strength (RSS) is employed to obtain mobile-speed and average received-power estimates. The performance of the quality-function-based VHO algorithm is evaluated by means of quality of service (QoS) measures. Simulation results show that the proposed quality function brings significant gains in QoS and enables more efficient use of resources.
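The handoff decision step can be hinted at with a deliberately simplified stand-in: a moving average of recent RSS samples (instead of the paper's short-time Fourier analysis) plus a hysteresis margin to avoid ping-ponging between networks. The window length and margin are illustrative assumptions.

```python
def handoff_decision(rss_current, rss_candidate, window=8, hysteresis=3.0):
    """Hand off only when the candidate network's averaged RSS beats
    the serving network's by a hysteresis margin (values in dBm/dB).
    Averaging over a window smooths out fast fading."""
    avg_cur = sum(list(rss_current)[-window:]) / min(len(rss_current), window)
    avg_cand = sum(list(rss_candidate)[-window:]) / min(len(rss_candidate), window)
    return avg_cand > avg_cur + hysteresis
```

A candidate that is 10 dB stronger triggers the handoff; one that is only 2 dB stronger stays inside the hysteresis band and does not.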

Relevance: 30.00%

Abstract:

A reinforcement learning agent has been developed to determine optimal operating policies in a multi-part serial line. The agent interacts with a discrete event simulation model of a stochastic production facility. This study identifies issues important to the simulation developer who wishes to optimise a complex simulation or develop a robust operating policy. Critical parameters pertinent to 'tuning' an agent quickly and enabling it to rapidly learn the system were investigated.
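The kind of agent described can be sketched with a generic tabular Q-learning loop; the serial-line simulation itself is not reproduced, and the environment interface below is a placeholder. The `alpha`, `gamma` and `epsilon` parameters are exactly the sort of "tuning" knobs the study investigates.

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.
    env_step(state, action, rng) -> (next_state, reward, done)."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:
                action = rng.randrange(n_actions)           # explore
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward, done = env_step(state, action, rng)
            target = reward + (0.0 if done else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (target - Q[state][action])
            state = next_state
    return Q
```

On even a one-state bandit where action 1 pays 1 and action 0 pays 0, the learned Q-values come to favour the rewarding action, which is the behaviour the study "tunes" an agent to acquire quickly.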

Relevance: 30.00%

Abstract:

In enterprise grid computing environments, users have access to multiple resources that may be distributed geographically. Resource allocation and scheduling is therefore a fundamental issue in achieving high performance on enterprise grids. Most current job scheduling systems for enterprise grid computing provide batch queuing support and focus solely on the allocation of processors to jobs. However, since I/O is also a critical resource for many jobs, the allocation of processor and I/O resources must be coordinated to allow the system to operate most effectively. To this end, we present a hierarchical scheduling policy that pays special attention to the I/O and service demands of parallel jobs in homogeneous and heterogeneous systems with background workload. The performance of the proposed scheduling policy is studied under various system and workload parameters through simulation. We also compare the performance of the proposed policy with a static space–time sharing policy. The results show that the proposed policy performs substantially better than the static space–time sharing policy.
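The coordination idea — admitting a job only when both its processor and its I/O demands fit — can be illustrated with a hypothetical selection function. The tuple layout and the tie-breaking rule are made up for the example and are not the paper's hierarchical policy.

```python
def select_job(jobs, free_cpus, free_io_bw):
    """Pick the next job so that neither processors nor I/O bandwidth
    idles while the other saturates: among jobs that fit BOTH free
    resources, prefer the largest combined demand.
    Jobs are (name, cpus_needed, io_bw_needed) tuples."""
    feasible = [j for j in jobs if j[1] <= free_cpus and j[2] <= free_io_bw]
    if not feasible:
        return None
    return max(feasible, key=lambda j: j[1] + j[2])
```

A CPU-light but I/O-heavy job is correctly rejected when I/O bandwidth is scarce, even though processors are free — the situation a processor-only scheduler gets wrong.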

Relevance: 30.00%

Abstract:

The advent of commodity-based high-performance clusters has raised parallel and distributed computing to a new level. However, in order to achieve the best possible performance improvements for large-scale computing problems as well as good resource utilization, efficient resource management and scheduling is required. This paper proposes a new two-level adaptive space-sharing scheduling policy for non-dedicated heterogeneous commodity-based high-performance clusters. Using trace-driven simulation, the performance of the proposed scheduling policy is compared with existing adaptive space-sharing policies. Results of the simulation show that the proposed policy performs substantially better than the existing policies.

Relevance: 30.00%

Abstract:

Background: To compare the likely costs and benefits of a range of potential policy interventions in Fiji and Tonga targeted at diet-related noncommunicable diseases (NCDs), in order to support more evidence-based decision-making.

Method: A relatively simple and quick macro-simulation methodology was developed. Logic models were developed by local stakeholders and used to identify costs and dietary impacts of policy changes. Costs were confined to government costs, and excluded cost offsets. The best available evidence was combined with local data to model impacts on deaths from noncommunicable diseases over the lifetime of the target population. Given that the modelling necessarily entailed assumptions to compensate for gaps in data and evidence, use was made of probabilistic uncertainty analysis.
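A probabilistic uncertainty analysis of this general kind can be sketched as follows: draw each uncertain input from an assumed distribution, push the draws through the model, and summarise the output distribution. The model, distributions and parameter values below are placeholders for illustration, not the study's inputs.

```python
import random
import statistics

def uncertainty_interval(model, samplers, n_draws=5000, seed=0):
    """Monte Carlo uncertainty propagation: sample inputs, evaluate the
    model, and return the output median and 95% uncertainty interval."""
    rng = random.Random(seed)
    outputs = sorted(model(*(s(rng) for s in samplers)) for _ in range(n_draws))
    lo = outputs[int(0.025 * n_draws)]
    hi = outputs[int(0.975 * n_draws)]
    return statistics.median(outputs), (lo, hi)

# Placeholder model: deaths averted = baseline deaths x policy effect.
median, (lo, hi) = uncertainty_interval(
    lambda deaths, effect: deaths * effect,
    [lambda r: r.gauss(1000, 50),        # uncertain baseline deaths
     lambda r: r.uniform(0.02, 0.04)],   # uncertain effect size (2-4%)
)
```

Reporting the interval rather than a point estimate is what lets such modelling acknowledge gaps in data and evidence while still supporting comparisons between policy options.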

Results:
Costs of implementing policy changes were generally low, with the exception of some requiring additional long-term staffing or construction activities. The most effective policy options in Fiji and Tonga targeted access to local produce and high-fat meats respectively, and were estimated to avert approximately 3% of diet-related NCD deaths in each population. Many policies had substantially lower benefits. Cost-effectiveness was higher for the low-cost policies. Similar policies produced markedly different results in the two countries.

Conclusion:
Despite the crudeness of the method, the consistent modelling approach used across all the options allowed reasonable comparisons to be made between the potential policy costs and impacts. This type of modelling can be used to support more evidence-based and informed decision-making about policy interventions and facilitate greater use of policy to achieve a reduction in NCDs.

Relevance: 30.00%

Abstract:

Wild et al present an original cost-effectiveness analysis of medical surveillance for isocyanate asthma in this issue of OEM.1 The general case for surveillance for isocyanate asthma is a compelling one. Most occupational physicians, practitioners, and researchers might rightly expect that if a cost-effectiveness (CE) case cannot be made for this agent, it would be hard to make a case for most others. The causal link between isocyanate exposure and asthma is well established, and more is known about the pathophysiology, natural history, long-term consequences, and benefits of medical surveillance in this instance than for most other occupational exposures. A mathematical simulation model was developed based on a carefully specified set of clinical parameters, drawing from empirical studies where possible (for example, in estimating sensitisation rates ranging from 0.7% to 5.3% per year), and well-qualified expert opinion otherwise (for example, in estimating the chance of removal from exposure if a patient is diagnosed versus undiagnosed). Their “state transition” model compared passive case finding to surveillance (the heart of the CE analysis question as proposed) for a theoretical population of 100 000 otherwise healthy and exposed workers, predicting their progression over 10 years across three mutually exclusive “states”: healthy and exposed; symptomatic; and disabled. This alone is an impressive and valuable piece of research, integrating a substantial body of empirical research to show that surveillance is estimated to result in 700 fewer cases of disability over 10 years compared to passive case finding. While such a modelling exercise necessarily requires numerous assumptions and simplifications, each was well articulated and defensible.
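The mechanics of a state-transition model of this shape can be sketched in a few lines. The yearly transition probabilities below are illustrative placeholders, not Wild et al's estimates; the point is only how a cohort is projected across the three mutually exclusive states.

```python
def project_cohort(start, transitions, years=10):
    """Discrete-time state-transition (Markov) projection: each year,
    the count in each state is redistributed according to its row of
    transition probabilities."""
    counts = dict(start)
    for _ in range(years):
        new = {s: 0.0 for s in counts}
        for state, n in counts.items():
            for target, p in transitions[state].items():
                new[target] += n * p
        counts = new
    return counts

# Hypothetical yearly transition probabilities (each row sums to 1).
transitions = {
    "healthy":     {"healthy": 0.97, "symptomatic": 0.03, "disabled": 0.0},
    "symptomatic": {"healthy": 0.0,  "symptomatic": 0.90, "disabled": 0.10},
    "disabled":    {"healthy": 0.0,  "symptomatic": 0.0,  "disabled": 1.0},
}
result = project_cohort(
    {"healthy": 100_000, "symptomatic": 0.0, "disabled": 0.0}, transitions)
```

Comparing two runs whose transition probabilities differ only where an intervention acts (here, the symptomatic-to-disabled rate under surveillance versus passive case finding) is exactly how such a model yields a "cases of disability averted" figure.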