889 results for Axiomatic Models of Resource Allocation


Relevance:

100.00%

Publisher:

Abstract:

Dilatant faults often form in rocks containing pre-existing joints, but the effects of joints on fault segment linkage and fracture connectivity are not well understood. We present an analogue modeling study using cohesive powder with pre-formed joint sets in the upper layer, varying the angle between the joints and a rigid basement fault. We analyze interpreted map-view photographs at maximum displacement for damage zone width, number of connected joints, number of secondary fractures, degree of segmentation and area fraction of massively dilatant fractures. Particle imaging velocimetry provides insights into the deformation history of the experiments and illustrates the localization pattern of fault segments. Results show that, with increasing angle between joint-set and basement-fault strike, the number of secondary fractures and the number of connected joints increase, while the area fraction of massively dilatant fractures shows only a minor increase. Models without pre-existing joints show far lower area fractions of massively dilatant fractures while forming distinctly more secondary fractures.

Relevance:

100.00%

Publisher:

Abstract:

Federal Railway Administration, Office of Safety, Washington, D.C.

Relevance:

100.00%

Publisher:

Abstract:

Federal Highway Administration, Office of Implementation, McLean, Va.

Relevance:

100.00%

Publisher:

Abstract:

"OAEP-10."

Relevance:

100.00%

Publisher:

Abstract:

In relation to motor control, the basal ganglia have been implicated in both the scaling and focusing of movement. Hypokinetic and hyperkinetic movement disorders manifest as a consequence of overshooting and undershooting GPi (globus pallidus internus) activity thresholds, respectively. Recently, models of motor control have been borrowed to translate cognitive processes relating to the overshooting and undershooting of GPi activity, including attention and executive function. Linguistic correlates, however, are yet to be extrapolated in sufficient detail. The aims of the present investigation were to: (1) characterise cognitive-linguistic processes within hypokinetic and hyperkinetic neural systems, as defined by motor disturbances; (2) investigate the impact of surgically induced GPi lesions upon language abilities. Two Parkinsonian cases with opposing motor symptoms (akinetic versus dystonic/dyskinetic) served as experimental subjects in this research. Assessments were conducted prior to, and at 3 and 12 months following, bilateral posteroventral pallidotomy (PVP). Across subjects, reliable changes in performance (i.e. both improvements and decrements) were typically restricted to tasks demanding complex linguistic operations. Hyperkinetic motor symptoms were associated with an initial overall improvement in complex language function as a consequence of bilateral PVP, which diminished over time, suggesting a decrescendo effect in surgical benefit. In contrast, hypokinetic symptoms were associated with a more stable longitudinal linguistic profile, albeit defined by higher proportions of reliable decline versus improvement in postoperative assessment scores. The above findings endorsed the integration of the GPi within cognitive mechanisms involved in the arbitration of complex language functions.
In relation to models of motor control, 'focusing' was postulated to represent the neural processes underpinning lexical-semantic manipulation, and 'scaling' the potential allocation of cognitive resources during the mediation of high-level linguistic tasks. (c) 2005 Elsevier Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Analysis of the equity premium puzzle has focused on private sector capital markets. The object of this paper is to consider the welfare and policy implications of each of the broad classes of explanations of the equity premium puzzle. As would be expected, the greater the deviation from the first-best outcome implied by a given explanation of the equity premium puzzle, the more interventionist are the implied policy conclusions. Nevertheless, even explanations of the equity premium puzzle consistent with a general consumption-based asset pricing model have important welfare and policy implications.

Relevance:

100.00%

Publisher:

Abstract:

Resource allocation in sparsely connected networks, a representative problem of systems with real variables, is studied using the replica and Bethe approximation methods. An efficient distributed algorithm is devised on the basis of insights gained from the analysis and is examined using numerical simulations, showing excellent performance and full agreement with the theoretical results. The physical properties of the resource allocation model are discussed.
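As a toy illustration of the distributed flavour of such algorithms (not the replica-based algorithm analysed in the paper), the sketch below lets each node of a sparse network repeatedly exchange resource with its neighbours until surpluses even out; the graph, the update rule and the rate parameter are all hypothetical:

```python
def balance(adj, resource, steps=200, rate=0.2):
    """Toy local-exchange dynamics on a sparse network: each step, every
    edge carries a flow proportional to the resource difference between
    its endpoints, so surpluses diffuse towards deficient nodes."""
    r = dict(resource)
    for _ in range(steps):
        delta = {n: 0.0 for n in r}
        for u in adj:
            for v in adj[u]:
                if u < v:  # visit each undirected edge once
                    f = rate * (r[u] - r[v]) / max(len(adj[u]), len(adj[v]))
                    delta[u] -= f
                    delta[v] += f
        for n in r:
            r[n] += delta[n]
    return r

# Hypothetical 4-node ring with all the resource initially at node 0.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = balance(ring, {0: 4.0, 1: 0.0, 2: 0.0, 3: 0.0})
```

Because every flow is antisymmetric, the total resource is conserved exactly, and on a connected graph the dynamics relax towards an even distribution using only local information.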

Relevance:

100.00%

Publisher:

Abstract:

Resource allocation is one of the major decision problems arising in higher education. Resources must be allocated optimally so that the performance of universities can be improved. This paper applies an integrated multiple-criteria decision-making approach to the resource allocation problem. In the approach, the Analytic Hierarchy Process (AHP) is first used to determine the priority, or relative importance, of proposed projects with respect to the goals of the universities. Then, a Goal Programming (GP) model incorporating AHP priority, system, and resource constraints is formulated for selecting the best set of projects without exceeding the limited available resources. The projects include 'hardware' (tangible university infrastructure) and 'software' (intangible effects that can benefit the university, its members, and its students). In this paper, two commercial packages are used: Expert Choice for determining the AHP priority ranking of the projects, and LINDO for solving the GP model. Copyright © 2007 Inderscience Enterprises Ltd.
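The AHP step can be sketched with the row geometric-mean method, a standard quick approximation to the principal-eigenvector priorities of a pairwise-comparison matrix; the 3x3 matrix below is a hypothetical comparison of three projects, not data from the paper:

```python
import math

def ahp_priorities(M):
    """Approximate AHP priority weights of a pairwise-comparison matrix
    with the row geometric-mean method (normalised to sum to 1)."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgements: project A is 3x as important as B and
# 5x as important as C; B is 2x as important as C.
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
weights = ahp_priorities(M)
```

In the paper's setting, weights like these would then enter the GP model as priority coefficients on the project-selection variables.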

Relevance:

100.00%

Publisher:

Abstract:

Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results.
Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
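For reference, the baseline that dispersive PSO modifies is standard global-best PSO; the minimal sketch below is the vanilla algorithm (not the dispersive variant), with conventional but hypothetical parameter values, minimising a test function on a fixed box:

```python
import random

def pso(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO on [-5, 5]^dim (illustrative sketch).
    Velocities blend inertia, attraction to each particle's personal
    best, and attraction to the swarm's global best."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                     # personal best positions
    pf = [f(x) for x in X]                    # personal best values
    g = min(range(n), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                    # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
                if fx < gf:
                    gf, G = fx, X[i][:]
    return G, gf

# Minimise the 2-D sphere function as a smoke test.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

The premature-convergence problem the thesis targets appears when all particles collapse onto the global best early; dispersive variants counteract this by pushing particles apart.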

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose a resource allocation scheme to minimize transmit power for multicast orthogonal frequency division multiple access systems. The proposed scheme allows users to have different symbol error rates (SER) across subcarriers and guarantees an average bit error rate and transmission rate for all users. We first provide an algorithm to determine the optimal bits and target SER on subcarriers. Because the worst-case complexity of the optimal algorithm is exponential, we further propose a suboptimal algorithm that separately assigns bits and adjusts SER with a lower complexity. Numerical results show that the proposed algorithm can effectively improve the performance of multicast orthogonal frequency division multiple access systems and that the performance of the suboptimal algorithm is close to that of the optimal one. Copyright © 2012 John Wiley & Sons, Ltd.
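The classic greedy idea behind such low-complexity bit-assignment algorithms can be sketched as follows. This is a generic Hughes-Hartogs-style allocator with a simplified M-QAM power law, not the authors' algorithm; the channel gains and bit budget are hypothetical:

```python
def greedy_bit_loading(gains, total_bits):
    """Greedy bit loading: repeatedly add one bit to the subcarrier whose
    incremental transmit power is smallest. The power needed to carry b
    bits on a subcarrier with gain g is modelled as (2**b - 1) / g
    (an illustrative M-QAM power law at a fixed target error rate)."""
    n = len(gains)
    bits = [0] * n
    power = [0.0] * n
    for _ in range(total_bits):
        # incremental power of adding one more bit to subcarrier k
        inc = [((2 ** (bits[k] + 1) - 1) - (2 ** bits[k] - 1)) / gains[k]
               for k in range(n)]
        k = min(range(n), key=lambda i: inc[i])
        bits[k] += 1
        power[k] = (2 ** bits[k] - 1) / gains[k]
    return bits, sum(power)

# Two hypothetical subcarriers: a strong one (gain 4) and a weak one.
bits, total_power = greedy_bit_loading([4.0, 1.0], total_bits=4)
```

The stronger subcarrier absorbs most of the bit budget because each extra bit costs it less power, which is the intuition the optimal and suboptimal schemes both exploit.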

Relevance:

100.00%

Publisher:

Abstract:

We introduce self-interested evolutionary market agents, which act on behalf of service providers in a large decentralised system, to adaptively price their resources over time. Our agents competitively co-evolve in the live market, driving it towards the Bertrand equilibrium, the non-cooperative Nash equilibrium, at which all sellers charge their reserve price and share the market equally. We demonstrate that this outcome results in even load-balancing between the service providers. Our contribution in this paper is twofold: the use of on-line competitive co-evolution of self-interested service providers to drive a decentralised market towards equilibrium, and a demonstration that load-balancing behaviour emerges under the assumptions we describe. Unlike previous studies on this topic, all our agents are entirely self-interested; no cooperation is assumed. This makes our problem a non-trivial and more realistic one.
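The convergence to the Bertrand outcome can be illustrated with a much simpler dynamic than the paper's co-evolutionary agents: each seller just undercuts the lowest posted price until it hits its reserve. The prices, reserve and step size below are hypothetical:

```python
def bertrand_dynamics(prices, reserve, step=1.0, rounds=100):
    """Toy Bertrand price competition: each round, every seller undercuts
    the current lowest market price by `step`, but never sells below its
    reserve price. All sellers end up at the reserve, sharing the market."""
    p = list(prices)
    for _ in range(rounds):
        low = min(p)
        p = [max(reserve, min(pi, low - step)) for pi in p]
    return p

# Three hypothetical sellers with a common reserve price of 5.0.
final = bertrand_dynamics([10.0, 8.0, 9.0], reserve=5.0)
```

Once every seller charges the reserve price, no one can profitably undercut further, which is the equal-share equilibrium the paper's market is driven towards.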

Relevance:

100.00%

Publisher:

Abstract:

The Resource Space Model is a data model that can effectively and flexibly manage digital resources in cyber-physical systems from multidimensional and hierarchical perspectives. This paper focuses on constructing resource spaces automatically. We propose a framework that organizes a set of digital resources along different semantic dimensions by combining human background knowledge from WordNet and Wikipedia. The construction process includes four steps: extracting candidate keywords, building semantic graphs, detecting semantic communities and generating the resource space. An unsupervised statistical topic model (Latent Dirichlet Allocation, LDA) is applied to extract candidate keywords for the facets. To better interpret the meanings of the facets found by LDA, we map the keywords to Wikipedia concepts, calculate word relatedness using WordNet's noun synsets and construct the corresponding semantic graphs. Semantic communities are then identified by the GN algorithm. After extracting candidate axes based on the Wikipedia concept hierarchy, the final axes of the resource space are sorted and selected through three different ranking strategies. The experimental results demonstrate that the proposed framework can organize resources automatically and effectively. © 2013 Published by Elsevier Ltd. All rights reserved.
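The community-detection step can be illustrated on a toy semantic graph. The paper uses the GN (Girvan-Newman) algorithm; the sketch below substitutes a simpler label-propagation heuristic, and the keyword graph is entirely hypothetical:

```python
import random
from collections import Counter

def label_propagation(adj, iters=20, seed=0):
    """Toy community detection: every node repeatedly adopts the most
    frequent label among its neighbours (ties broken by label order),
    so densely connected keyword groups converge to a shared label."""
    rng = random.Random(seed)
    labels = {n: n for n in adj}
    nodes = list(adj)
    for _ in range(iters):
        rng.shuffle(nodes)
        for n in nodes:
            if adj[n]:
                counts = Counter(labels[m] for m in adj[n])
                labels[n] = max(counts, key=lambda l: (counts[l], l))
    return labels

# Two hypothetical keyword clusters (disconnected triangles).
adj = {'bank': {'loan', 'credit'}, 'loan': {'bank', 'credit'},
       'credit': {'bank', 'loan'},
       'river': {'water', 'delta'}, 'water': {'river', 'delta'},
       'delta': {'river', 'water'}}
communities = label_propagation(adj)
```

Each community found this way would correspond to one candidate facet of the resource space, to be named via the Wikipedia concept hierarchy.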

Relevance:

100.00%

Publisher:

Abstract:

Risk capital allocation in finance is important both theoretically and in practical applications. How should the risk of a given portfolio be shared among its sub-portfolios? How should capital reserves be set to cover the outstanding risks, and how should those reserves be assigned to the business units? We take an axiomatic approach to capital allocation, working from a set of required basic properties. Our starting point is the result of Csóka and Pintér (2010) that the axioms of coherent measures of risk are incompatible with certain fairness, incentive-compatibility and stability requirements on capital allocation. In this paper we examine these requirements with analytical and simulation tools, analysing both the capital allocation methods used in practice and those of theoretical interest. The main conclusion is that the problem raised by Csóka and Pintér (2010) is also relevant in practice: it is not merely a theoretical issue but a very common practical one. A further contribution of the paper is that, by characterising the capital allocation methods examined, it helps practitioners choose among the different methods available.
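One widely studied rule in this axiomatic literature is the Shapley-value allocation, which charges each unit its average marginal contribution to total risk. The sketch below computes it for a hypothetical two-unit, two-scenario example with a worst-case-loss risk measure; the loss table and measure are illustrative, not from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_allocation(units, risk):
    """Shapley-value capital allocation: each unit is charged its
    average marginal contribution to portfolio risk, taken over all
    coalitions of the other units with the usual Shapley weights."""
    n = len(units)
    alloc = {}
    for i in units:
        others = [u for u in units if u != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (risk(set(S) | {i}) - risk(set(S)))
        alloc[i] = total
    return alloc

# Hypothetical two-scenario loss table; risk = worst-case total loss.
losses = {'A': [1.0, 3.0], 'B': [3.0, 1.0]}

def max_loss(S):
    if not S:
        return 0.0
    return max(sum(losses[u][t] for u in S) for t in range(2))

alloc = shapley_allocation(['A', 'B'], max_loss)
```

By construction the allocations sum exactly to the risk of the full portfolio, which is one of the fairness properties whose joint satisfiability the paper investigates.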

Relevance:

100.00%

Publisher:

Abstract:

The growing interest in quantifying the cultural and creative industries, and in making visible the economic contribution of activities related to culture, demands first of all the construction of internationally comparable analytical frameworks. Currently there are three major bodies addressing this issue, whose comparative study is the focus of this article: the UNESCO Framework for Cultural Statistics (FCS-2009), the European Framework for Cultural Statistics (ESSnet-Culture 2012) and the methodological resource of the "Convenio Andrés Bello" group for working with the Satellite Accounts on Culture in Ibero-America (CAB-2015). Measurements of the cultural sector provide the information necessary for the correct planning of cultural policies, which in turn sustains the industries and promotes cultural diversity. The text identifies the differences among the three models at three levels of analysis: the sectors, the cultural activities, and the criteria each framework uses to determine the distribution of activities by sector. The end result is that the cultural statistics of countries implementing different frameworks cannot be compared.