877 results for Axiomatic Models of Resource Allocation
Abstract:
After the 2010 Haiti earthquake, which hit the city of Port-au-Prince, the capital of Haiti, a multidisciplinary working group of specialists (seismologists, geologists, engineers and architects) from several Spanish universities and from Haiti joined efforts under the SISMO-HAITI project (financed by the Universidad Politecnica de Madrid) with one objective: the evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, and emergency and resource management. In this paper, as a first step towards estimating the structural damage caused by future earthquakes in the country, a calibration of damage functions has been carried out by means of a two-stage procedure. After compiling a database of damage observed in the city after the earthquake, the exposure model (building stock) was classified and, through an iterative two-step calibration process, a specific set of damage functions for the country has been proposed. Additionally, Next Generation Attenuation (NGA) models and Vs30 models have been analysed to choose the most appropriate ones for seismic risk estimation in the city. Finally, in a subsequent paper, these functions will be used to estimate a seismic risk scenario for a future earthquake.
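The two-stage calibration itself is not detailed in the abstract; purely as an illustration of what fitting a damage (fragility) function to observed damage can look like, the sketch below fits a lognormal fragility curve to invented intensity-damage pairs by grid search. The data and parameter ranges are assumptions and are unrelated to the SISMO-HAITI observations.

```python
import math

# Generic illustration only: fit a lognormal fragility curve
# P(damage | intensity a) = Phi(ln(a / theta) / beta) to hypothetical observed
# damage fractions by grid search over (theta, beta). The data below are
# invented and are not the SISMO-HAITI observations.

intensity = [0.1, 0.2, 0.3, 0.4, 0.6, 0.8]        # e.g. PGA in g (assumed)
observed  = [0.05, 0.15, 0.35, 0.55, 0.80, 0.92]  # assumed damage fractions

def fragility(a, theta, beta):
    """Lognormal fragility: probability of damage at intensity a."""
    return 0.5 * (1.0 + math.erf(math.log(a / theta) / (beta * math.sqrt(2.0))))

best = (float("inf"), None, None)
for i in range(1, 100):
    theta = i / 100.0                              # candidate median capacity
    for j in range(1, 100):
        beta = j / 100.0                           # candidate lognormal std. dev.
        err = sum((fragility(a, theta, beta) - o) ** 2
                  for a, o in zip(intensity, observed))
        if err < best[0]:
            best = (err, theta, beta)

print("fitted theta=%.2f, beta=%.2f, squared error=%.4f" % (best[1], best[2], best[0]))
```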
Abstract:
One of the most important lessons learned during the 2008-09 financial crisis was that the informational toolbox on which policymakers base their decisions about competitiveness had become outdated in terms of both data sources and data analysis. The toolbox is particularly outdated when it comes to tapping the potential of micro data for the analysis of competitiveness, a serious problem given that it is firms, rather than countries, that compete on global markets.
Abstract:
In this paper, the utilization of high data-rate channels by threading the sending and receiving operations is studied. As communication technology evolves, higher speeds are used in more and more applications. Generating traffic at Gbps data rates, however, brings complications, especially if the UDP protocol is used and packet fragmentation must be avoided, for example for high-speed reliable transport protocols based on UDP. In such a situation the Ethernet packet size has to correspond to the standard 1500-byte MTU [1], which is widely used in the Internet. A system may not have enough capacity to send messages at the necessary rate in single-threaded mode. A possible solution is to use more threads, which can be efficient on the now widespread multicore systems. The fact that a non-constant data flow can be expected in a real network brings another object of study: automatic adaptation to traffic that changes during runtime. Cases investigated in this paper include adjusting the number of threads to reach a given speed and keeping the sending rate at a given level when the CPU becomes heavily loaded by other processes while sending data.
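As a rough illustration of the kind of sender described above, the following Python sketch (not the paper's implementation; the destination address, thread count, target rate and payload size are assumptions for illustration) spreads a target aggregate rate across several threads, each sending MTU-sized UDP datagrams. In CPython the global interpreter lock limits how much such threads can gain, so a production traffic generator would typically rely on native threads or multiple processes.

```python
import socket, threading, time

DEST = ("127.0.0.1", 5005)      # assumed destination
PAYLOAD = b"\x00" * 1472        # 1500-byte MTU minus 20 B IP and 8 B UDP headers
TARGET_BPS = 1_000_000_000      # assumed aggregate target: 1 Gbps
THREADS = 4                     # assumed thread count

def sender(rate_bps, stop):
    """Send MTU-sized UDP datagrams at roughly rate_bps until stop is set."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = len(PAYLOAD) * 8 / rate_bps      # seconds per datagram
    next_send = time.perf_counter()
    while not stop.is_set():
        sock.sendto(PAYLOAD, DEST)
        next_send += interval
        delay = next_send - time.perf_counter()
        if delay > 0:
            time.sleep(delay)

stop = threading.Event()
workers = [threading.Thread(target=sender, args=(TARGET_BPS / THREADS, stop))
           for _ in range(THREADS)]
for w in workers:
    w.start()
time.sleep(5)                   # run for a few seconds
stop.set()
for w in workers:
    w.join()
```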
Abstract:
Dilatant faults often form in rocks containing pre-existing joints, but the effects of joints on fault segment linkage and fracture connectivity are not well understood. We present an analogue modeling study using cohesive powder with pre-formed joint sets in the upper layer, varying the angle between the joints and a rigid basement fault. We analyze interpreted map-view photographs at maximum displacement for damage zone width, number of connected joints, number of secondary fractures, degree of segmentation, and area fraction of massively dilatant fractures. Particle imaging velocimetry provides insights into the deformation history of the experiments and illustrates the localization pattern of fault segments. Results show that with increasing angle between joint-set strike and basement-fault strike the number of secondary fractures and the number of connected joints increase, while the area fraction of massively dilatant fractures shows only a minor increase. Models without pre-existing joints show far lower area fractions of massively dilatant fractures while forming distinctly more secondary fractures.
Abstract:
Federal Railway Administration, Office of Safety, Washington, D.C.
Abstract:
Federal Highway Administration, Office of Implementation, McLean, Va.
Abstract:
"OAEP-10."
Abstract:
In relation to motor control, the basal ganglia have been implicated in both the scaling and focusing of movement. Hypokinetic and hyperkinetic movement disorders manifest as a consequence of overshooting and undershooting GPi (globus pallidus internus) activity thresholds, respectively. Recently, models of motor control have been borrowed to translate cognitive processes relating to the overshooting and undershooting of GPi activity, including attention and executive function. Linguistic correlates, however, are yet to be extrapolated in sufficient detail. The aims of the present investigation were to: (1) characterise cognitive-linguistic processes within hypokinetic and hyperkinetic neural systems, as defined by motor disturbances; (2) investigate the impact of surgically-induced GPi lesions upon language abilities. Two Parkinsonian cases with opposing motor symptoms (akinetic versus dystonic/dyskinetic) served as experimental subjects in this research. Assessments were conducted prior to, as well as 3 and 12 months following, bilateral posteroventral pallidotomy (PVP). Reliable changes in performance (i.e. both improvements and decrements) were typically restricted to tasks demanding complex linguistic operations across subjects. Hyperkinetic motor symptoms were associated with an initial overall improvement in complex language function as a consequence of bilateral PVP, which diminished over time, suggesting a decrescendo effect relative to surgical beneficence. In contrast, hypokinetic symptoms were associated with a more stable longitudinal linguistic profile, albeit defined by higher proportions of reliable decline versus improvement in postoperative assessment scores. The above findings endorsed the integration of the GPi within cognitive mechanisms involved in the arbitration of complex language functions. In relation to models of motor control, 'focusing' was postulated to represent the neural processes underpinning lexical-semantic manipulation, and 'scaling' the potential allocation of cognitive resources during the mediation of high-level linguistic tasks. © 2005 Elsevier Ltd. All rights reserved.
Abstract:
Analysis of the equity premium puzzle has focused on private sector capital markets. The object of this paper is to consider the welfare and policy implications of each of the broad classes of explanations of the equity premium puzzle. As would be expected, the greater the deviation from the first-best outcome implied by a given explanation of the equity premium puzzle, the more interventionist are the implied policy conclusions. Nevertheless, even explanations of the equity premium puzzle consistent with a general consumption-based asset pricing model have important welfare and policy implications.
Abstract:
Resource allocation in sparsely connected networks, a representative problem of systems with real variables, is studied using the replica and Bethe approximation methods. An efficient distributed algorithm is devised on the basis of insights gained from the analysis and is examined using numerical simulations, showing excellent performance and full agreement with the theoretical results. The physical properties of the resource allocation model are discussed.
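The distributed algorithm itself is not described in the abstract; the following sketch is only a generic illustration of distributed resource balancing on a sparsely connected network, in which each link repeatedly shifts a fraction of the surplus difference between its endpoints. The node count, connectivity and relaxation step are assumptions.

```python
import random

# Generic illustration (not the algorithm from the paper): nodes on a sparse
# random graph hold a resource surplus/deficit, and each iteration every link
# transfers a small fraction of the difference between its endpoints.

random.seed(0)
N, K = 100, 3                       # assumed: 100 nodes, ~3 links per node
links = set()
while len(links) < N * K // 2:
    i, j = random.sample(range(N), 2)
    links.add((min(i, j), max(i, j)))

surplus = [random.gauss(0.0, 1.0) for _ in range(N)]   # net resource per node
step = 0.2                                             # assumed relaxation step

for _ in range(200):
    for i, j in links:
        flow = step * (surplus[i] - surplus[j])        # move resource downhill
        surplus[i] -= flow
        surplus[j] += flow

spread = max(surplus) - min(surplus)
print(f"residual spread after balancing: {spread:.3f}")
```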
An integrated multiple criteria decision making approach for resource allocation in higher education
Abstract:
Resource allocation is one of the major decision problems arising in higher education. Resources must be allocated optimally in such a way that the performance of universities can be improved. This paper applies an integrated multiple criteria decision making approach to the resource allocation problem. In the approach, the Analytic Hierarchy Process (AHP) is first used to determine the priority or relative importance of proposed projects with respect to the goals of the universities. Then, a Goal Programming (GP) model incorporating AHP-priority, system, and resource constraints is formulated to select the best set of projects without exceeding the limited available resources. The projects include 'hardware' (tangible university infrastructure) and 'software' (intangible effects that can benefit the university, its members, and its students). In this paper, two commercial packages are used: Expert Choice for determining the AHP priority ranking of the projects, and LINDO for solving the GP model. Copyright © 2007 Inderscience Enterprises Ltd.
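As a toy illustration of the selection step described above (the paper itself uses Expert Choice for the AHP priorities and LINDO for the goal program), the sketch below picks the subset of projects with the highest total AHP priority that fits a single resource budget; the priority weights, costs and budget are hypothetical, and a brute-force 0/1 selection stands in for the GP model.

```python
from itertools import combinations

# Illustrative sketch only: hypothetical AHP priority weights and resource
# costs for five candidate projects, with a single budget constraint.

priority = {"P1": 0.35, "P2": 0.25, "P3": 0.18, "P4": 0.12, "P5": 0.10}
cost     = {"P1": 40,   "P2": 35,   "P3": 20,   "P4": 15,   "P5": 10}
budget   = 70                                   # assumed available resource

best_value, best_set = 0.0, ()
for r in range(1, len(priority) + 1):
    for subset in combinations(priority, r):
        if sum(cost[p] for p in subset) <= budget:
            value = sum(priority[p] for p in subset)
            if value > best_value:
                best_value, best_set = value, subset

print("selected projects:", best_set, "total priority:", round(best_value, 2))
```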
Abstract:
Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
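For context, a minimal standard global-best PSO loop is sketched below on a simple test function; the dispersive PSO variant proposed in the work modifies such a scheme to counter premature convergence and is not reproduced here. The parameter values are common textbook choices, not those of the thesis.

```python
import random

def sphere(x):                      # simple test objective
    return sum(v * v for v in x)

random.seed(1)
DIM, SWARM, ITERS = 5, 20, 200
W, C1, C2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients (assumed)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sphere)

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # velocity update: inertia + pull towards personal and global bests
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sphere)

print("best objective found:", round(sphere(gbest), 6))
```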
Abstract:
In this paper, we propose a resource allocation scheme to minimize transmit power for multicast orthogonal frequency division multiple access systems. The proposed scheme allows users to have different symbol error rates (SER) across subcarriers and guarantees an average bit error rate and transmission rate for all users. We first provide an algorithm to determine the optimal bits and target SER on subcarriers. Because the worst-case complexity of the optimal algorithm is exponential, we further propose a suboptimal algorithm that separately assigns bits and adjusts SER with lower complexity. Numerical results show that the proposed algorithm can effectively improve the performance of multicast orthogonal frequency division multiple access systems and that the performance of the suboptimal algorithm is close to that of the optimal one. Copyright © 2012 John Wiley & Sons, Ltd.
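The optimal and suboptimal algorithms are not spelled out in the abstract; as a generic illustration of the power-minimizing bit loading such schemes build on, the sketch below greedily assigns each extra bit to the subcarrier where it costs the least additional transmit power. The channel gains, noise power, SNR gap and target rate are assumptions, not values from the paper.

```python
import heapq

# Generic illustration (not the algorithms from the paper): greedy bit loading
# that minimizes total transmit power for a target number of bits per OFDM symbol.

GAIN = [0.9, 0.5, 1.3, 0.2, 0.7, 1.1]   # assumed subcarrier power gains
NOISE = 1e-3                            # assumed noise power per subcarrier
GAP = 4.0                               # assumed SNR gap for the target error rate
TARGET_BITS = 12                        # assumed bits per OFDM symbol

def extra_power(bits, gain):
    """Additional power needed to go from `bits` to `bits + 1` on a subcarrier."""
    return GAP * NOISE * (2 ** bits) / gain

bits = [0] * len(GAIN)
heap = [(extra_power(0, g), k) for k, g in enumerate(GAIN)]
heapq.heapify(heap)

total_power = 0.0
for _ in range(TARGET_BITS):
    cost, k = heapq.heappop(heap)       # cheapest next bit across subcarriers
    total_power += cost
    bits[k] += 1
    heapq.heappush(heap, (extra_power(bits[k], GAIN[k]), k))

print("bits per subcarrier:", bits)
print("total transmit power:", round(total_power, 4))
```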
Abstract:
We introduce self-interested evolutionary market agents, which act on behalf of service providers in a large decentralised system, to adaptively price their resources over time. Our agents competitively co-evolve in the live market, driving it towards the Bertrand equilibrium, the non-cooperative Nash equilibrium at which all sellers charge their reserve price and share the market equally. We demonstrate that this outcome results in even load-balancing between the service providers. Our contribution in this paper is twofold: the use of on-line competitive co-evolution of self-interested service providers to drive a decentralised market towards equilibrium, and a demonstration that load-balancing behaviour emerges under the assumptions we describe. Unlike previous studies on this topic, all our agents are entirely self-interested; no cooperation is assumed. This makes our problem a non-trivial and more realistic one.
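As a hedged illustration of the dynamic described above (not the authors' evolutionary agents), the following simulation lets sellers with a common reserve price repeatedly undercut the cheapest offer while buyers always choose the lowest price; prices fall to the reserve price and the load spreads evenly across sellers. The seller count, reserve price, buyer count and undercutting rule are assumptions.

```python
import random

# Illustration only: price undercutting towards a common reserve price, with
# buyers always picking the cheapest seller (ties broken at random), mirroring
# the Bertrand outcome and even load-balancing described above.

random.seed(0)
SELLERS, RESERVE, BUYERS, ROUNDS = 4, 1.0, 1000, 50
price = [random.uniform(2.0, 5.0) for _ in range(SELLERS)]

for _ in range(ROUNDS):
    cheapest = min(price)
    for s in range(SELLERS):
        # undercut the current best offer, but never go below the reserve price
        price[s] = max(RESERVE, min(price[s], cheapest * 0.95))

load = [0] * SELLERS
for _ in range(BUYERS):
    best = min(price)
    choices = [s for s in range(SELLERS) if price[s] == best]
    load[random.choice(choices)] += 1

print("final prices:", [round(p, 3) for p in price])
print("requests per seller:", load)
```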