60 results for Congestion avoidance
Abstract:
High Performance Computing is a rapidly evolving area of computer science that aims to solve complex computational problems by combining computational nodes connected through high-speed networks. This work concentrates on the problems that appear in such networks, focusing especially on deadlock, which can degrade communication efficiency or even destroy the balance and paralyze the network. The goal of this work is deadlock avoidance through the use of virtual channels in the switches of the network where the problem appears. Deadlock avoidance ensures that no data is lost inside the network, at the cost of increased latency for the served packets, due to the extra computation the switches must perform to apply the policy.
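The virtual-channel idea can be illustrated with the classic dateline scheme on a ring; the sketch below is a minimal Python illustration, and the ring size, the two-VC scheme, and the dateline at node 0 are assumptions, not the paper's exact policy.

```python
RING_SIZE = 8
DATELINE = 0  # crossing the link into node 0 switches virtual channels

def next_hop(node: int) -> int:
    """Clockwise neighbour on the ring."""
    return (node + 1) % RING_SIZE

def select_vc(current: int, vc: int) -> int:
    """Packets start on VC0; once they cross the dateline they must stay
    on VC1, so no cycle of full buffers can ever close."""
    return 1 if vc == 1 or next_hop(current) == DATELINE else vc

def route(src: int, dst: int) -> list:
    """(node, vc) sequence a packet occupies from src to dst."""
    path, node, vc = [(src, 0)], src, 0
    while node != dst:
        vc = select_vc(node, vc)
        node = next_hop(node)
        path.append((node, vc))
    return path

# A packet from node 6 to node 2 crosses the dateline and is promoted
# to VC1, breaking the channel-dependency cycle that causes deadlock.
print(route(6, 2))  # [(6, 0), (7, 0), (0, 1), (1, 1), (2, 1)]
```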
Abstract:
In networks with small buffers, such as networks based on optical packet switching, the convolution approach is regarded as one of the most accurate methods for connection admission control. Admission control and resource management have been addressed in other works oriented to bursty traffic and ATM. This paper focuses on heterogeneous traffic in OPS-based networks. With heterogeneous traffic and bufferless networks, the enhanced convolution approach is a good solution. However, both methods (CA and ECA) incur a high computational cost for large numbers of connections. Two new mechanisms (UMCA and ISCA), based on the Monte Carlo method, are proposed to overcome this drawback. Simulation results show that our proposals achieve a lower computational cost than the enhanced convolution approach, with a small stochastic error in the probability estimation.
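As a rough illustration of the Monte Carlo idea behind such mechanisms (the exact UMCA/ISCA estimators are not given here), the sketch below samples aggregate on/off loads on a bufferless link and accepts a connection only if the estimated overflow probability stays below a target; all connection parameters and thresholds are assumptions.

```python
import random

def overflow_probability(connections, capacity, samples=100_000):
    """connections: list of (peak_rate, activity_prob) on/off sources.
    Estimates P(aggregate active rate > capacity) by sampling."""
    overflows = 0
    for _ in range(samples):
        load = sum(peak for peak, p in connections if random.random() < p)
        if load > capacity:
            overflows += 1
    return overflows / samples

def admit(connections, new_conn, capacity, target=1e-3):
    """Accept the new connection only if the estimated overflow
    probability of the enlarged aggregate stays below the target."""
    return overflow_probability(connections + [new_conn], capacity) < target

existing = [(1.0, 0.3)] * 40 + [(2.5, 0.1)] * 10  # two traffic classes
print(admit(existing, (2.5, 0.1), capacity=25.0))
```

The cost per decision scales with samples times connections, which is why sampling can undercut convolution when the number of connections grows.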
Abstract:
This paper provides a theoretical and empirical analysis of the relationship between airport congestion and airline network structure. We find that the development of hub-and-spoke (HS) networks may have detrimental effects on social welfare in the presence of airport congestion. The theoretical analysis shows that, although airline profits are typically higher under HS networks, congestion could create incentives for airlines to adopt fully-connected (FC) networks. However, the welfare analysis leads to the conclusion that airlines may have an inefficient bias towards HS networks. In line with the theoretical analysis, our empirical results show that network airlines are weakly influenced by congestion in their choice of frequencies from/to their hub airports. Consistent with this result, we confirm that delays are higher in hub airports, controlling for concentration and airport size. Keywords: airlines; airport congestion; fully-connected networks; hub-and-spoke networks; network efficiency. JEL Classification Numbers: L13; L2; L93.
Abstract:
This paper investigates the link between brand performance and cultural primes in high-risk, innovation-based sectors. In the theory section, we propose that the level of cultural uncertainty avoidance embedded in a firm determines its marketing creativity by increasing the complexity and the broadness of a brand; it also determines the rate of the firm's product innovations. Marketing creativity and product innovation, in turn, influence the firm's marketing performance. Empirically, we study trademarked promotion in the Software Security Industry (SSI). Our sample consists of 87 firms active in SSI across 11 countries in the period 1993-2000. We use data from the SSI-related trademarks registered by these firms, ending up with 2,911 SSI-related trademarks and a panel of 18,213 observations. We estimate a two-stage model in which we first predict the complexity and the broadness of a trademark, as measures of marketing creativity, and the rate of product innovations. Among several control variables, our variable of theoretical interest is Hofstede's uncertainty avoidance cultural index. Then, we estimate trademark duration with a hazard model using the predicted complexity and broadness as well as the rate of product innovations, along with the same control variables. Our evidence confirms that cultural uncertainty avoidance affects the duration of trademarks through the firm's marketing creativity and product innovation.
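The two-stage design can be sketched as follows, with synthetic data standing in for the SSI panel and assumed variable names; the first stage predicts trademark complexity from the uncertainty-avoidance index, and the second feeds the prediction into a duration model (here a Cox hazard model via the lifelines library, which may differ from the authors' exact specification).

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
uai = rng.uniform(20, 100, n)                   # Hofstede uncertainty avoidance
complexity = 0.02 * uai + rng.normal(0, 1, n)   # latent marketing creativity
duration = rng.exponential(1 / np.exp(0.3 * complexity))
event = rng.random(n) < 0.8                     # some durations are censored

# Stage 1: predict trademark complexity from the cultural index.
stage1 = LinearRegression().fit(uai.reshape(-1, 1), complexity)
predicted = stage1.predict(uai.reshape(-1, 1))

# Stage 2: hazard model of trademark duration on predicted complexity.
df = pd.DataFrame({"duration": duration, "event": event,
                   "pred_complexity": predicted})
cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
cph.print_summary()
```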
Abstract:
We propose a stylized model of a problem-solving organization whose internal communication structure is given by a fixed network. Problems arrive randomly anywhere in this network and must find their way to their respective specialized solvers by relying on local information alone. The organization handles multiple problems simultaneously, so the process may be subject to congestion. We provide a characterization of the threshold of collapse of the network and of the stock of floating problems (or average delay) that prevails below that threshold. We build upon this characterization to address a design problem: determining what kind of network architecture optimizes performance for any given problem arrival rate. We conclude that, for low arrival rates, the optimal network is very polarized (i.e. star-like or "centralized"), whereas it is largely homogeneous (or "decentralized") for high arrival rates. We also show that, if an auxiliary assumption holds, the transition between these two opposite structures is sharp, and they are the only ones to ever qualify as optimal.
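A toy simulation conveys the congestion mechanism: problems arrive at rate rho per node per step, each node serves one problem per step, and on a star every non-local problem transits the hub. Watching the stock of floating problems as rho grows locates the collapse threshold numerically. The topology, routing, and parameters below are illustrative assumptions, not the paper's model.

```python
import random
from collections import deque

def simulate_star(n=20, rho=0.05, steps=2000):
    """Node 0 is the hub; returns the stock of unsolved problems."""
    queues = [deque() for _ in range(n)]
    stock = 0
    for _ in range(steps):
        for node in range(n):                     # random arrivals
            if random.random() < rho:
                queues[node].append(random.randrange(n))  # pick a solver
                stock += 1
        for node in range(n):                     # one service per node
            if queues[node]:
                dest = queues[node].popleft()
                if dest == node:                  # reached its solver
                    stock -= 1
                elif node == 0:                   # hub forwards directly
                    queues[dest].append(dest)
                else:                             # leaves route via hub
                    queues[0].append(dest)
    return stock

for rho in (0.02, 0.05, 0.2):
    print(rho, simulate_star(rho=rho))  # stock explodes past the threshold
```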
Abstract:
We propose a model, and solution methods, for locating a fixed number of multiple-server, congestible common service centers or congestible public facilities. Locations are chosen so as to minimize consumers' congestion (or queuing) and travel costs, considering that all demand must be served. Customers choose the facilities to which they travel in order to receive service at minimum travel and congestion cost. As a proxy for this criterion, total travel and waiting costs are minimized. The travel cost is a general function of the origin and destination of the demand, while the congestion cost is a general function of the number of customers in queue at the facilities.
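A minimal sketch of this cost structure, assuming an M/M/1-style congestion term, distances on a line, nearest-facility assignment as a simplification of the customer choice rule, and exhaustive search in place of the paper's solution methods:

```python
from itertools import combinations

DEMAND = {0: 5, 1: 8, 2: 3, 3: 6}          # customers per demand node
MU = 15.0                                  # service rate per facility

def dist(i, j):                            # travel cost on a line
    return abs(i - j)

def total_cost(sites):
    load, travel = {s: 0.0 for s in sites}, 0.0
    for node, d in DEMAND.items():         # nearest-facility assignment
        s = min(sites, key=lambda s: dist(node, s))
        load[s] += d
        travel += d * dist(node, s)
    if any(l >= MU for l in load.values()):
        return float("inf")                # a facility would be swamped
    # M/M/1 mean number in system, l/(MU - l), as the congestion cost.
    return travel + sum(l / (MU - l) for l in load.values())

best = min(combinations(DEMAND, 2), key=total_cost)
print(best, total_cost(best))
```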
Abstract:
The problem of searchability in decentralized complex networks is of great importance in computer science, economics, and sociology. We present a formalism that is able to cope simultaneously with the problem of search and with the congestion effects that arise when parallel searches are performed, and we obtain expressions for the average search cost both in the presence and in the absence of congestion. This formalism is used to obtain optimal network structures for a system using a local search algorithm. It is found that only two classes of networks can be optimal: star-like configurations, when the number of parallel searches is small, and homogeneous-isotropic configurations, when it is large.
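A sketch of a local search of the kind analysed here, assuming each node knows only its own neighbours: forward the query to a neighbour of the target if one is adjacent, otherwise to a random neighbour. Comparing a star with a ring shows why star-like structures win on raw search cost; the congestion side is not simulated in this sketch.

```python
import random

def local_search(adj, src, dst, max_hops=10_000):
    """Hops needed to reach dst knowing only one's neighbours."""
    node, hops = src, 0
    while node != dst and hops < max_hops:
        nbrs = adj[node]
        node = dst if dst in nbrs else random.choice(sorted(nbrs))
        hops += 1
    return hops

def star(n):   # node 0 is the hub
    return {0: set(range(1, n))} | {i: {0} for i in range(1, n)}

def ring(n):
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def avg_cost(adj, trials=500):
    nodes = list(adj)
    return sum(local_search(adj, *random.sample(nodes, 2))
               for _ in range(trials)) / trials

print("star:", avg_cost(star(50)))   # ~2 hops: cheap but hub-congestible
print("ring:", avg_cost(ring(50)))   # far costlier, but load is spread
```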
Abstract:
The τ-function and the η-function are phenomenological models that are widely used in the context of timing interceptive actions and collision avoidance, respectively. The two models were previously considered unrelated: τ is a decreasing function that provides an estimate of time-to-contact (ttc) in the early phase of an object's approach; in contrast, η has a maximum before ttc. Furthermore, it is not clear how either function could be implemented at the neuronal level in a biophysically plausible fashion. Here we propose a new framework, the corrected modified Tau function, capable of predicting both τ-type and η-type responses. The outstanding property of the new framework is its resilience to noise. We show that it can be derived from a firing rate equation and that, like η, it serves to describe the response curves of collision-sensitive neurons. Furthermore, we show that it predicts the psychophysical performance of subjects determining ttc. The new framework is thus validated successfully against published and novel experimental data. Within the framework, links between τ-type and η-type neurons are established. Therefore, it could possibly serve as a model explaining the co-occurrence of such neurons in the brain.
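A numerical sketch under the standard definitions from this literature (an assumption, since the abstract's symbols were garbled): for an object of half-size l approaching at constant speed v, the optical angle is theta(t) = 2*atan(l/d(t)); tau = theta/theta-dot estimates the remaining ttc, while eta = C*theta-dot*exp(-alpha*theta) rises and peaks before contact. All parameter values are illustrative.

```python
import math

l, v = 0.5, 2.0                    # object half-size, approach speed
alpha, C = 3.0, 1.0                # eta-function parameters
d0, dt = 10.0, 0.01                # initial distance, time step

def theta(d):
    return 2 * math.atan(l / d)    # optical angle subtended by the object

prev = theta(d0)
for step in range(1, int((d0 - l) / (v * dt))):
    t = step * dt
    d = d0 - v * t                             # true time-to-contact is d/v
    th = theta(d)
    th_dot = (th - prev) / dt
    prev = th
    tau = th / th_dot                          # decreases, tracking ttc
    eta = C * th_dot * math.exp(-alpha * th)   # rises, peaks before contact
    if step % 50 == 0:
        print(f"t={t:4.1f}  ttc={d / v:5.2f}  tau={tau:5.2f}  eta={eta:6.3f}")
```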
Abstract:
Congestion costs are emerging as one of the most important challenges faced by metropolitan planners and transport authorities in first-world economies. In the US these costs were as high as 78 billion dollars in 2005 and are growing due to fast increases in travel delays. In order to tackle the current, severe levels of congestion, the US Department of Transportation has recently started a program to initiate congestion pricing in five metropolitan areas. In this context it is important to determine the factors that help its implementation and success, but also the problems and difficulties associated with charging projects. In this article we analyze worldwide experiences with urban road charging in order to extract interesting and helpful lessons for policy makers engaged in congestion pricing projects and for those interested in introducing traffic management tools to regulate the entrance to big cities.
Abstract:
In this paper we consider a model of cooperative production in which rational agents have the possibility of engaging in sabotage activities that decrease output. It is shown that sabotage depends on the interplay between the degree of congestion, the technology of sabotage, the number of agents, the degree of meritocracy, and the form of the sharing rule. In particular, it is shown that, ceteris paribus, meritocratic systems give more incentives to sabotage than egalitarian systems. We address two questions: the degree of meritocracy that is compatible with the absence of sabotage, and the existence of a Nash equilibrium with and without sabotage.
Abstract:
We consider the following allocation problem: a fixed number of public facilities must be located on a line. Society is composed of $N$ agents, each of whom must be allocated to one and only one of these facilities. Agents have single-peaked preferences over the possible locations of the facility they are assigned to, and do not care about the location of the remaining facilities. There is no congestion. In this context, we observe that if a public decision is a Condorcet winner, then it satisfies nice properties of internal and external stability. Though in many contexts and for some preference profiles there may be no Condorcet winner, we study the extent to which stability can be made compatible with the requirement of choosing Condorcet winners whenever they exist.
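The Condorcet test can be sketched directly, under assumed peaks, candidate site pairs, and nearest-facility assignment: one decision beats another if a strict majority of agents ends up closer to its assigned facility, and a Condorcet winner is beaten by no rival.

```python
PEAKS = [1, 2, 6, 7, 9]   # single-peaked preferences of five agents

def assign(sites):
    """Public decision: send each agent to its nearest facility."""
    return tuple(min(sites, key=lambda s: abs(s - p)) for p in PEAKS)

def beats(a, b):
    """True if decision a is strictly preferred to b by a majority."""
    pref_a = sum(abs(p - a[i]) < abs(p - b[i]) for i, p in enumerate(PEAKS))
    pref_b = sum(abs(p - b[i]) < abs(p - a[i]) for i, p in enumerate(PEAKS))
    return pref_a > pref_b

def condorcet_winners(decisions):
    return [a for a in decisions
            if not any(beats(b, a) for b in decisions if b != a)]

# Three rival placements of two facilities on the line.
candidates = [assign(s) for s in [(2, 7), (1, 8), (4, 9)]]
print(condorcet_winners(candidates))   # [(2, 2, 7, 7, 7)]
```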
Abstract:
The aim of this article is to assess the effects of several territorial characteristics, specifically agglomeration economies, on industrial location processes in the Spanish region of Catalonia. In theory, agglomeration generates economies that favour the location of new establishments, but an excessive level of agglomeration may cause diseconomies as congestion effects arise. The empirical evidence on this matter is inconclusive, probably because the models used so far are not flexible enough. We use a more flexible semiparametric specification, which allows us to study the nonlinear relationship between the different types of agglomeration levels and location processes. Our main statistical source is the REIC (Catalan Manufacturing Establishments Register), which contains plant-level microdata on the location of new industrial establishments. Keywords: agglomeration economies, industrial location, Generalized Additive Models, nonparametric estimation, count data models.
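A hedged sketch of such a semiparametric count-data specification, using the pygam library (an assumption; the authors' exact GAM software and variables are not stated) with synthetic data in which agglomeration first helps and then congests:

```python
import numpy as np
from pygam import PoissonGAM, s

rng = np.random.default_rng(1)
n = 400
agglo = rng.uniform(0, 10, n)            # agglomeration level per zone
diversity = rng.uniform(0, 5, n)         # a second territorial trait
rate = np.exp(0.8 * agglo - 0.08 * agglo**2 + 0.1 * diversity - 1.0)
births = rng.poisson(rate)               # count of new establishments

X = np.column_stack([agglo, diversity])
gam = PoissonGAM(s(0) + s(1)).fit(X, births)   # one smooth per regressor
gam.summary()
# The fitted partial effect of agglomeration should rise and then fall:
# agglomeration economies first, congestion diseconomies afterwards.
```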
Abstract:
This study explores the differences in psychoactive substance consumption habits among young people from Barcelona and Bogotá. It assesses the influence of risk perception on consumption habits and coping strategies, and additionally examines the influence of the perceived severity of a stressful situation on the latter. Participants were 865 young people from both cities, aged 15 to 18. The risk variables studied by Benthin, Slovic and Severson (1993) were used to assess risk perception. Consumption habits were assessed through frequency, intention to consume, and age of onset. Moos's CRI:Youth (1992) was used to determine coping strategies and the appraisal of the stressful problem. Differences were found in the age of first alcohol consumption and of first drunkenness according to gender, city of residence, and the adolescent's age. Young people from Barcelona show a higher propensity toward, and actual consumption of, marijuana and tobacco than young people from Bogotá. Perceiving pleasure or benefits predicts an increase in the intention and frequency of consumption of most substances. Ease of access is only associated with frequent tobacco use. The data suggest that the perceived severity of drug-related stressors and the city of residence have an effect on the use of avoidance and cognitive-approach strategies. Additionally, no differences were detected in the coping strategies employed as a function of the risk-perception variables, with the exception of perceived pressure, which increases the use of problem reappraisal and reward seeking.
Abstract:
Interconnection networks play an important role in the performance of high-performance computing systems. Message routing management is currently a decisive factor in sustaining network performance. Our proposal builds on an adaptive routing algorithm that distributes message routing to avoid the congestion problems that arise in interconnection networks under the heavy communication volume of scientific and commercial applications. The goal is to adapt the algorithm to the fat-tree, a topology widely used in current systems, and to implement it on Infiniband technology. In our experiments we compare the congestion control mechanism of the Infiniband architecture with our algorithm. The results show that we improve latency by more than 50% and throughput by between 38% and 81%.
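An illustrative sketch of the adaptive idea: in a fat-tree the upward path can be chosen freely, so a switch can send each packet on its least-loaded upward port and spread bursts across links; the downward path to the destination is then fixed. Port counts and the load metric are assumptions, not the paper's algorithm or its Infiniband implementation.

```python
import random

class Switch:
    def __init__(self, n_up):
        self.up_load = [0] * n_up     # queued packets per upward port

    def pick_up_port(self):
        """Least-loaded upward port; ties broken at random."""
        lo = min(self.up_load)
        return random.choice(
            [p for p, load in enumerate(self.up_load) if load == lo])

    def send_up(self):
        port = self.pick_up_port()
        self.up_load[port] += 1
        return port

sw = Switch(n_up=4)
# A burst of 12 packets is spread evenly over the four upward links
# instead of piling onto one link and triggering congestion control.
print([sw.send_up() for _ in range(12)])
print(sw.up_load)   # -> [3, 3, 3, 3]
```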