29 results for heterogeneous catalyst
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
We analyze the incentives for cooperation of three players differing in their efficiency of effort in a contest game. We concentrate on the non-cooperative bargaining foundation of coalition formation, and therefore, we adopt a two-stage model. In the first stage, individuals form coalitions following a bargaining protocol similar to the one proposed by Gul (1989). Afterwards, coalitions play the contest game of Esteban and Ray (1999) within the resulting coalition structure of the first stage. We find that the grand coalition forms whenever the distribution of the bargaining power in the coalition formation game is equal to the distribution of the relative efficiency of effort. Finally, we use the case of equal bargaining power for all individuals to show that other types of coalition structures may be observed as well.
Abstract:
Report on the scientific stay carried out at the Department of Chemistry, University of North Texas (USA), from September to November 2006. It covers two computational chemistry studies: an experimental and computational study of the intra- and intermolecular hydroarylation of isonitriles, and the development of an improved catalyst for hydrocarbon functionalization.
Abstract:
It has recently been emphasized that, if individuals have heterogeneous dynamics, estimates of shock persistence based on aggregate data are significantly higher than those derived from their disaggregate counterparts. However, a careful examination of the implications of this statement for the various tools routinely employed to measure persistence is missing in the literature. This paper formally examines this issue. We consider a disaggregate linear model with heterogeneous dynamics and compare the values of several measures of persistence across aggregation levels. Interestingly, we show that the average persistence of aggregate shocks, as measured by the impulse response function (IRF) of the aggregate model or by the average of the individual IRFs, is identical at all horizons. This result remains true even in situations where the units are (short-memory) stationary but the aggregate process is long-memory or even nonstationary. In contrast, other popular persistence measures, such as the sum of the autoregressive coefficients or the largest autoregressive root, tend to be higher the higher the aggregation level. We argue, however, that this should be seen more as an undesirable property of these measures than as evidence of different average persistence across aggregation levels. The results are illustrated in an application using U.S. inflation data.
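The IRF-averaging result can be illustrated with a minimal numerical sketch: a toy cross-section of AR(1) units driven by a common shock (all parameter values assumed here, not taken from the paper).

```python
# Toy illustration (assumed values, not the paper's model): heterogeneous AR(1)
# units y_{i,t} = rho_i * y_{i,t-1} + e_t sharing a common shock e_t.
rhos = [0.2, 0.5, 0.9]   # heterogeneous persistence parameters
H = 20                   # number of horizons

# Individual IRFs to a unit common shock: rho_i^h at horizon h.
irf_individual = [[rho ** h for h in range(H)] for rho in rhos]
avg_individual = [sum(col) / len(rhos) for col in zip(*irf_individual)]

# Aggregate IRF: propagate a unit impulse through every unit, then average.
y = [1.0] * len(rhos)                 # e_0 = 1, e_t = 0 afterwards
irf_aggregate = [sum(y) / len(rhos)]
for _ in range(1, H):
    y = [rho * yi for rho, yi in zip(rhos, y)]
    irf_aggregate.append(sum(y) / len(rhos))

# Identical at every horizon, as the abstract states ...
assert all(abs(a - b) < 1e-12 for a, b in zip(irf_aggregate, avg_individual))

# ... while the largest autoregressive root of the aggregate (max rho_i = 0.9)
# overstates the average persistence, illustrating the abstract's warning.
print(irf_aggregate[:3])
```

By linearity, averaging the impulse responses of the units and taking the impulse response of the average coincide, whereas root-based summaries are pulled toward the most persistent unit.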
Abstract:
A multiple-partners assignment game with heterogeneous sales and multiunit demands consists of a set of sellers who own a given number of indivisible units of (potentially many different) goods and a set of buyers who value those units and want to buy at most an exogenously fixed number of units. We define a competitive equilibrium for this generalized assignment game and prove its existence using only linear programming. In particular, we show how to compute equilibrium price vectors from the solutions of the dual linear program associated with the primal linear program defined to find optimal assignments. Using only linear programming tools, we also show (i) that the set of competitive equilibria (pairs of price vectors and assignments) has a Cartesian product structure: each equilibrium price vector is part of a competitive equilibrium with all optimal assignments, and vice versa; (ii) that the set of (restricted) equilibrium price vectors has a natural lattice structure; and (iii) how this structure is translated into the set of agents' utilities that are attainable at equilibrium.
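These objects can be sketched on a toy instance by brute force (all valuations assumed; the paper derives prices from the dual linear program, whereas this sketch simply enumerates candidate price vectors).

```python
from itertools import permutations, product

# Toy instance (assumed values): two sellers, one unit each; two unit-demand buyers.
buyers, goods = ["b1", "b2"], ["g1", "g2"]
v = {("b1", "g1"): 5, ("b1", "g2"): 3,
     ("b2", "g1"): 4, ("b2", "g2"): 1}

# Optimal assignment: maximize total valuation over one-to-one matchings.
best = max(permutations(goods),
           key=lambda gs: sum(v[b, g] for b, g in zip(buyers, gs)))
assignment = dict(zip(buyers, best))

def is_equilibrium(prices):
    """Each buyer's assigned unit maximizes utility v - p (0 for buying nothing)."""
    for b in buyers:
        u_assigned = v[b, assignment[b]] - prices[assignment[b]]
        u_best = max(0, max(v[b, g] - prices[g] for g in goods))
        if u_assigned < u_best:
            return False
    return True

# Enumerate integer price vectors supporting the optimal assignment; the set of
# equilibrium price vectors forms a lattice, so its coordinate-wise minimum is
# itself an equilibrium price vector.
eq_prices = [p for p in product(range(6), repeat=2)
             if is_equilibrium(dict(zip(goods, p)))]
p_min = tuple(min(p[i] for p in eq_prices) for i in range(2))
print(assignment, p_min)   # minimum equilibrium prices for this instance: (2, 0)
```

On this instance the coordinate-wise minimum of the enumerated equilibrium prices is again an equilibrium, illustrating the lattice structure of point (ii).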
Abstract:
In this paper, we consider ATM networks in which the virtual path (VP) concept is implemented. How to multiplex two or more diverse traffic classes while providing different quality of service (QOS) requirements is a complicated open problem. Two distinct options are available: integration and segregation. In an integration approach, all the traffic from different connections is multiplexed onto one VP, which implies that the most restrictive QOS requirements must be applied to all services. Link utilization is therefore decreased, because unnecessarily stringent QOS is provided to all connections. With the segregation approach, the problem can be much simplified if different types of traffic are separated by assigning each a VP with dedicated resources (buffers and links). In that case, however, resources may not be efficiently utilized, because no sharing of bandwidth can take place across VPs. The probability that the bandwidth required by the accepted connections exceeds the capacity of the link is evaluated by the probability of congestion (PC). Since the PC can be expressed as the cell loss probability (CLP), we simply carry out bandwidth allocation using the PC. We first focus on the influence of some parameters (CLP, bit rate, and burstiness) on the capacity required by a VP supporting a single traffic class, using the new convolution approach. Numerical results are presented both to compare the required capacity and to observe under which conditions each approach is preferred.
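For a single traffic class, the convolution approach can be sketched as follows (homogeneous on/off sources; all parameters are assumed for illustration, not taken from the paper):

```python
# Sketch of the convolution approach (assumed parameters): n homogeneous on/off
# sources with a given peak rate and activity probability; the probability of
# congestion PC is P(aggregate demand > VP capacity C).
def convolve(dist_a, dist_b):
    """Distribution of the sum of two independent discrete bandwidth demands."""
    out = {}
    for x, px in dist_a.items():
        for y, py in dist_b.items():
            out[x + y] = out.get(x + y, 0.0) + px * py
    return out

def required_capacity(n_sources, peak, activity, clp_target):
    # Aggregate-demand distribution by repeated convolution of one source.
    source = {0: 1 - activity, peak: activity}
    agg = {0: 1.0}
    for _ in range(n_sources):
        agg = convolve(agg, source)
    # Smallest capacity whose congestion probability meets the CLP target.
    for c in sorted(agg):
        pc = sum(p for demand, p in agg.items() if demand > c)
        if pc <= clp_target:
            return c, pc
    return max(agg), 0.0

# Example: 20 sources, peak rate 2 (arbitrary bandwidth units), 10% activity.
cap, pc = required_capacity(n_sources=20, peak=2, activity=0.1, clp_target=1e-4)
print(cap, pc)
```

The gap between the returned capacity and the mean load (here 4 units) reflects the burstiness penalty that the abstract's parameter study quantifies.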
Abstract:
We present a study of the continuous-time equations governing the dynamics of a susceptible-infected-susceptible (SIS) model on heterogeneous metapopulations. These equations have recently been proposed as an alternative formulation for the spread of infectious diseases in metapopulations in a continuous-time framework. Individual-based Monte Carlo simulations of epidemic spread in uncorrelated networks are also performed, revealing good agreement with analytical predictions under the assumption that transmission (or recovery) and migration occur simultaneously.
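An individual-based simulation of this kind can be sketched on a toy metapopulation (the ring network, patch sizes, and all rates are assumed for illustration; this sketch uses a discrete-time scheme in which reaction and migration occur sequentially within each step, the ordering whose continuous-time limit these papers examine):

```python
import random

# Minimal Monte Carlo sketch (not the papers' exact scheme): discrete-time SIS
# dynamics on a toy metapopulation; each step applies reaction, then migration.
random.seed(1)

# Assumed toy network: 4 patches on a ring, 100 individuals each (10 infected).
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
pop = {k: {"S": 90, "I": 10} for k in neighbours}

beta, mu, D = 0.4, 0.2, 0.1   # infection, recovery, migration probabilities

for step in range(200):
    # Reaction: each S is infected w.p. beta * I/N; each I recovers w.p. mu.
    new = {}
    for k, c in pop.items():
        n = c["S"] + c["I"]
        p_inf = beta * c["I"] / n if n else 0.0
        inf = sum(random.random() < p_inf for _ in range(c["S"]))
        rec = sum(random.random() < mu for _ in range(c["I"]))
        new[k] = {"S": c["S"] - inf + rec, "I": c["I"] + inf - rec}
    # Migration: each individual moves to a uniformly chosen neighbour w.p. D.
    pop = {k: {"S": 0, "I": 0} for k in neighbours}
    for k, c in new.items():
        for state in ("S", "I"):
            for _ in range(c[state]):
                dest = random.choice(neighbours[k]) if random.random() < D else k
                pop[dest][state] += 1

prevalence = sum(c["I"] for c in pop.values()) / sum(c["S"] + c["I"]
                                                     for c in pop.values())
print(prevalence)
```

With these assumed rates the run typically settles near the mean-field endemic level 1 - mu/beta, subject to finite-size fluctuations.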
Abstract:
We present the derivation of the continuous-time equations governing the limit dynamics of discrete-time reaction-diffusion processes defined on heterogeneous metapopulations. We show that, when a rigorous time limit is performed, the lack of an epidemic threshold in the spread of infections is not limited to metapopulations with a scale-free architecture, as had been predicted from dynamical equations in which reaction and diffusion occur sequentially in time.
Abstract:
The front speed problem for nonuniform reaction rate and diffusion coefficient is studied by using singular perturbation analysis, the geometric approach of Hamilton-Jacobi dynamics, and the local speed approach. Exact and perturbed expressions for the front speed are obtained in the limit of large times. For linear and fractal heterogeneities, the analytic results have been compared with numerical results, showing good agreement. Finally, we obtain a general expression for the front speed in the case of smooth and weak heterogeneities.
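As a point of reference for the kind of expression involved, the classical homogeneous Fisher/KPP speed and its local-speed generalization can be written as follows (textbook baselines, not the paper's general result):

```latex
% Homogeneous Fisher/KPP baseline, for u_t = D u_{xx} + r u (1 - u):
v = 2\sqrt{rD},
% local-speed approximation for smooth, weak heterogeneities r(x), D(x):
v(x) \approx 2\sqrt{r(x)\,D(x)}.
```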
Abstract:
Background: Systematic approaches for identifying proteins involved in different types of cancer are needed. Experimental techniques such as microarrays are being used to characterize cancer, but validating their results can be a laborious task. Computational approaches are used to prioritize genes putatively involved in cancer, usually by further analyzing experimental data. Results: We implemented a systematic method using the PIANA software that predicts cancer involvement of genes by integrating heterogeneous datasets. Specifically, we produced lists of genes likely to be involved in cancer by relying on: (i) protein-protein interactions; (ii) differential expression data; and (iii) structural and functional properties of cancer genes. The integrative approach that combines multiple sources of data obtained positive predictive values ranging from 23% (on a list of 811 genes) to 73% (on a list of 22 genes), outperforming the use of any of the data sources alone. We analyze a list of 20 cancer gene predictions, finding that most of them have been recently linked to cancer in the literature. Conclusion: Our approach to identifying and prioritizing candidate cancer genes can be used to produce lists of genes likely to be involved in cancer. Our results suggest that differential expression studies yielding high numbers of candidate cancer genes can be filtered using protein interaction networks.
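The integrative prioritization idea can be sketched with invented toy data (the genes, evidence sources, and gold standard below are all made up, and PIANA's actual scoring is more involved):

```python
# Toy sketch (invented data, not PIANA's actual method): rank candidate genes
# by how many independent evidence sources support them, then measure the
# positive predictive value (PPV) of the top of the list against known labels.
evidence = {
    "G1": {"interacts_with_cancer_gene", "diff_expressed", "cancer_like_props"},
    "G2": {"diff_expressed"},
    "G3": {"interacts_with_cancer_gene", "diff_expressed"},
    "G4": set(),
    "G5": {"cancer_like_props"},
}
known_cancer = {"G1", "G3"}   # assumed gold standard for the toy example

# Integrative score: number of supporting data sources per gene.
ranked = sorted(evidence, key=lambda g: len(evidence[g]), reverse=True)

def ppv(predictions, positives):
    """Fraction of predicted genes that are true positives."""
    return sum(g in positives for g in predictions) / len(predictions)

# Shorter, stricter lists trade recall for precision, as in the abstract.
print(ppv(ranked[:2], known_cancer), ppv(ranked[:4], known_cancer))
```

The trade-off mirrors the abstract's numbers: demanding more converging evidence shortens the candidate list but raises its positive predictive value.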
Abstract:
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data present a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm-related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way, in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.
Abstract:
We study the quantitative properties of a dynamic general equilibrium model in which agents face both idiosyncratic and aggregate income risk, borrowing constraints are state-dependent and bind in some but not all periods, and markets are incomplete. Optimal individual consumption-savings plans and equilibrium asset prices are computed under various assumptions about income uncertainty. We then investigate whether our general equilibrium model with incomplete markets replicates two empirical observations: the high correlation between individual consumption and individual income, and the equity premium puzzle. We find that, when the driving processes are calibrated according to data on wage income in different sectors of the US economy, the results move in the direction of explaining these observations, but the model falls short of explaining the observed correlations quantitatively. If the incomes of agents are assumed to be independent of each other, the observations can be explained quantitatively.
Abstract:
Protectionism enjoys surprising popular support, in spite of deadweight losses. At the same time, trade barriers appear to decline with public information about protection. This paper develops an electoral model with heterogeneously informed voters which explains both facts and predicts the pattern of trade policy across industries. In the model, each agent endogenously acquires more information about his sector of employment. As a result, voters support protectionism, because they learn more about the trade barriers that help them as producers than those that hurt them as consumers. In equilibrium, asymmetric information induces a universal protectionist bias. The structure of protection is Pareto inefficient, in contrast to existing models. The model predicts a Dracula effect: trade policy for a sector is less protectionist when there is more public information about it. Using a measure of newspaper coverage across industries, I find that cross-sector evidence from the United States bears out my theoretical predictions.
Abstract:
In many areas of economics there is a growing interest in how expertise and preferences drive individual and group decision making under uncertainty. Increasingly, we wish to estimate such models to quantify which of these drive decision making. In this paper we propose a new channel through which we can empirically identify expertise and preference parameters by using variation in decisions over heterogeneous priors. Relative to existing estimation approaches, our "Prior-Based Identification" extends the possible environments which can be estimated, and also substantially improves the accuracy and precision of estimates in those environments which can be estimated using existing methods.