99 results for interest costs
Abstract:
With the growing capacity of processing nodes in terms of computing power, more and more data-intensive applications, such as bioinformatics applications, will come to be executed on non-dedicated clusters. Non-dedicated clusters are characterized by their ability to combine the execution of local users' applications with scientific or commercial applications running in parallel. Knowing what effect data-intensive applications have when mixed with other kinds of workload (batch, interactive, SRT, etc.) in non-dedicated environments enables the development of more efficient scheduling policies. Some I/O-intensive applications are based on the MapReduce paradigm; the environments that implement it, such as Hadoop, handle data locality and load balancing automatically and work with distributed file systems. Hadoop's performance can be improved without increasing hardware costs by tuning several key configuration parameters to the cluster's specifications, the input data size, and the complexity of the processing. Tuning these parameters can be too complex for the user and/or administrator, but it aims to guarantee more adequate performance. This work proposes evaluating the impact of I/O-intensive applications on job scheduling in non-dedicated clusters under the MPI and MapReduce paradigms.
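As a purely illustrative sketch of the kind of Hadoop parameter tuning this abstract refers to, a `mapred-site.xml` fragment might adjust the map-side sort buffer and the reducer count; the values below are hypothetical placeholders, not recommendations, and would need to be matched to the actual cluster and input size.

```xml
<!-- Illustrative Hadoop tuning fragment (mapred-site.xml).
     Values are hypothetical and must be sized to the cluster and workload. -->
<configuration>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>256</value> <!-- map-side sort buffer, sized to available RAM -->
  </property>
  <property>
    <name>mapreduce.job.reduces</name>
    <value>8</value>   <!-- reducer count, sized to cluster slots and data -->
  </property>
</configuration>
```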
Abstract:
The aim of this paper is to discover the origins of utility regulation in Spain, and to analyse, from a microeconomic perspective, its characteristics and the impact of regulation on consumers and utilities. Madrid and the Madrilenian utilities are taken as a case study. The electric industry in the period studied was a natural monopoly. Each of the three phases of production (generation, transmission and distribution) had natural monopoly characteristics. Therefore, the most efficient way to generate, transmit and distribute electricity was the monopoly, because one firm can produce a given quantity at a lower cost than the sum of costs incurred by two or more firms. A problem arises because when a firm is the single provider it can charge prices above marginal cost, at monopoly prices. When a monopolist reduces the quantity produced, the price increases, causing consumers to demand less than the economically efficient level and incurring a loss of consumer surplus. The loss of consumer surplus is not completely captured by the monopolist, causing a loss of social surplus, a deadweight loss. The main objective of regulation is to reduce the deadweight loss to a minimum. Regulation is also needed because if the monopolist set prices where marginal cost equals marginal revenue, there would be an incentive for firms to enter the market, creating inefficiency. The Madrilenian industry has been chosen because of the availability of statistical information on costs and production. The complex industry structure and the atomised demand add interest to the analysis. This study will also shed some light on the tariff regulation of the period, which has been poorly studied, and will complement the literature on US electric utility regulation, where a different type of regulation was implemented.
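The monopoly deadweight loss described above can be made concrete with a standard textbook calculation. The numbers below are purely illustrative (linear demand P(Q) = a - bQ and a constant marginal cost c, none of which come from the paper):

```python
# Illustrative deadweight-loss calculation for a linear-demand monopoly.
# Demand P(Q) = a - b*Q with constant marginal cost c (hypothetical values).
a, b, c = 100.0, 1.0, 20.0

# Competitive benchmark: price equals marginal cost.
q_comp = (a - c) / b           # 80.0

# Monopoly outcome: marginal revenue a - 2*b*Q equals marginal cost.
q_mono = (a - c) / (2 * b)     # 40.0
p_mono = a - b * q_mono        # 60.0, above marginal cost

# Deadweight loss: triangle between demand and MC over the forgone output.
dwl = 0.5 * (p_mono - c) * (q_comp - q_mono)
print(q_mono, p_mono, dwl)     # 40.0 60.0 800.0
```

The positive `dwl` is the social surplus lost when the monopolist restricts output, which is what the regulation discussed in the abstract aims to minimize.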
Abstract:
Research project carried out during a stay at the University of Groningen, the Netherlands, between 2007 and 2009. Direct numerical simulation of turbulence (DNS) is a key tool in computational fluid dynamics. On the one hand it allows a better understanding of the physics of turbulence, and on the other the results obtained are key to the development of turbulence models. However, DNS is not a viable technique for the vast majority of industrial applications due to its high computational cost. Therefore, some degree of turbulence modelling is necessary. In this context, important improvements have been introduced based on modelling the (nonlinear) convective term using symmetry-preserving regularizations. The idea is to modify the convective term appropriately so as to reduce the production of smaller and smaller scales (vortex-stretching) while preserving all the invariants of the original equations. So far, these models have been used successfully for relatively high Rayleigh numbers (Ra). At this point, having DNS results for more complex configurations and higher Ra numbers is key. In this context, DNS simulations of a differentially heated cavity with Ra=1e11 and Pr=0.71 were carried out on the MareNostrum supercomputer during the first of the project's two years. In addition, the code was adapted to simulate the flow around a wall-mounted cube at Re=10000. These DNS simulations are the largest performed to date for these configurations, and modelling them correctly is a great challenge due to the complexity of the flows. These new DNS simulations are providing new insights into the physics of turbulence and supplying results that are indispensable for the progress of symmetry-preserving regularization modelling.
Abstract:
We study whether there is scope for using subsidies to smooth out barriers to R&D performance and expand the share of R&D firms in Spain. We consider a dynamic model with sunk entry costs in which firms’ optimal participation strategy is defined in terms of two subsidy thresholds that characterise entry and continuation. We compute the subsidy thresholds from the estimates of a dynamic panel data type-2 tobit model for an unbalanced panel of about 2,000 Spanish manufacturing firms. The results suggest that “extensive” subsidies are a feasible and efficient tool for expanding the share of R&D firms.
Abstract:
Calculating explicit closed-form solutions of Cournot models where firms have private information about their costs is, in general, very cumbersome. Most authors therefore consider linear demands and constant marginal costs. However, within this framework, the nonnegativity constraint on prices (and quantities) has been ignored or not properly dealt with, and the correct calculation of all Bayesian Nash equilibria is more complicated than expected. Moreover, multiple symmetric and interior Bayesian equilibria may exist for an open set of parameters. The reason for this is that linear demand is not really linear, since there is a kink at zero price: the general "linear" inverse demand function is P(Q) = max{a - bQ, 0} rather than P(Q) = a - bQ.
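The kink the abstract points to is easy to visualize: the inverse demand is linear only up to Q = a/b, and zero afterwards. A minimal sketch (parameter values are illustrative):

```python
# Kinked "linear" inverse demand from the abstract: P(Q) = max(a - b*Q, 0).
# The function is linear up to Q = a/b and identically zero past that kink.
def inverse_demand(q, a=10.0, b=2.0):
    """Return the price for quantity q under kinked linear demand."""
    return max(a - b * q, 0.0)

print(inverse_demand(3.0))   # 4.0 (on the linear segment)
print(inverse_demand(6.0))   # 0.0 (past the kink at Q = a/b = 5)
```

It is this zero branch, ignored when one writes P(Q) = a - bQ, that complicates the computation of the Bayesian Nash equilibria.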
Abstract:
Conflicts of interest between majority and minority stockholders affect a large proportion of firms in any economy, but have received little attention in the empirical literature. We examine the link between the potential for such conflicts and the firm's payout policy on a large sample of Norwegian private firms with controlling stockholders and detailed ownership data. Our evidence shows that the stronger the potential conflict between the stockholders, the higher the proportion of earnings paid out as dividends. This tendency to reduce stockholder conflicts by dividend payout is more pronounced when the minority is diffuse and when a family's majority block is held by a single family member. We also find evidence that a minority-friendly payout policy is associated with higher future minority investment in the firm. These results are consistent with the notion that potential agency costs of ownership are mitigated by dividend policy when the majority stockholder benefits from not exploiting the minority.
Abstract:
We investigate the effects of the financial crisis on the stationarity of real interest rates in the Euro Area. We use a new unit root test developed by Pesaran et al. (2013) that allows for multiple unobserved factors in a panel setup. Our results suggest that while short-term and long-term real interest rates were stationary before the financial crisis, they became nonstationary during the crisis period, likely due to the persistent risk that characterized financial markets during that time. JEL codes: E43, C23. Keywords: Real interest rates, Euro Area, financial crisis, panel unit root tests, cross-sectional dependence.
Abstract:
We report evidence that salience may have economically significant effects on homeowners' borrowing behavior, through a bias in favour of less salient but more costly loans. Survey evidence corroborates the existence of such a bias. We outline a simple model in which some consumers are biased and show that under plausible assumptions this affects prices in equilibrium. Market data support the predictions of the model.
Abstract:
We analyse credit market equilibrium when banks screen loan applicants. When banks have a convex cost function of screening, a pure strategy equilibrium exists where banks optimally set interest rates at the same level as their competitors. This result complements Broecker's (1990) analysis, where he demonstrates that no pure strategy equilibrium exists when banks have zero screening costs. In our setup we show that interest rates on loans are largely independent of marginal costs, a feature consistent with the extant empirical evidence. In equilibrium, banks make positive profits in our model in spite of the threat of entry by inactive banks. Moreover, an increase in the number of active banks increases credit risk and so does not improve credit market efficiency: this point has important regulatory implications. Finally, we extend our analysis to the case where banks have differing screening abilities.
Abstract:
In this paper we present a simple theory-based measure of the variations in aggregate economic efficiency: the gap between the marginal product of labor and the household's consumption/leisure tradeoff. We show that this indicator corresponds to the inverse of the markup of price over social marginal cost, and give some evidence in support of this interpretation. We then show that, with some auxiliary assumptions, our gap variable may be used to measure the efficiency costs of business fluctuations. We find that the latter costs are modest on average. However, to the extent the flexible price equilibrium is distorted, the gross efficiency losses from recessions and gains from booms may be large. Indeed, we find that the major recessions involved large efficiency losses. These results hold for reasonable parameterizations of the Frisch elasticity of labor supply, the coefficient of relative risk aversion, and steady state distortions.
Abstract:
We study the effects of nominal debt on the optimal sequential choice of monetary policy. When the stock of debt is nominal, the incentive to generate unanticipated inflation increases the cost of the outstanding debt even if no unanticipated inflation episodes occur in equilibrium. Without full commitment, the optimal sequential policy is to deplete the outstanding stock of debt progressively until these extra costs disappear. Nominal debt is therefore a burden on monetary policy, not only because it must be serviced, but also because it creates a time inconsistency problem that distorts interest rates. The introduction of alternative forms of taxation may lessen this burden, if there is enough commitment to fiscal policy. If there is full commitment to an optimal fiscal policy, then the resulting monetary policy is the Friedman rule of zero nominal interest rates.
Abstract:
Manipulation of government finances for the benefit of narrowly defined groups is usually thought to be limited to the part of the budget over which politicians exercise discretion in the short run, such as earmarks. Analyzing a revenue-sharing program between the central and local governments in Brazil that uses an allocation formula based on local population estimates, I document two main results: first, that the population estimates entering the formula were manipulated and second, that this manipulation was political in nature. Consistent with swing-voter targeting by the right-wing central government, I find that municipalities with roughly equal right-wing and non-right-wing vote shares benefited relative to opposition or conservative core support municipalities. These findings suggest that the exclusive focus on discretionary transfers in the extant empirical literature on special-interest politics may understate the true scope of tactical redistribution that is going on under programmatic disguise.
Abstract:
We argue that in the development of the Western legal system, cognitive departures are the main determinant of the optimal degree of judicial rule-making. Judicial discretion, seen here as the main distinguishing feature between both legal systems, is introduced in civil law jurisdictions to protect, rather than to limit, freedom of contract against potential judicial backlash. Such protection was unnecessary in common law countries, where free-market relations enjoyed safer judicial ground mainly due to their relatively gradual evolution, their reliance on practitioners as judges, and the earlier development of institutional checks and balances that supported private property rights. In our framework, differences in costs and benefits associated with self-interest and lack of information require a cognitive failure to be active.
Abstract:
The spectacular failure of top-rated structured finance products has brought renewed attention to the conflicts of interest of Credit Rating Agencies (CRAs). We model both the CRA conflict of understating credit risk to attract more business, and the issuer conflict of purchasing only the most favorable ratings (issuer shopping), and examine the effectiveness of a number of proposed regulatory solutions for CRAs. We find that CRAs are more prone to inflate ratings when there is a larger fraction of naive investors in the market who take ratings at face value, or when CRA expected reputation costs are lower. To the extent that in booms the fraction of naive investors is higher, and the reputation risk for CRAs of getting caught understating credit risk is lower, our model predicts that CRAs are more likely to understate credit risk in booms than in recessions. We also show that, due to issuer shopping, competition among CRAs in a duopoly is less efficient (conditional on the same equilibrium CRA rating policy) than having a monopoly CRA, in terms of both total ex-ante surplus and investor surplus. Allowing tranching decreases total surplus further. We argue that regulatory intervention requiring upfront payments for rating services (before CRAs propose a rating to the issuer) combined with mandatory disclosure of any rating produced by CRAs can substantially mitigate the conflicts of interest of both CRAs and issuers.
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability." Forthcoming in Advances in Applied Probability Vol. 33 No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL), which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
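For the linear-cost special case mentioned in the abstract, classical Smith's rule (1956) orders jobs by nonincreasing ratio of holding-cost rate to processing time; the paper's dynamic indices generalize this to convex costs. A minimal sketch of the classical rule, with made-up job data:

```python
# Classical Smith's rule for single-machine scheduling with linear holding
# costs: sequence jobs in nonincreasing order of cost_rate / proc_time.
# (The abstract's dynamic indices extend this to convex, class-dependent costs.)
def smith_order(jobs):
    """jobs: list of (name, cost_rate, proc_time); returns the optimal order."""
    return sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)

# Hypothetical jobs: ratios are A=1.5, B=1.0, C=4.0, so C goes first.
jobs = [("A", 3.0, 2.0), ("B", 1.0, 1.0), ("C", 4.0, 1.0)]
print([name for name, _, _ in smith_order(jobs)])  # ['C', 'A', 'B']
```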