919 results for Transaction-cost theory
Abstract:
In this paper, I examine Varian’s treatment of rent in his textbook on Microeconomics. I argue that he holds contradictory conceptions: sometimes rent is defined as surplus over cost, whereas sometimes it is defined as cost, namely the opportunity cost of fixed factors. I start by arguing that the distinction between fixed and variable factors is not the key to the definition of rent; ultimately, it is monopoly. Varian’s conception of rent is, essentially, Ricardo’s: rent is extraordinary profit turned rent. On the basis of a self-inconsistent notion of opportunity cost, Varian introduces the idea that rent is the opportunity cost of land, when what he actually defines is the opportunity cost of not renting the land. I also critically examine the related notion of “producer’s surplus” and show that Varian’s treatment repeats the same contradiction as in the case of rent.
Abstract:
Based on an analysis of Varian’s textbook on Microeconomics, which I take to be representative of the standard view, I argue that Varian provides two contrary notions of profit, namely, profit as surplus over cost and profit as cost. Varian starts by defining profit as the surplus of revenues over cost and, thus, as the part of the value of commodities that is not any cost; however, he provides a second definition of profit as a cost, namely, as the opportunity cost of capital. I also argue that the definition of competitive profit as the opportunity cost of capital involves a self-contradictory notion of opportunity cost.
Abstract:
This thesis presents theories, analyses, and algorithms for detecting and estimating parameters of geospatial events with today's large, noisy sensor networks. A geospatial event is initiated by a significant change in the state of points in a region of 3-D space over an interval of time. After the event is initiated, it may change the state of points over larger regions and longer periods of time. Networked sensing is a typical approach to geospatial event detection. In contrast to traditional sensor networks comprised of a small number of high-quality (and expensive) sensors, trends in personal computing devices and consumer electronics have made it possible to build large, dense networks at low cost. The changes in sensor capability, network composition, and system constraints call for new models and algorithms suited to the opportunities and challenges of the new generation of sensor networks. This thesis offers a single unifying model and a Bayesian framework for analyzing different types of geospatial events in such noisy sensor networks. It presents algorithms and theories for estimating the speed and accuracy of detecting geospatial events as a function of parameters of both the underlying geospatial system and the sensor network. Furthermore, the thesis addresses network scalability by presenting rigorous, scalable algorithms for data aggregation in detection. These studies provide insights into the design of networked sensing systems for detecting geospatial events. In addition to providing an overarching framework, this thesis presents theories and experimental results for two very different geospatial problems: detecting earthquakes and hazardous radiation. The general framework is applied to these specific problems, and predictions based on the theories are validated against measurements of systems in the laboratory and in the field.
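To make the Bayesian detection idea concrete, here is a minimal, editor-added sketch (not the thesis's actual algorithm) of fusing independent, noisy binary sensor reports into a posterior probability that an event has occurred, assuming known per-sensor true-positive and false-positive rates; all parameter values are illustrative placeholders.

```python
import numpy as np

def event_posterior(reports, p_detect, p_false_alarm, prior=0.01):
    """Posterior probability of an event given independent binary sensor reports.

    reports        -- array of 0/1 sensor outputs
    p_detect       -- per-sensor P(report = 1 | event)     (true-positive rate)
    p_false_alarm  -- per-sensor P(report = 1 | no event)  (false-positive rate)
    prior          -- prior probability that an event occurred
    """
    reports = np.asarray(reports)
    # Accumulate the log-likelihood ratio over independent sensors.
    llr = np.sum(
        reports * np.log(p_detect / p_false_alarm)
        + (1 - reports) * np.log((1 - p_detect) / (1 - p_false_alarm))
    )
    log_odds = np.log(prior / (1 - prior)) + llr
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: 100 cheap sensors with a 70% detection rate and a 5% false-alarm rate.
rng = np.random.default_rng(0)
reports = (rng.random(100) < 0.7).astype(int)   # simulated reports during an event
print(event_posterior(reports, p_detect=0.7, p_false_alarm=0.05))
```

Declaring a detection once the posterior (or the accumulated log-odds) crosses a threshold trades off detection speed against false-alarm rate, the two quantities the thesis analyzes as functions of the geospatial-system and network parameters.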
Abstract:
Kohn-Sham density functional theory (KSDFT) is currently the main workhorse of quantum mechanical calculations in physics, chemistry, and materials science. From a mechanical engineering perspective, we are interested in studying the role of defects in the mechanical properties of materials. In real materials, defects are typically found at very small concentrations: vacancies occur at parts per million, dislocation densities in metals range from $10^{10}\,\mathrm{m}^{-2}$ to $10^{15}\,\mathrm{m}^{-2}$, and grain sizes in polycrystalline materials vary from nanometers to micrometers. In order to model materials at realistic defect concentrations using DFT, we would need to work with system sizes beyond millions of atoms. Due to the cubic-scaling computational cost with respect to the number of atoms in conventional DFT implementations, such system sizes are unreachable. Since the early 1990s, there has been great interest in developing DFT implementations with linear-scaling computational cost. A promising approach to achieving linear-scaling cost is to approximate the density matrix in KSDFT. The focus of this thesis is to provide a firm mathematical framework for studying the convergence of these approximations. We reformulate Kohn-Sham density functional theory as a nested variational problem in the density matrix, the electrostatic potential, and a field dual to the electron density. The corresponding functional is linear in the density matrix and thus amenable to spectral representation. Based on this reformulation, we introduce a new approximation scheme, called spectral binning, which does not require smoothing of the occupancy function and thus applies at arbitrarily low temperatures. We prove convergence of the approximate solutions with respect to spectral binning and with respect to an additional spatial discretization of the domain. For a standard one-dimensional benchmark problem, we present numerical experiments in which spectral binning exhibits excellent convergence characteristics and outperforms other linear-scaling methods.
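For context, here is a minimal editor-added sketch of the conventional, cubic-scaling route that linear-scaling methods such as spectral binning aim to avoid: building the finite-temperature density matrix from a full diagonalization with Fermi-Dirac occupancies. The tridiagonal Hamiltonian below is a generic tight-binding placeholder, not a Kohn-Sham operator, and the sketch says nothing about spectral binning itself.

```python
import numpy as np

def density_matrix(H, mu, kT):
    """Density matrix D = f((H - mu) / kT) via full diagonalization, O(N^3)."""
    eps, psi = np.linalg.eigh(H)                     # cubic-scaling step
    occ = 1.0 / (1.0 + np.exp((eps - mu) / kT))      # Fermi-Dirac occupancies
    return (psi * occ) @ psi.T                       # D = sum_i f_i |psi_i><psi_i|

# Toy one-dimensional tight-binding Hamiltonian standing in for the Kohn-Sham operator.
N = 200
H = (np.diag(np.full(N, 2.0))
     + np.diag(np.full(N - 1, -1.0), 1)
     + np.diag(np.full(N - 1, -1.0), -1))
D = density_matrix(H, mu=2.0, kT=0.05)
print(np.trace(D))   # electron count implied by this chemical potential
```

Linear-scaling schemes replace the diagonalization by approximating f(H) directly; the thesis's contribution is a variational setting in which the convergence of such approximations can be proven.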
Abstract:
It is generally accepted that co-management systems are more cost-effective than centralized management of natural resources. However, no attempts have been made to empirically verify the transaction costs involved in fisheries co-management. Some estimates of the transaction costs of fisheries co-management on San Salvador Island, Philippines, are presented in this paper. These estimates are used to compare the various transaction costs of co-managed and centrally managed fisheries on San Salvador Island.
Abstract:
Future high speed communications networks will transmit data predominantly over optical fibres. As consumer and enterprise computing will remain the domain of electronics, the electro-optical conversion will get pushed further downstream towards the end user. Consequently, efficient tools are needed for this conversion and, due to many potential advantages, including low cost and high output powers, long wavelength Vertical Cavity Surface Emitting Lasers (VCSELs) are a viable option. Drawbacks, such as broader linewidths than competing options, can be mitigated through the use of additional techniques such as Optical Injection Locking (OIL), which can require significant expertise and expensive equipment. This thesis addresses these issues by removing some of the experimental barriers to achieving performance increases via remote OIL. Firstly, numerical simulations of the phase and the photon and carrier numbers of an OIL semiconductor laser allowed the classification of the stable locking phase limits into three distinct groups. The frequency detuning of constant phase values (φ) was considered, in particular φ = 0, where the modulation response parameters were shown to be independent of the linewidth enhancement factor, α. A new method to estimate α and the coupling rate in a single experiment was formulated. Secondly, a novel technique to remotely determine the locked state of a VCSEL based on voltage variations of 2 mV–30 mV during detuned injection has been developed which can identify oscillatory and locked states. 2D and 3D maps of voltage, optical and electrical spectra illustrate the corresponding behaviours. Finally, the use of directly modulated VCSELs as light sources for passive optical networks was investigated by successful transmission of data at 10 Gbit/s over 40 km of single mode fibre (SMF) using cost effective electronic dispersion compensation to mitigate errors due to wavelength chirp. A widely tuneable MEMS-VCSEL was established as a good candidate for an externally modulated colourless source after a record error-free transmission at 10 Gbit/s over 50 km of SMF across a 30 nm single mode tuning range. The ability to remotely set the emission wavelength using the novel methods developed in this thesis was demonstrated.
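As a point of reference for the simulations mentioned above, here is a minimal, editor-added sketch of one common form of the three-variable injection-locking rate equations (photon number, phase, carrier number). The thesis's model and parameter values are not reproduced here: all numbers below are generic order-of-magnitude placeholders, and sign conventions for the detuning and phase vary between authors.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder, order-of-magnitude laser parameters (not the thesis's values).
g_n   = 1.0e4            # differential gain per carrier (1/s)
n_tr  = 1.0e8            # transparency carrier number
tau_p = 2.0e-12          # photon lifetime (s)
tau_n = 2.0e-9           # carrier lifetime (s)
alpha = 3.0              # linewidth enhancement factor
k_c   = 1.0e11           # master-slave coupling rate (1/s)
s_inj = 1.0e4            # injected photon number
d_w   = 2 * np.pi * 2e9  # master-slave angular frequency detuning (rad/s)
pump  = 1.25e17          # pump term I/q (carriers/s)

def rates(t, y):
    s, phi, n = y
    gain = g_n * (n - n_tr)
    ds   = (gain - 1 / tau_p) * s + 2 * k_c * np.sqrt(s_inj * s) * np.cos(phi)
    dphi = 0.5 * alpha * (gain - 1 / tau_p) - d_w - k_c * np.sqrt(s_inj / s) * np.sin(phi)
    dn   = pump - n / tau_n - gain * s
    return [ds, dphi, dn]

# Start near the free-running steady state and let the injection pull the phase.
y0 = [1.0e5, 0.0, 1.5e8]
sol = solve_ivp(rates, (0.0, 5e-9), y0, method="LSODA", max_step=1e-12)
print("final phase (rad):", sol.y[1, -1])   # a constant final phase indicates locking
```

Sweeping the detuning and injection strength in such a model traces out the stable-locking boundaries whose classification is described above.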
Abstract:
A key component in calculations of exchange and correlation energies is the Coulomb operator, which requires the evaluation of two-electron integrals. For localized basis sets, these four-center integrals are most efficiently evaluated with the resolution of identity (RI) technique, which expands basis-function products in an auxiliary basis. In this work we show the practical applicability of a localized RI-variant ('RI-LVL'), which expands products of basis functions only in the subset of those auxiliary basis functions which are located at the same atoms as the basis functions. We demonstrate the accuracy of RI-LVL for Hartree-Fock calculations, for the PBE0 hybrid density functional, as well as for RPA and MP2 perturbation theory. Molecular test sets used include the S22 set of weakly interacting molecules, the G3 test set, as well as the G2-1 and BH76 test sets, and heavy elements including titanium dioxide, copper and gold clusters. Our RI-LVL implementation paves the way for linear-scaling RI-based hybrid functional calculations for large systems and for all-electron many-body perturbation theory with significantly reduced computational and memory cost.
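For orientation, the standard ("RI-V") factorization that the RI technique performs is shown below; RI-LVL further restricts the auxiliary sum for each product to functions centred on the two atoms carrying the orbital basis functions. This is an editor-added reminder of the generic formula, not the paper's working equations.

```latex
% Standard RI factorization of the four-center Coulomb integrals:
% basis-function products are expanded in an auxiliary basis {xi_P}.
\begin{align}
  \varphi_i(\mathbf{r})\,\varphi_j(\mathbf{r}) &\approx \sum_{P} C^{P}_{ij}\,\xi_P(\mathbf{r}), \\
  (ij\,|\,kl) &\approx \sum_{P,Q} (ij\,|\,P)\,\bigl[V^{-1}\bigr]_{PQ}\,(Q\,|\,kl),
  \qquad V_{PQ} = (P\,|\,Q),
\end{align}
% where (..|..) denotes Coulomb integrals; in RI-LVL the sums over P and Q run
% only over auxiliary functions located on the atoms carrying the basis functions.
```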
Abstract:
The article presents cost modeling results from the application of the Genetic-Causal cost modeling principle. Industrial results from redesign are also presented to verify the opportunity for early concept cost optimization by using Genetic-Causal cost drivers to guide the conceptual design process for structural assemblies. The acquisition cost is considered through the modeling of the recurring unit cost and the non-recurring design cost. The operational cost is modeled relative to acquisition cost and fuel burn for predominantly metal or composite designs. The main contribution of this study is the application of the Genetic-Causal principle to the modeling of cost, helping to understand how conceptual design parameters impact cost, and linking that to customer requirements and life cycle cost.
Abstract:
Hydrogenation reactions, among the simplest association reactions on surfaces, are of great importance both scientifically and technologically. They are essential steps in many industrial processes in heterogeneous catalysis, such as ammonia synthesis (N2 + 3H2 → 2NH3). Many issues in hydrogenation reactions remain largely elusive. In this work, the NHx (x = 0, 1, 2) hydrogenation reactions (N + H → NH, NH + H → NH2, and NH2 + H → NH3) on Rh(111) are used as a model system to study hydrogenation reactions on metal surfaces in general using density-functional theory. In addition, C and O hydrogenation (C + H → CH and O + H → OH) and several oxygenation reactions, i.e., the C + O, N + O, and O + O reactions, are also calculated in order to provide a further understanding of the barriers of association reactions. The reaction pathways and the barriers of all these reactions are determined and reported. For the C, N, NH, and O hydrogenation reactions, it is found that there is a linear relationship between the barrier and the valency of R (R = C, N, NH, and O). Detailed analyses are carried out to rationalize the barriers of the reactions, which show that: (i) the interaction energy between the two reactants in the transition state plays an important role in determining the trend in the barriers; (ii) there are two major components in the interaction energy, the bonding competition and the direct Pauli repulsion; and (iii) the Pauli repulsion effect is responsible for the linear valency-barrier trend in the C, N, NH, and O hydrogenation reactions. For the NH2 + H reaction, which differs from the other hydrogenation reactions studied, the energy cost of activating NH2 from the initial state (IS) to the transition state (TS) is the main part of the barrier. The potential energy surface of NH2 on metal surfaces is thus crucial to the barrier of the NH2 + H reaction. Three important factors that can affect the barriers of association reactions are generalized: (i) the bonding competition effect; (ii) the local charge densities of the reactants along the reaction direction; and (iii) the potential energy surface of the reactants on the surface. The lowest energy pathway for a surface association reaction should correspond to the one with the best compromise of these three factors.
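The barrier analysis summarized above follows the usual decomposition of an association barrier into the activation costs of the two reactants plus their interaction energy at the transition state; the editor-added equation below states this generic decomposition (symbols follow common usage rather than the paper's exact notation).

```latex
% Decomposition of the association barrier for R + H -> RH (R = C, N, NH, NH2):
% each reactant is first activated from its initial-state (IS) site to its
% transition-state (TS) site, and the two activated species then interact at the TS.
\begin{equation}
  E_a \;=\; \Delta E_{\mathrm{R}}^{\,\mathrm{IS \to TS}}
        \;+\; \Delta E_{\mathrm{H}}^{\,\mathrm{IS \to TS}}
        \;+\; E_{\mathrm{int}},
\end{equation}
% E_int is the TS interaction energy, containing the bonding-competition and
% direct Pauli-repulsion contributions discussed above; for NH2 + H the
% activation term of NH2 dominates, while for the other reactions E_int sets the trend.
```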
Abstract:
Query processing over the Internet involving autonomous data sources is a major task in data integration. It requires estimates of the costs of possible queries in order to select the best one, that is, the one with the minimum cost. In this context, the cost of a query is affected by three factors: network congestion, server contention state, and the complexity of the query. In this paper, we study the effects of both network congestion and server contention state on the cost of a query. We refer to these two factors together as system contention states. We present a new approach to determining the system contention states by clustering the costs of a sample query. For each system contention state, we construct two cost formulas, for unary and join queries respectively, using the multiple regression process. When a new query is submitted, its system contention state is first estimated using either the time slides method or the statistical method. The cost of the query is then calculated using the corresponding cost formulas. The estimated cost of the query is further adjusted to improve its accuracy. Our experiments show that our methods can produce quite accurate cost estimates for queries submitted to remote data sources over the Internet.
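Here is an editor-added sketch of the overall pipeline described above: contention states are found by clustering the observed costs of a sample query, and one regression-based cost formula is fitted per state. The separate formulas for unary and join queries, the time-slides state estimator, and the final adjustment step are simplified away, and all data below are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder probes of a remote source: per-probe cost of a sample query plus
# query-complexity features (e.g. estimated result size, number of operands).
n = 300
features = rng.random((n, 2))
hidden_contention = rng.integers(0, 3, size=n)          # unknown to the estimator
costs = (1.0 + 2.0 * hidden_contention
         + features @ np.array([0.5, 1.5])
         + 0.1 * rng.standard_normal(n))

# 1. Determine system contention states by clustering the sample-query costs.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(costs.reshape(-1, 1))

# 2. Fit one multiple-regression cost formula per contention state.
formulas = {s: LinearRegression().fit(features[km.labels_ == s],
                                      costs[km.labels_ == s])
            for s in range(km.n_clusters)}

# 3. For a new query: estimate the current contention state (here, naively, the
#    state of the most recent probe), then apply that state's cost formula.
state = km.labels_[-1]
new_query = features[-1:].copy()
print("estimated cost:", float(formulas[state].predict(new_query)[0]))
```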
Abstract:
For a digital echo canceller, it is desirable to reduce the adaptation time, during which the transmission of useful data is not possible. LMS is a non-optimal algorithm in this case, as the signals involved are statistically non-Gaussian. Walach and Widrow (IEEE Trans. Inform. Theory 30 (2) (March 1984) 275-283) investigated the use of a power of 4, while other research established algorithms with arbitrary integer power (Pei and Tseng, IEEE J. Selected Areas Commun. 12 (9) (December 1994) 1540-1547) or non-quadratic power (Shah and Cowan, IEE Proc.-Vis. Image Signal Process. 142 (3) (June 1995) 187-191). This paper suggests that continuous and automatic adaptation of the error exponent gives a more satisfactory result. The family of cost function adaptation (CFA) stochastic gradient algorithms proposed allows an increase in convergence rate and an improvement in residual error. As a special case, the staircase CFA algorithm is presented first; the smooth CFA is then developed. Details of implementations are also discussed. Simulation results are provided to show the properties of the proposed family of algorithms.
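For reference, here is an editor-added sketch of an adaptive FIR filter driven by a fixed |e|^p cost (p = 2 recovers LMS; p = 4 gives the least-mean-fourth algorithm investigated by Walach and Widrow). The continuous, automatic adaptation of the exponent itself, which is the paper's contribution, is not reproduced here, and the echo path, step size, and signals are illustrative placeholders.

```python
import numpy as np

def power_p_filter(x, d, p=4, mu=1e-4, taps=32):
    """Adaptive FIR filter minimizing E|e|^p by stochastic gradient descent.

    x    -- far-end (reference) signal
    d    -- desired signal (echo plus noise)
    p    -- error exponent of the cost function |e|^p
    mu   -- step size
    taps -- filter length
    """
    w = np.zeros(taps)
    e = np.zeros(len(x))
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]                  # current input vector
        e[n] = d[n] - w @ u                      # a-priori error
        # Stochastic gradient of |e|^p: -p * |e|^(p-1) * sign(e) * u
        w += mu * p * np.abs(e[n]) ** (p - 1) * np.sign(e[n]) * u
    return e, w

# Toy echo-path identification with binary far-end data (placeholder echo path).
rng = np.random.default_rng(1)
h = rng.standard_normal(32) * np.exp(-0.2 * np.arange(32))
x = np.sign(rng.standard_normal(20000))
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
e, w = power_p_filter(x, d, p=4, mu=1e-4)
print("residual error power:", np.mean(e[-1000:] ** 2))
```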
Abstract:
This study presents a new method for determining transmission network usage by loads and generators, which can then be used for transmission cost/loss allocation in an explainable and justifiable manner. The proposed method is based on solid physical grounds and circuit theory. It relies on dividing the currents through the network into two components: the first is attributed to power flows from generators to loads, whereas the second is due to the generators alone. Unlike almost all available methods, the proposed method is assumption-free and hence more accurate than similar methods, even those having some physical basis. The proposed method is validated through a transformer analogy and theoretical derivations, and it is verified through application to the IEEE 30-bus system and the IEEE 118-bus test system. The results verify many desirable features of the proposed method: greater accuracy in determining network usage, an explainable and transparent allocation, and accurate cost signals that indicate the best locations at which to add loads and generation.
Abstract:
Shared services are a popular reform for governments under financial pressure. The hope is to reduce overheads and increase efficiency by providing support services such as HR, finance and procurement once, on behalf of multiple agencies. Drawing on insights from organization theory and political science, we identify five risks that shared services will not live up to current expectations. We illustrate each with empirical evidence from the UK, Ireland and further afield, and conclude with suggestions on how to manage these risks.
Abstract:
Local-level planning requires statistics for small areas, but, owing to cost or logistic constraints, sample surveys are usually designed to provide reliable estimates only for large geographical regions and large subgroups of a population.