961 results for Transaction cost theory
Abstract:
Conducts a strategic group mapping exercise by analysing R&D investment, sales/marketing cost and leadership information pertaining to the pharmaceuticals industry. Explains that strategic group mapping assists companies in identifying their principal competitors, and hence supports strategic decision-making, and shows that, in the pharmaceutical industry, R&D spending, the cost of sales and marketing, i.e. detailing, and technological leadership are mobility barriers to companies moving between sectors. Illustrates, in bubble-chart format, strategic groups in the pharmaceutical industry, plotting detailing costs against the scale of activity in therapeutic areas. Places companies into 12 groups, and profiles the strategy and market-position similarities of the companies in each group. Concludes with three questions for companies to ask when evaluating their own, and their competitors', strategies and returns, and suggests that strategy mapping can be carried out in other industries, provided mobility barriers are identified.
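For readers who want to reproduce this kind of map, a minimal bubble-chart sketch in Python/matplotlib follows; the group names and figures are hypothetical placeholders rather than the article's data, and the axes simply follow the plot described above (detailing cost versus scale of therapeutic activity, with bubble size standing in for R&D spend).

```python
import matplotlib.pyplot as plt

# Hypothetical strategic-group data: (detailing cost, scale of therapeutic
# activity, R&D spend). Illustrative values only, not taken from the article.
groups = {
    "Group A": (8.5, 9.0, 1200),
    "Group B": (6.0, 7.5, 800),
    "Group C": (3.5, 4.0, 300),
}

fig, ax = plt.subplots()
for name, (detailing, scale, rnd) in groups.items():
    ax.scatter(detailing, scale, s=rnd, alpha=0.5)   # bubble area ~ R&D spend
    ax.annotate(name, (detailing, scale))

ax.set_xlabel("Detailing (sales/marketing) cost")
ax.set_ylabel("Scale of activity in therapeutic areas")
ax.set_title("Strategic groups in the pharmaceutical industry (illustrative)")
plt.show()
```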
Abstract:
This paper extends previous analyses of the choice between internal and external R&D to consider the costs of internal R&D. The Heckman two-stage estimator is used to estimate the determinants of internal R&D unit cost (i.e. cost per product innovation) allowing for sample selection effects. Theory indicates that R&D unit cost will be influenced by scale issues and by the technological opportunities faced by the firm. Transaction costs encountered in research activities are allowed for and, in addition, consideration is given to issues of market structure which influence the choice of R&D mode without affecting the unit cost of internal or external R&D. The model is tested on data from a sample of over 500 UK manufacturing plants which have engaged in product innovation. The key determinants of R&D mode are the scale of plant and R&D input, and market structure conditions. In terms of the R&D cost equation, scale factors are again important and have a non-linear relationship with R&D unit cost. Specificities in physical and human capital also affect unit cost, but have no clear impact on the choice of R&D mode. There is no evidence of technological opportunity affecting either R&D cost or the internal/external decision.
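As a rough illustration of the estimation strategy described above, the following Python sketch runs a two-step Heckman-style correction on simulated data: a probit selection equation for the internal/external R&D choice, an inverse Mills ratio, and a selection-corrected OLS cost equation. The variables and coefficients are invented for illustration and are not the paper's data or exact specification.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500

# Simulated data standing in for the plant-level survey: z drives the decision
# to do internal R&D (selection), x drives the R&D unit cost (outcome).
z = rng.normal(size=(n, 2))
x = rng.normal(size=(n, 1))
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)  # correlated errors

select = (0.5 + z @ np.array([1.0, -0.8]) + u[:, 0]) > 0   # internal R&D chosen
cost = 2.0 + 1.5 * x[:, 0] + u[:, 1]                       # unit cost, observed if selected

# Step 1: probit for the internal/external R&D choice.
Z = sm.add_constant(z)
probit = sm.Probit(select.astype(float), Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)                           # inverse Mills ratio

# Step 2: OLS cost equation on the selected sample, correcting for selection.
X = sm.add_constant(np.column_stack([x[select], imr[select]]))
ols = sm.OLS(cost[select], X).fit()
print(ols.summary())
```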
Abstract:
In Great Britain and Brazil healthcare is free at the point of delivery and based only on citizenship. However, the British NHS is fifty-five years old and has undergone extensive reforms. The Brazilian SUS is barely fifteen years old. This research investigated the mediating role of middle management within hospitals, comparing managerial planning and control using cost information in Great Britain and Brazil. The investigation was conducted in two stages entailing quantitative and qualitative techniques. The first stage was a survey involving managers of 26 NHS Trusts in Great Britain and 22 public hospitals in Brazil. The second stage consisted of interviews, 10 in Great Britain and 22 in Brazil, conducted in four selected hospitals, two in each country. The research builds on the literature by investigating the interaction of contingency theory and modes of governance in a cross-national study of public hospitals. It further builds on the existing literature by measuring managerial dimensions related to cost information usefulness. The project unveils the practice involved in planning and control processes. It highlights important elements such as the use of predictive models and uncertainty reduction when planning, and it uncovers the different mechanisms employed in control processes. It also shows that planning and control within British hospitals are structured procedures guided by overall goals. In contrast, planning and control processes in Brazilian hospitals are accidental, involving more ad hoc actions and a profusion of goals. The clinicians in British hospitals have been integrated into the management hierarchy, and their use of cost information in planning and control processes reflects this integration. In Brazil, however, clinicians have been shown to operate more independently and make little use of cost information, although the potential they signal for using cost information appears even greater than that of their British counterparts.
Abstract:
Meta-analysis was used to quantify how well the Theories of Reasoned Action and Planned Behaviour have predicted intentions to attend screening programmes and actual attendance behaviour. Systematic literature searches identified 33 studies that were included in the review. Across the studies as a whole, attitudes had a large-sized relationship with intention, while subjective norms and perceived behavioural control (PBC) possessed medium-sized relationships with intention. Intention had a medium-sized relationship with attendance, whereas the PBC-attendance relationship was small-sized. Due to heterogeneity in results between studies, moderator analyses were conducted. The moderator variables were (a) type of screening test, (b) location of recruitment, (c) screening cost and (d) invitation to screen. All moderators affected theory of planned behaviour relationships. Suggestions for future research emerging from these results include targeting attitudes to promote intention to screen, making greater use of implementation intentions in screening information, and examining the credibility of different screening providers.
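As a hedged illustration of the kind of pooling such a meta-analysis of correlations involves, the sketch below combines a few made-up study correlations with a fixed-effect Fisher's z approach; the r and n values are not taken from the 33 reviewed studies, and the review itself may have used a different pooling model.

```python
import numpy as np

# Illustrative fixed-effect pooling of correlation coefficients via Fisher's z.
r = np.array([0.45, 0.52, 0.38, 0.60])   # per-study correlations (invented)
n = np.array([120, 210, 95, 150])        # per-study sample sizes (invented)

z = np.arctanh(r)                        # Fisher's z transform
w = n - 3                                # inverse-variance weights (var of z = 1/(n-3))
z_pooled = np.sum(w * z) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
r_pooled = np.tanh(z_pooled)
ci = np.tanh([z_pooled - 1.96 * se, z_pooled + 1.96 * se])
print(f"pooled r = {r_pooled:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```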
Abstract:
Renewable energy project development is highly complex and success is by no means guaranteed. Decisions are often made with approximate or uncertain information, yet the methods currently employed by decision-makers do not necessarily accommodate this. Levelised energy costs (LEC) are a commonly applied measure used within the energy industry to assess the viability of potential projects and inform policy. The research proposes a method for accommodating such uncertainty by enhancing the traditional discounting-based LEC measure with fuzzy set theory. Furthermore, the research develops the fuzzy LEC (F-LEC) methodology to incorporate the cost of financing a project from debt and equity sources. Applied to an example bioenergy project, the research demonstrates the benefit of incorporating fuzziness for decision-making on project viability, optimal capital structure and key-variable sensitivity analysis. The proposed method contributes by incorporating uncertain and approximate information into the widely used LEC measure and by being applicable to a wide range of energy project viability decisions.
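A minimal sketch of the underlying idea, assuming triangular fuzzy annual costs propagated bound-by-bound through the standard discounted LEC ratio, follows; the figures are invented and the sketch is not the paper's F-LEC methodology (which additionally treats debt and equity financing costs).

```python
import numpy as np

# Fuzzified levelised energy cost: annual costs and capex are triangular fuzzy
# numbers (low, mode, high); energy output and the discount rate are crisp.
years = 20
rate = 0.08
annual_cost = (1.8e6, 2.0e6, 2.4e6)      # operating cost per year (low, mode, high)
capex = (9e6, 10e6, 12e6)                # up-front cost (low, mode, high)
annual_energy = 3.5e7                    # kWh per year

discount = np.array([(1 + rate) ** -t for t in range(1, years + 1)])
pv_energy = annual_energy * discount.sum()

def lec_bound(capex_b, cost_b):
    # Present value of costs divided by present value of energy, for one bound.
    return (capex_b + cost_b * discount.sum()) / pv_energy

f_lec = tuple(lec_bound(c0, c1) for c0, c1 in zip(capex, annual_cost))
print("Fuzzy LEC (low, mode, high) per kWh:", [round(v, 4) for v in f_lec])
```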
Abstract:
We apply cooperative game theory concepts to analyze a Holt-Modigliani-Muth-Simon (HMMS) supply chain. The bullwhip effect in a two-stage (supplier-manufacturer) supply chain is considered in the framework of the HMMS model with quadratic inventory and production cost functions. It is assumed that both firms minimize their relevant costs, and two cases are examined: a decentralized (hierarchical) setting, in which the manufacturer optimizes first and the supplier then optimizes its own position, and a centralized (cooperative) setting in which the two firms minimize their joint cost. The question of how to share the savings from the reduced bullwhip effect in the centralized (cooperative) model is answered with the weighted Shapley value, a transferable-utility cooperative game theory tool in which the weights represent the exogenously given “bargaining powers” of the supply chain participants.
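For a two-player chain the weighted Shapley allocation has a simple closed form: each firm pays its stand-alone cost minus a share of the cooperative savings proportional to its bargaining weight. The sketch below illustrates this with invented cost figures and weights.

```python
# Sharing bullwhip-related cost savings in a two-stage (supplier-manufacturer)
# chain with a weighted Shapley value. Figures and weights are illustrative.

def weighted_shapley_cost_shares(standalone, joint_cost, weights):
    """Two-player weighted Shapley allocation of a cost game.

    Each player pays its stand-alone cost minus a share of the cooperative
    savings proportional to its exogenous bargaining weight.
    """
    savings = sum(standalone.values()) - joint_cost
    total_w = sum(weights.values())
    return {i: standalone[i] - weights[i] / total_w * savings for i in standalone}

standalone = {"supplier": 120.0, "manufacturer": 200.0}   # decentralized optima
joint_cost = 290.0                                        # centralized (cooperative) optimum
weights = {"supplier": 1.0, "manufacturer": 2.0}          # exogenous bargaining powers

print(weighted_shapley_cost_shares(standalone, joint_cost, weights))
# -> supplier pays 110.0, manufacturer pays 180.0; the 30.0 saving is split 1:2
```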
Abstract:
Purpose – The paper aims to explore the gap between theory and practice in foresight and to give some suggestions on how to reduce it. Design/methodology/approach – Analysis of practical foresight activities and suggestions are based on a literature review, the author's own research and practice in the field of foresight and futures studies, and her participation in the work of a European project (COST A22). Findings – Two different types of practical foresight activities have developed. One of them, the practice of foresight of critical futures studies (FCFS), is an application of a theory of futures studies. The other, termed here praxis foresight (PF), has no theoretical basis and responds directly to practical needs. At present, a gap can be perceived between theory and practice. PF distinguishes itself from the practice and theory of FCFS and narrows the construction space of futures. Neither FCFS nor PF deals with content issues of the outer world. Reducing the gap depends on renewing joint discourse and research on the experience of different practical foresight activities and on the manageability of complex dynamics in foresight. Production and feedback of self-reflective and reflective foresight knowledge could improve theory and practice. Originality/value – Contemporary practical foresight activities are analysed and suggestions to reduce the gap are developed in the context of the linkage between theory and practice. The paper is thought-provoking for futurists, foresight managers and university researchers.
Abstract:
Carbon pricing policy is a fundamental, humanly devised, theoretical and practical cornerstone in the fight against climate change, involving both short-term and long-term policies and both theoretical and practical considerations. A quantitative global stabilisation target range for the stock of greenhouse gases in the atmosphere is needed, because it provides an important and useful foundation for shaping a comprehensive climate pricing policy. A global stabilisation target range is clearly a long-term policy for controlling climate change and the consequences of excessive increases in temperature. Setting long-term objectives in the fight against climate change is essential to avoiding catastrophic consequences; short-term policies, which aim at advances in emission reduction, therefore have to be consistent with the pre-defined long-term stabilisation goals. The short-term policy response means using price-driven instruments such as taxes and tradable quotas. These instruments allow broad flexibility in the parameters of emission reduction and provide opportunities and incentives through which the cost of mitigation and abatement can be kept down. Taxes and tradable quotas give flexibility in how, where and when emission reduction is accomplished, so that agreements between states and companies may result in an appropriate, environment-conscious emission scheme that fits the long-term objectives.
Abstract:
The article compares the conclusions from the models of Oliver Hart and his co-authors with Williamson's views on transaction costs. It shows that, on the question of the firm versus the market, the two schools use different tools but reason similarly. The author covers Williamson's criticism of Hart that bargaining carries no transaction costs in Hart's models, and also the criticism of that criticism. Hart's ideas are supported by the recently developed reference-point theory within the property-rights approach, which also offers experimental ways of testing the various assumptions.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
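To make the neighborhood-centric access pattern concrete, the following sketch uses networkx (not the NSCALE API, whose interface is not reproduced here) to extract each vertex's 2-hop ego network and compute a per-subgraph density score on a standard example graph.

```python
import networkx as nx

# Illustrative neighborhood-centric computation: each task operates on a
# multi-hop subgraph around a vertex rather than on a single vertex's state.
G = nx.karate_club_graph()

def ego_density(graph, node, radius=2):
    ego = nx.ego_graph(graph, node, radius=radius)   # the 2-hop neighborhood subgraph
    return nx.density(ego)

scores = {v: ego_density(G, v) for v in G.nodes}
top = sorted(scores, key=scores.get, reverse=True)[:5]
print("Densest 2-hop neighborhoods around nodes:", top)
```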
Abstract:
The purpose of the study was to explore how a public IT-services transferor organization, comprised of autonomous entities, can effectively develop and organize its data center cost recovery mechanisms in a fair manner. The lack of a well-defined model for charges and a cost recovery scheme could cause various problems; for example, one entity may be subsidizing the costs of another entity or entities. Transfer pricing is in the best interest of each autonomous entity in a CCA. While transfer pricing plays a pivotal role in the price setting of services and intangible assets, TCE focuses on the arrangement at the boundary between entities. TCE is concerned with the costs, autonomy, and cooperation issues of an organization; the theory is concerned with the factors that influence intra-firm transaction costs and attempts to expose the problems involved in determining the charges or prices of transactions. This study was carried out as a single case study in a public organization. The organization intended to transfer the IT services of its own affiliated public entities and was in the process of establishing a municipal joint data center. Nine semi-structured interviews, including two pilot interviews, were conducted with the experts and managers of the case company and its affiliated entities. The purpose of these interviews was to explore the charging and pricing issues of the intra-firm transactions. In order to process and summarize the findings, this study employed qualitative techniques with multiple methods of data collection. By reviewing TCE theory and a sample of the transfer pricing literature, the study created an IT services pricing framework as a conceptual tool for illustrating the structure of transferring costs. Antecedents and consequences of the transfer price based on TCE were developed, and an explanatory fair charging model was eventually developed and suggested. The findings of the study suggested that a chargeback system was an inappropriate scheme for an organization with affiliated autonomous entities. The main contribution of the study was the application of TP methodologies in the public sphere, with no consideration of tax issues.
Abstract:
To meet electricity demand, electric utilities develop growth strategies for generation, transmission, and distribution systems. For a long time those strategies have been developed by applying least-cost methodology, in which the cheapest stand-alone resources are simply added, instead of analyzing complete portfolios. As a consequence, least-cost methodology is biased in favor of fossil fuel-based technologies, completely ignoring the benefits of adding non-fossil fuel technologies to generation portfolios, especially renewable energies. For this reason, this thesis introduces modern portfolio theory (MPT) to gain a more profound insight into a generation portfolio's performance using generation cost and risk metrics. We discuss all necessary assumptions and modifications to this finance technique for its application within power systems planning, and we present a real case analysis. Finally, the results of this thesis are summarized, pointing out the main benefits and the scope of this new tool in the context of electricity generation planning.
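A minimal mean-variance sketch of the kind of cost/risk metric this approach rests on is shown below; the technology mix, cost figures and correlations are illustrative assumptions, not the thesis's data.

```python
import numpy as np

# Expected portfolio generating cost and cost risk for a given technology mix,
# in the spirit of applying MPT to generation planning.
techs = ["coal", "gas", "wind"]
cost = np.array([55.0, 70.0, 60.0])        # expected levelised cost, $/MWh (illustrative)
std = np.array([12.0, 20.0, 5.0])          # cost volatility, $/MWh (illustrative)
corr = np.array([[1.0, 0.6, 0.0],
                 [0.6, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
cov = np.outer(std, std) * corr

w = np.array([0.4, 0.3, 0.3])              # shares of generation in the portfolio
portfolio_cost = w @ cost
portfolio_risk = np.sqrt(w @ cov @ w)
print(f"expected cost = {portfolio_cost:.1f} $/MWh, risk = {portfolio_risk:.1f} $/MWh")
```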
Abstract:
In the past few years, there has been a concern among economists and policy makers that increased openness to international trade affects some regions in a country more than others. Recent research has found that local labor markets more exposed to import competition through their initial employment composition experience worse outcomes in several dimensions, such as employment, wages, and poverty. Although there is evidence that regions within a country exhibit variation in the intensity with which they trade with each other and with other countries, trade linkages have been ignored in empirical analyses of the regional effects of trade, which focus on differences in employment composition. In this dissertation, I investigate how local labor markets' trade linkages shape the response of wages to international trade shocks. In the second chapter, I lay out a standard multi-sector general equilibrium model of trade, where domestic regions trade with each other and with the rest of the world. Using this benchmark, I decompose a region's wage change resulting from a national import cost shock into a direct effect on prices, holding other endogenous variables constant, and a series of general equilibrium effects. I argue the direct effect provides a natural measure of exposure to import competition within the model, since it summarizes the effect of the shock on a region's wage as a function of initial conditions given by its trade linkages. I call my proposed measure linkage exposure, while I refer to the measures used in previous studies as employment exposure. My theoretical analysis also shows that the assumptions previous studies make on trade linkages are not consistent with the standard trade model. In the third chapter, I calibrate the model to the Brazilian economy in 1991, at the beginning of a period of trade liberalization, to perform a series of experiments. In each of them, I reduce the Brazilian import cost by 1 percent in a single sector and I calculate how much of the cross-regional variation in counterfactual wage changes is explained by exposure measures. Over this set of experiments, employment exposure explains, for the median sector, 2 percent of the variation in counterfactual wage changes, while linkage exposure explains 44 percent. In addition, I propose an estimation strategy that incorporates trade linkages in the analysis of the effects of trade on observed wages. In the model, changes in wages are completely determined by changes in market access, an endogenous variable that summarizes the real demand faced by a region. I show that a linkage measure of exposure is a valid instrument for changes in market access within Brazil. Using observed wage changes in Brazil between 1991 and 2000, my estimates imply that a region at the 25th percentile of the change in domestic market access induced by trade liberalization experiences a 0.6 log points larger wage decline (or smaller wage increase) than a region at the 75th percentile. The estimates from a regression of wage changes on exposure imply that a region at the 25th percentile of exposure experiences a 3 log points larger wage decline (or smaller wage increase) than a region at the 75th percentile. I conclude that estimates based on exposure overstate the negative impact of trade liberalization on wages in Brazil. In the fourth chapter, I extend the standard model to allow for two types of workers according to their education levels: skilled and unskilled.
I show that there is substantial variation across Brazilian regions in the skill premium. I use the exogenous variation provided by tariff changes to estimate the impact of market access on the skill premium. I find that decreased domestic market access resulting from trade liberalization resulted in a higher skill premium. I propose a mechanism to explain this result: the manufacturing sector is relatively more intensive in unskilled labor, and I show empirical evidence that supports this hypothesis.