31 results for Adverse selection, contract theory, experiment, principal-agent problem
in Aston University Research Archive
Abstract:
In this paper we examine the equilibrium states of finite-amplitude flow in a horizontal fluid layer with differential heating between the two rigid boundaries. The solutions to the Navier-Stokes equations are obtained by means of a perturbation method that evaluates the Landau constants, and through a Newton-Raphson iterative method applied to the Fourier expansion of the solutions that bifurcate above the linear stability threshold of infinitesimal disturbances. The results obtained from these two methods of evaluating the convective flow are compared in the neighborhood of the critical Rayleigh number. We find that for small Prandtl numbers the discrepancy between the two methods is noticeable. © 2009 The Physical Society of Japan.
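For context, the perturbation method referred to above is the standard weakly nonlinear expansion: near the critical Rayleigh number the amplitude of the bifurcating convective mode obeys a Landau equation. A minimal sketch of the generic form (the paper's exact truncation and coefficients are not reproduced here):

```latex
% Landau amplitude equation for the bifurcating convective mode (generic form)
\frac{dA}{dt} = \sigma A - \ell A^{3},
\qquad
A_{\mathrm{eq}} = \sqrt{\sigma/\ell},
\qquad
\sigma \propto \frac{Ra - Ra_{c}}{Ra_{c}}
```

where \sigma is the linear growth rate, \ell the first Landau constant, and A_{eq} the equilibrium amplitude of the finite-amplitude flow.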
Abstract:
The aim of our paper is to examine whether Exchange Traded Funds (ETFs) diversify away the private information of informed traders. We apply the spread decomposition models of Glosten and Harris (1988) and Madhavan, Richardson and Roomans (1997) to a sample of ETFs and their control securities. Our results indicate that ETFs have significantly lower adverse selection costs than their control securities. This suggests that private information is diversified away for these securities. Our results therefore offer one explanation for the rapid growth in the ETF market.
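For reference, the Glosten and Harris (1988) decomposition mentioned above is commonly estimated by regressing price changes on signed trade variables. Below is a minimal sketch of one common regression form, in which both the adverse-selection and transitory components are linear in volume; the function name and the exact specification are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def glosten_harris_components(price, sign, volume):
    """Estimate Glosten-Harris (1988) spread components by OLS (sketch).

    price  : transaction prices
    sign   : trade direction (+1 buyer-initiated, -1 seller-initiated)
    volume : trade sizes

    The adverse-selection component is Z_t = z0 + z1 * V_t and the
    transitory (order-processing) component is C_t = c0 + c1 * V_t.
    """
    dp  = np.diff(price)             # price changes
    q   = sign[1:]                   # current trade sign
    qv  = (sign * volume)[1:]        # current signed volume
    dq  = np.diff(sign)              # change in trade sign
    dqv = np.diff(sign * volume)     # change in signed volume
    X = np.column_stack([q, qv, dq, dqv])
    z0, z1, c0, c1 = np.linalg.lstsq(X, dp, rcond=None)[0]
    return z0, z1, c0, c1
```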
Abstract:
This paper extends previous analyses of the choice between internal and external R&D to consider the costs of internal R&D. The Heckman two-stage estimator is used to estimate the determinants of internal R&D unit cost (i.e. cost per product innovation) allowing for sample selection effects. Theory indicates that R&D unit cost will be influenced by scale issues and by the technological opportunities faced by the firm. Transaction costs encountered in research activities are allowed for and, in addition, consideration is given to issues of market structure which influence the choice of R&D mode without affecting the unit cost of internal or external R&D. The model is tested on data from a sample of over 500 UK manufacturing plants which have engaged in product innovation. The key determinants of R&D mode are the scale of plant and R&D input, and market structure conditions. In terms of the R&D cost equation, scale factors are again important and have a non-linear relationship with R&D unit cost. Specificities in physical and human capital also affect unit cost, but have no clear impact on the choice of R&D mode. There is no evidence of technological opportunity affecting either R&D cost or the internal/external decision.
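As background, the Heckman two-stage estimator used above corrects the R&D unit-cost equation for the non-random decision to perform internal R&D: a probit selection equation is estimated first, and its inverse Mills ratio enters the cost equation as an extra regressor. A minimal sketch under assumed variable names (not the paper's data or specification):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(y, X_outcome, Z_select, selected):
    """Heckman two-step estimator (illustrative sketch).

    selected  : 1 if the plant does internal R&D (cost observed), else 0
    Z_select  : regressors of the selection (R&D mode) equation
    X_outcome : regressors of the unit-cost equation
    y         : outcome (e.g. log R&D cost per innovation); used where selected == 1
    """
    Zc = sm.add_constant(Z_select)
    probit = sm.Probit(selected, Zc).fit(disp=0)   # stage 1: mode choice
    xb = Zc @ probit.params                        # linear index
    imr = norm.pdf(xb) / norm.cdf(xb)              # inverse Mills ratio
    mask = selected == 1
    Xc = sm.add_constant(np.column_stack([X_outcome[mask], imr[mask]]))
    return sm.OLS(y[mask], Xc).fit()               # stage 2: unit cost
```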
Abstract:
We study the problem of detecting sentences describing adverse drug reactions (ADRs) and frame it as binary classification. We investigate different neural network (NN) architectures for ADR classification. In particular, we propose two new neural network models: a Convolutional Recurrent Neural Network (CRNN), which concatenates convolutional neural networks with recurrent neural networks, and a Convolutional Neural Network with Attention (CNNA), which adds attention weights to convolutional neural networks. We evaluate the various NN architectures on a Twitter dataset containing informal language and on an Adverse Drug Effects (ADE) dataset constructed by sampling from MEDLINE case reports. Experimental results show that on both datasets all the NN architectures considerably outperform traditional maximum entropy classifiers trained on n-grams with different weighting strategies. On the Twitter dataset all the NN architectures perform similarly, but on the ADE dataset the plain CNN performs better than its more complex variants. Nevertheless, CNNA allows the attention weights of words to be visualised when classification decisions are made, and hence is more appropriate for extracting the word subsequences that describe ADRs.
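To make the CNNA idea concrete, here is a minimal PyTorch sketch of a convolutional sentence classifier with word-level attention. The layer sizes, pooling choice and all names are illustrative assumptions rather than the authors' exact architecture; returning the attention weights is what enables the visualisation described above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNAttention(nn.Module):
    """Convolutional sentence classifier with word-level attention
    (illustrative sketch, not the authors' exact CNNA architecture)."""

    def __init__(self, vocab_size, emb_dim=100, n_filters=100, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel, padding=kernel // 2)
        self.attn = nn.Linear(n_filters, 1)    # scores each token position
        self.out = nn.Linear(n_filters, 2)     # ADR vs. non-ADR

    def forward(self, tokens):                  # tokens: (batch, seq_len) ints
        x = self.embed(tokens).transpose(1, 2)  # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x)).transpose(1, 2)    # (batch, seq, filters)
        a = F.softmax(self.attn(h).squeeze(-1), dim=1)  # attention weights
        ctx = torch.bmm(a.unsqueeze(1), h).squeeze(1)   # attention-weighted sum
        return self.out(ctx), a  # logits and per-word weights for visualisation
```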
Abstract:
The thrust of the argument presented in this chapter is that inter-municipal cooperation (IMC) in the United Kingdom reflects local government's constitutional position and its exposure to the exigencies of Westminster (elected central government) and Whitehall (centre of the professional civil service that services central government). For the most part councils are without general powers of competence and are restricted in what they can do by Parliament. This suggests that the capacity for locally driven IMC is restricted and operates principally within a framework constructed by central government's policy objectives and legislation and the political expediencies of the governing political party. In practice, however, recent examples of IMC demonstrate that the practices are more complex than this initial analysis suggests. Central government may exert top-down pressures and impose hierarchical directives, but there are important countervailing forces. Constitutional changes in Scotland and Wales have shifted the locus of central-local relations away from Westminster and Whitehall. In England, the seeding of English government regional offices in 1994 has evolved into an important structural arrangement that encourages councils to work together. Within the local government community there is now widespread acknowledgement that to achieve the ambitious targets set by central government, councils are, by necessity, bound to cooperate and work with other agencies. In recent years, the fragmentation of public service delivery has affected the scope of IMC. Elected local government in the UK is now only one piece of a complex jigsaw of agencies that provides services to the public; whether it is with non-elected bodies, such as health authorities, public protection authorities (police and fire), voluntary nonprofit organisations or for-profit bodies, councils are expected to cooperate widely with agencies in their localities. Indeed, for projects such as regeneration and community renewal, councils may act as the coordinating agency but the success of such projects is measured by collaboration and partnership working (Davies 2002). To place these developments in context, IMC is an example of how, in spite of the fragmentation of traditional forms of government, councils work with other public service agencies and other councils through the medium of interagency partnerships, collaboration between organisations and a mixed economy of service providers. Such an analysis suggests that, following changes to the system of local government, contemporary forms of IMC are less dependent on vertical arrangements (top-down direction from central government) as they are replaced by horizontal modes (expansion of networks and partnership arrangements). Evidence suggests, however, that central government continues to steer local authorities through the agency of inspectorates and regulatory bodies, and through policy initiatives, such as local strategic partnerships and local area agreements (Kelly 2006), thus questioning whether, in the case of UK local government, the shift from hierarchy to network and market solutions is less differentiated and transformation less complete than some literature suggests. Vertical or horizontal pressures may promote IMC, yet similar drivers may deter collaboration between local authorities. An example of negative vertical pressure was central government's change of the systems of local taxation during the 1980s.
The new taxation regime replaced a tax on property with a tax on individual residency. Although the community charge lasted only a few years, it was a high point of the then Conservative government's policy of encouraging councils to compete with each other on the basis of the level of local taxation. In practice, however, the complexity of local government funding in the UK rendered worthless any meaningful ambition of councils competing with each other, especially as central government grants to local authorities are predicated (however imperfectly) on at least notional equalisation between those areas with lower tax yields and the more prosperous locations. Horizontal pressures comprise factors such as planning decisions. Over the last quarter century, councils have competed on the granting of permission to out-of-town retail and leisure complexes, now recognised as detrimental to neighbouring authorities because economic forces prevail and local, independent shops are unable to compete with multiple companies. These examples illustrate tensions at the core of the UK polity: whether IMC is feasible when competition between local authorities, heightened by local differences, reduces opportunities for collaboration. An alternative perspective on IMC is to explore whether specific purposes or functions promote or restrict it. Whether in the principal areas of local government responsibilities relating to social welfare, development and maintenance of the local infrastructure or environmental matters, there are examples of IMC. But opportunities have diminished considerably as councils lost responsibility for service provision as a result of privatisation and the transfer of powers to new government agencies or to central government. Over the last twenty years councils have lost their role in the provision of further- or higher-education, public transport and water/sewage. Councils have commissioning power but only a limited presence in providing for housing needs, social care and waste management. In other words, as a result of central government policy, there are, in practice, currently far fewer opportunities for councils to cooperate. Since 1997, the New Labour government has promoted IMC through vertical drivers and policy development; the operation of these policy initiatives is discussed following the framework of the editors. Current examples of IMC are notable for being driven by higher tiers of government, working with subordinate authorities in principal-agent relations. Collaboration between local authorities and intra-, inter- and cross-sectoral partnerships are initiated by central government. In other words, IMC is shaped by hierarchical drivers from higher levels of government but, in practice, is locally varied and determined less by formula than by necessity and function. © 2007 Springer.
Abstract:
In Europe local authorities often work with their neighbouring municipalities, whether to address a specific task or goal or through the course of regular policy making and implementation. In England, however, inter-municipal co-operation (IMC) is less common. Councils may work with service providers from the private and non-profit sectors but less often with neighbouring local authorities. Why this is the case may be explained by a number of historical and policy factors that often encourage councils to compete, rather than to work collaboratively with each other. The present government has encouraged councils to work in partnership with other organizations but there are few examples of increased horizontal cooperation between local authorities. Instead the prevailing model remains fixed on vertical co-working predicated on a principal-agent relationship between higher and lower tiers of government.
Abstract:
Over the last 30 years, the field of problem structuring methods (PSMs) has been pioneered by a handful of 'gurus'—the most visible of whom have contributed other viewpoints to this special issue. As this generation slowly retires, it is opportune to survey the field and their legacy. We focus on the progress the community has made up to 2000, as work that started afterwards is ongoing and its impact on the field will probably only become apparent in 5–10 years' time. We believe that up to 2000, research into PSMs was stagnating, partly due to a lack of new researchers penetrating what we call the 'grass-roots community'—the community which takes an active role in developing the theory and application of problem structuring. Evidence for this stagnation (or lack of development) is that, in 2000, many PSMs still relied heavily on the same basic methods proposed by the originators nearly 30 years earlier—perhaps only supporting those methods with computer software as a sign of development. Furthermore, no new methods had been integrated into the literature, which suggests that revolutionary development, at least by academics, had stalled. We are pleased to suggest, on the evidence of the papers in this double special issue on PSMs, that this trend seems to be over, because new authors report new PSMs and extend existing PSMs in new directions. Despite these recent developments of the methods, it is important to examine why this apparent stagnation took place. In the following sections, we identify and elaborate a number of reasons for it. We also consider the trends, challenges and opportunities that the PSM community will continue to face. Our aim is to evaluate the pre-2000 PSM community in order to encourage its revolutionary development post-2006 and offer directions for the long-term sustainability of the field.
Abstract:
This work has used novel polymer design and fabrication technology to generate bead-form polymer-based systems with variable, yet controlled, release properties, specifically for the delivery of macromolecules, essentially peptides of therapeutic interest. The work involved investigation of the potential interaction between matrix ultrastructural morphology, in vitro release kinetics, bioactivity and immunoreactivity of selected macromolecules with limited hydrolytic stability, delivered from controlled release vehicles. The underlying principle involved photo-polymerisation of the monomer, hydroxyethyl methacrylate, around frozen ice crystals, leading to the production of a macroporous hydrophilic matrix. Bead-form matrices were fabricated in controllable size ranges in the region of 100µm - 3mm in diameter. The initial stages of the project involved the study of how two variables, the delivery speed of the monomer and the stirring speed of the non-solvent, affected the formation of macroporous bead-form matrices. From this an optimal bench system for bead production was developed. Careful selection of monomer, solvents, crosslinking agent and polymerisation conditions led to a variable but controllable distribution of pore sizes (0.5 - 4µm). Release of surrogate macromolecules, bovine serum albumin and FITC-linked dextrans, enabled the effects of the size and solubility of the macromolecule on the rate of release to be studied. Incorporation of bioactive macromolecules allowed retained bioactivity to be determined (glucose oxidase and interleukin-2), whilst the release of insulin enabled determination of both bioactivity (using rat epididymal fat pad) and immunoreactivity (RIA). The work carried out has led to the generation of macroporous bead-form matrices, fabricated from a tissue-biocompatible hydrogel, capable of the sustained, controlled release of biologically active peptides, with potential use in the pharmaceutical and agrochemical industries.
Abstract:
This thesis focuses on three main questions. The first uses Exchange-Traded Funds (ETFs) to evaluate the adverse selection cost estimates obtained from spread decomposition models. The second compares the Probability of Informed Trading (PIN) in Exchange-Traded Funds to that of control securities. The third examines intra-day ETF trading patterns. The spread decomposition models evaluated are Glosten and Harris (1988); George, Kaul, and Nimalendran (1991); Lin, Sanger, and Booth (1995); Madhavan, Richardson, and Roomans (1997); and Huang and Stoll (1997). Using the characteristics of ETFs, it is shown that only the Glosten and Harris (1988) and Madhavan et al. (1997) models provide theoretically consistent results. When the PIN measure is employed, ETFs are shown to have greater PINs than control securities. The investigation of the intra-day trading patterns shows that return volatility and trading volume have a U-shaped intra-day pattern. A study of trading systems shows that ETFs on the American Stock Exchange (AMEX) have a U-shaped intra-day pattern of bid-ask spreads, while ETFs on NASDAQ do not. Specifically, ETFs on NASDAQ have their highest bid-ask spreads at the market opening and their lowest in the middle of the day; at the close of the market, the bid-ask spread of ETFs on NASDAQ is slightly elevated when compared to mid-day.
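For reference, the PIN measure referred to above is the Easley-O'Hara probability of informed trading, which in its standard form is

```latex
% Probability of Informed Trading (standard form)
\mathrm{PIN} = \frac{\alpha\mu}{\alpha\mu + \varepsilon_b + \varepsilon_s}
```

where \alpha is the probability of an information event, \mu the arrival rate of informed traders, and \varepsilon_b, \varepsilon_s the arrival rates of uninformed buyers and sellers.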
Abstract:
Electronic channel affiliates are important online intermediaries between customers and host retailers. However, no work has studied how online retailers control these intermediaries. By conducting an exploratory content analysis of 85 online contracts between online retailers and their online intermediaries, and categorizing the governing mechanisms used, we present insights into the unique aspects of the control of online intermediaries, together with findings regarding incentives, monitoring, and enforcement. Additionally, testable research propositions are offered to guide further theory development, drawing on contract theory, resource dependence theory and agency theory. Managerial implications are discussed. © 2012 Elsevier Inc.
Abstract:
The aim of this paper is to examine the short-term dynamics of foreign exchange rate spreads. Using a vector autoregressive model (VAR) we show that most of the variation in the spread comes from long-run dependencies between past and future spreads rather than being caused by changes in inventory, adverse selection, cost of carry or order processing costs. We apply the Integrated Cumulative Sum of Squares (ICSS) algorithm of Inclan and Tiao (1994) to discover how often spread volatility changes. We find that spread volatility shifts are relatively uncommon, and shifts in one currency spread tend not to spill over to other currency spreads. © 2013.
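For context, the core of the Inclan and Tiao (1994) ICSS algorithm is the centred cumulative-sum-of-squares statistic; the full algorithm applies it recursively to subsamples to locate multiple variance shifts. A minimal sketch of the statistic (function name assumed for illustration):

```python
import numpy as np

def icss_statistic(returns):
    """Centred cumulative sum of squares D_k of Inclan and Tiao (1994).

    Returns the candidate variance change point k* and the normalised
    statistic sqrt(T/2) * |D_{k*}|, compared with ~1.358 at the 5% level.
    The full ICSS algorithm applies this recursively to subsamples.
    """
    a2 = np.asarray(returns, dtype=float) ** 2
    T = a2.size
    C = np.cumsum(a2)                          # C_k = sum of squares up to k
    D = C / C[-1] - np.arange(1, T + 1) / T    # D_k = C_k / C_T - k / T
    k_star = int(np.argmax(np.abs(D)))
    return k_star, np.sqrt(T / 2.0) * abs(D[k_star])
```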
Abstract:
In this article we evaluate the most widely used spread decomposition models using Exchange Traded Funds (ETFs). These funds are an example of a basket security and allow the diversification of private information, causing these securities to have lower adverse selection costs than individual securities. We use this feature as a criterion for evaluating spread decomposition models. Comparisons of adverse selection costs for ETFs and control securities obtained from spread decomposition models show that only the Glosten-Harris (1988) and the Madhavan-Richardson-Roomans (1997) models provide estimates of the spread that are consistent with the diversification of private information in a basket security. Our results are robust even after controlling for the stock exchange. © 2011 Copyright Taylor and Francis Group, LLC.
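For comparison with the Glosten-Harris regression above, the Madhavan-Richardson-Roomans (1997) model implies a price-change equation of the standard form (a sketch following common presentations of the model, not this article's notation):

```latex
% MRR (1997) implied price-change equation
\Delta p_t = (\phi + \theta)\,x_t - (\phi + \rho\theta)\,x_{t-1} + e_t
```

where x_t is the trade-direction indicator, \rho its first-order autocorrelation, \theta the adverse-selection (private-information) component and \phi the transitory cost; the implied spread is 2(\phi + \theta), and the diversification argument predicts a smaller \theta for ETFs than for control securities.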
Abstract:
To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem-solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of applying these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, while acceptable system-wide behaviour results. However, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setup; the analysed approaches intelligently explore the weight-space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive weight algorithm which interacts with the underlying local multi-objective solvers and allows for a better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions that exhibit a larger hypervolume, and (iii) achieves a greater spread in the objective space.
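To illustrate the local scalarization step described above, here is a minimal Python sketch of linear scalarization of a reward vector; the weights are the trade-off parameters whose non-linear relationship with the learned policy motivates the adaptive weight algorithm (all names are illustrative):

```python
import numpy as np

def linear_scalarize(rewards, weights):
    """Collapse a multi-objective reward vector into a single scalar
    that a standard single-objective learner can process."""
    return float(np.dot(weights, rewards))

# Evenly spaced weights need not produce evenly spaced trade-offs on the
# Pareto front, which is why adaptive schemes re-sample the weight simplex
# where coverage of the objective space is poor.
w = np.array([0.7, 0.3])           # trade-off parameters (sum to 1)
r = np.array([1.0, -0.5])          # e.g. [tracking reward, energy cost]
print(linear_scalarize(r, w))      # 0.55
```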
Abstract:
A formalism for describing the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics is applied to the problem of generalization in a perceptron with binary weights. The dynamics are solved for the case where a new batch of training patterns is presented to each population member each generation, which considerably simplifies the calculation. The theory is shown to agree closely with simulations of a real GA averaged over many runs, accurately predicting the mean best solution found. For weak selection and large problem size the difference equations describing the dynamics can be expressed analytically, and we find that the effects of noise due to the finite size of each training batch can be removed by increasing the population size appropriately. If this population resizing is used, one can deduce the most computationally efficient size of training batch each generation. For independent patterns this choice also gives the minimum total number of training patterns used. Although using independent patterns is in general a very inefficient use of training patterns, this work may also prove useful for determining the optimum batch size in the case where patterns are recycled.
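As background for the setup above: when a binary-weight perceptron learns a teacher rule from random patterns, the generalization error that the GA optimises depends on the normalized student-teacher overlap alone; a standard identity (assuming unbiased random inputs):

```latex
% Generalization error as a function of the overlap R = (w \cdot w^{*})/N
\epsilon_g = \frac{1}{\pi}\arccos(R)
```

so a fresh batch of patterns each generation simply provides an unbiased, finite-sample estimate of \epsilon_g for each population member, which is the source of the batch-size noise discussed above.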