934 results for merger authorisation
Abstract:
This article argues against the merger folklore that maintains that a merger negatively affects well-being and work attitudes primarily through the threat of job insecurity. We hold that the workplace is not only a resource for fulfilling a person's financial needs, but that it is an important component of the self-concept in terms of identification with the organization, as explained by social identity theory. We unravel the key concepts of the social identity approach relevant to the analysis of mergers and review evidence from previous studies. Then, we present a study conducted during a merger to substantiate our ideas about the effects of post-merger organizational identification above and beyond the effects of perceived job insecurity. We recommend that managers should account for these psychological effects through the provision of continuity and specific types of communication. © 2006 British Academy of Management.
Abstract:
This paper shows that many structural remedies in a sample of European merger cases result in market structures which would probably not be cleared by the Competition Authority (CA) if they were the result of merger (rather than remedy). This is explained by the fact that the CA’s objective through remedy is to restore pre-merger competition, but markets are often highly concentrated even before merger. If so, the CA must often choose between clearing an ‘uncompetitive’ merger, or applying an unsatisfactory remedy. Here, the CA appears reluctant to intervene against coordinated effects, if doing so enhances a leader’s dominance.
Abstract:
This paper estimates the implicit model, especially the roles of size asymmetries and firm numbers, used by the European Commission to identify mergers with coordinated effects. This subset of cases offers an opportunity to shed empirical light on the conditions where a Competition Authority believes tacit collusion is most likely to arise. We find that, for the Commission, tacit collusion is a rare phenomenon, largely confined to markets of two, more or less symmetric, players. This is consistent with recent experimental literature, but contrasts with the facts on ‘hard-core’ collusion in which firm numbers and asymmetries are often much larger.
Abstract:
The nature of tacitly collusive behaviour often makes coordination unstable, and this may result in periods of breakdown, during which consumers benefit from reduced prices. This is allowed for by adding demand uncertainty to the Compte et al. (2002) model of tacit collusion amongst asymmetric firms. Breakdowns occur when a firm cannot exclude the possibility of a deviation by a rival. It is then possible that an outcome with collusive behaviour, subject to long/frequent breakdowns, can improve consumer welfare compared to an alternative with sustained unilateral conduct. This is illustrated by re-examining the Nestle/Perrier merger analyzed by Compte et al., but now also taking into account the potential for welfare losses arising from unilateral behaviour.
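The welfare comparison in this abstract can be illustrated with a toy calculation (the prices and breakdown frequency below are purely hypothetical; this is not the Compte et al. model itself): tacit collusion that breaks down often can leave consumers with a lower average price than sustained unilateral conduct.

```python
# Toy sketch (hypothetical numbers): average consumer price under fragile
# tacit collusion vs. an assumed sustained unilateral-conduct price.

def expected_price(p_collusive, p_breakdown, breakdown_share):
    """Average price when collusion collapses for a fraction of periods."""
    return breakdown_share * p_breakdown + (1 - breakdown_share) * p_collusive

p_tacit = expected_price(p_collusive=10.0, p_breakdown=6.0, breakdown_share=0.75)
p_unilateral = 8.0   # assumed price under sustained unilateral conduct
print(p_tacit)                  # 7.0
print(p_tacit < p_unilateral)   # True: consumers prefer fragile collusion here
```

With long or frequent breakdowns, the low-price episodes dominate the average, which is the mechanism the abstract describes.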
Abstract:
Previous empirical assessments of the effectiveness of structural merger remedies have focused mainly on the subsequent viability of the divested assets. Here, we take a different approach by examining the competitiveness of the market structures which result from the divestments. We employ a tightly specified sample of markets in which the European Commission (EC) has imposed structural merger remedies. It has two key features: (i) it includes all mergers in which the EC appears to have seriously considered, simultaneously, the possibility of collective dominance, as well as single dominance; (ii) in a previous paper, for the same sample, we estimated a model which proved very successful in predicting the Commission’s merger decisions, in terms of the market shares of the leading firms. The former allows us to explore the choices between alternative theories of harm, and the latter provides a yardstick for evaluating whether markets are competitive or not – at least in the eyes of the Commission. Running the hypothetical post-remedy market shares through the model, we can predict whether the EC would have judged the markets concerned to be competitive, had they been the result of a merger rather than a remedy. We find that a significant proportion were not competitive in this sense. One explanation is that the EC has simply been inconsistent – using different criteria for assessing remedies from those for assessing the mergers in the first place. However, a more sympathetic – and in our opinion, more likely – explanation is that the Commission is severely constrained by the pre-merger market structures in many markets. We show that, typically, divestment remedies return the market to the same structure as existed before the proposed merger. Indeed, one can argue that any competition authority should never do more than this. Crucially, however, we find that this pre-merger structure is often itself not competitive.
We also observe an analogous picture in a number of markets where the Commission chose not to intervene: while the post-merger structure was not competitive, nor was the pre-merger structure. In those cases, however, the Commission preferred the former to the latter. In effect, in both scenarios, the EC was faced with a no-win decision. This immediately raises a follow-up question: why did the EC intervene for some, but not for others – given that in all these cases, some sort of anticompetitive structure would prevail? We show that, in this sample at least, the answer is often tied to the prospective rank of the merged firm post-merger. In particular, in those markets where the merged firm would not be the largest post-merger, we find a reluctance to intervene even where the resulting market structure is likely to be conducive to collective dominance. We explain this by a willingness to tolerate an outcome which may be conducive to tacit collusion if the alternative is the possibility of an enhanced position of single dominance by the market leader. Finally, because the sample is confined to cases brought under the ‘old’ EC Merger Regulation, we go on to consider how, if at all, these conclusions require qualification following the 2004 revisions, which, amongst other things, made interventions for non-coordinated behaviour possible without requiring that the merged firm be a dominant market leader. Our main conclusions here are that the Commission appears to have been less inclined to intervene in general, but particularly for Collective Dominance (or ‘coordinated effects’ as it is now known in Europe as well as the US). Moreover, perhaps contrary to expectation, where the merged firm is #2, the Commission has to date rarely made a unilateral effects decision and never made a coordinated effects decision.
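The exercise described in this abstract can be caricatured with a simple concentration measure (the shares and the cut-off below are hypothetical, and the paper uses its own estimated model of Commission decisions, not an HHI rule): a divestment that restores the pre-merger share structure also restores its (non-)competitiveness.

```python
# Toy concentration check: remedies that return the pre-merger structure
# can still fail the same yardstick applied to the merger itself.

def hhi(shares):
    """Herfindahl-Hirschman index on share fractions in [0, 1]."""
    return sum(s * s for s in shares)

pre    = [0.35, 0.35, 0.30]   # concentrated even before the proposed merger
merged = [0.70, 0.30]         # structure if the merger were cleared
remedy = [0.35, 0.35, 0.30]   # divestment returns the pre-merger structure

print(hhi(merged) > hhi(pre))   # True: the merger raises concentration
print(hhi(remedy) == hhi(pre))  # True: the remedy only restores the status quo
print(hhi(remedy) > 0.2)        # True: yet still above a hypothetical cut-off
```

This is the no-win decision the abstract describes: both the cleared merger and the remedied market exceed the (illustrative) competitiveness threshold.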
Abstract:
The purpose of this paper is to identify empirically the implicit structural model, especially the roles of size asymmetries and concentration, used by the European Commission to identify mergers with coordinated effects (i.e. collective dominance). Apart from its obvious policy-relevance, the paper is designed to shed empirical light on the conditions under which tacit collusion is most likely. We construct a database relating to 62 candidate mergers and find that, in the eyes of the Commission, tacit collusion in this context virtually never involves more than two firms and requires close symmetry in the market shares of the two firms.
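The implicit screen this abstract attributes to the Commission can be sketched as a rule of thumb (the 5% fringe cut-off and symmetry tolerance are hypothetical illustrations, not the paper's estimated parameters): coordinated effects are flagged only for near-symmetric duopolies.

```python
# Rule-of-thumb sketch of the implicit collective-dominance screen:
# flag only markets with exactly two non-fringe, near-equal firms.

def coordinated_effects_concern(shares, symmetry_tol=0.05):
    """True only when exactly two non-fringe firms hold near-equal shares."""
    effective = sorted((s for s in shares if s > 0.05), reverse=True)
    if len(effective) != 2:
        return False
    return abs(effective[0] - effective[1]) <= symmetry_tol

print(coordinated_effects_concern([0.45, 0.44, 0.03]))  # True: symmetric duopoly
print(coordinated_effects_concern([0.40, 0.30, 0.30]))  # False: three players
print(coordinated_effects_concern([0.60, 0.35]))        # False: asymmetric
```

The contrast the paper draws is that 'hard-core' cartels in practice often involve more firms and larger asymmetries than such a screen would ever flag.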
Abstract:
In the discussion - Industry Education: The Merger Continues - by Rob Heiman, Assistant Professor of Hospitality Food Service Management at Kent State University, the author declares, “Integrating the process of an on-going catering and banquet function with that of selected behavioral academic objectives leads to an effective, practical course of instruction in catering and banquet management. Through an illustrated model, this article highlights such a merger while addressing a variety of related problems and concerns to the discipline of hospitality food service management education.” The article stresses the importance of blending the theoretical, curriculum-based learning process with a hands-on approach, in essence combining a real working program with academics to develop a well-rounded hospitality student. “How many programs are enjoying the luxury of excessive demand for students from industry[?],” the author asks, highlighting the immense need for qualified personnel in the hospitality industry. As the author describes it, “An ideal education program concerns itself with the integration of theory and simulation with hands-on experience to teach the cognitive as well as the technical skills required to achieve the pre-determined hospitality education objectives.” In food service, one way to achieve this integrated learning curve is to have the students prepare foods and then consume them; Heiman suggests this will quickly illustrate to students the rights and wrongs of food preparation. Another way is to have students integrate the academic program with feeding the university population. The author offers further illustrations of similar principles. Heiman takes special care in characterizing the banquet and catering portions of the food service industry, and he offers empirical data to support the descriptions.
It is in these areas, banquet and catering, that Heiman says special attention is needed to produce qualified students for those fields. This is the real focus of the discussion, and it is to this topic that the remainder of the article is devoted. “Based on the perception that quality education is aided by implementing project assignments through the course of study in food service education, a model description can be implemented for a course in Catering and Banquet Management and Operations. This project model first considers the prioritized objectives of education and industry and then illustrates the successful merging of resources for mutual benefits,” Heiman sketches. The model referred to above is the one mentioned in the thesis statement at the beginning of the article. This model is divided into six major components; Heiman lists and details them. “The model has been tested through two semesters involving 29 students,” says Heiman. “Reaction by all participants has been extremely positive. Recent graduates of this type of program have received a sound theoretical framework and demonstrated their creative interpretation of this theory in practical application,” Heiman says in summation.
Abstract:
The purpose of this study is to identify research trends in merger and acquisition (M&A) waves in the restaurant industry and propose future research directions by thoroughly reviewing existing M&A-related literature. M&A has been extensively used as a strategic management tool for fast growth in the restaurant industry. However, there has been a very limited amount of literature that focuses on M&A in the restaurant industry. In particular, no known study has examined M&A waves and their determinants. A good understanding of the determinants of M&A waves will help practitioners identify important factors that should be considered before making M&A decisions and predict the optimal timing for successful M&A transactions. This study examined the literature on six U.S. M&A waves and their determinants and summarized the main explanatory factors examined, statistical methods, and theoretical frameworks. Inclusion of unique macroeconomic factors of the restaurant industry and the use of factor analysis are suggested for future research.
Abstract:
iPTF14atg, a subluminous peculiar Type Ia supernova (SN Ia) similar to SN 2002es, is the first SN Ia for which a strong UV flash was observed in the early-time light curves. This has been interpreted as evidence for a single-degenerate (SD) progenitor system, where such a signal is expected from interactions between the SN ejecta and the non-degenerate companion star. Here, we compare synthetic observables of multidimensional state-of-the-art explosion models for different progenitor scenarios to the light curves and spectra of iPTF14atg. From our models, we have difficulties explaining the spectral evolution of iPTF14atg within the SD progenitor channel. In contrast, we find that a violent merger of two carbon-oxygen white dwarfs with 0.9 and 0.76 M⊙, respectively, provides an excellent match to the spectral evolution of iPTF14atg from 10 d before to several weeks after maximum light. Our merger model does not naturally explain the initial UV flash of iPTF14atg. We discuss several possibilities like interactions of the SN ejecta with the circumstellar medium and surface radioactivity from an He-ignited merger that may be able to account for the early UV emission in violent merger models.
Abstract:
In the context of f(R) gravity theories, we show that the apparent mass of a neutron star as seen from an observer at infinity is numerically calculable but requires careful matching, first at the star’s edge, between interior and exterior solutions, none of them being totally Schwarzschild-like but presenting instead small oscillations of the curvature scalar R; and second at large radii, where the Newtonian potential is used to identify the mass of the neutron star. We find that for the same equation of state, this mass definition is always larger than its general relativistic counterpart. We exemplify this with quadratic R^2 and Hu-Sawicki-like modifications of the standard General Relativity action. Therefore, the finding of two-solar-mass neutron stars basically imposes no constraint on stable f(R) theories. However, star radii are in general smaller than in General Relativity, which can give an observational handle on such classes of models at the astrophysical level. Both larger masses and smaller matter radii are due to much of the apparent effective energy residing in the outer metric for scalar-tensor theories. Finally, because the f(R) neutron star masses can be much larger than General Relativity counterparts, the total energy available for radiating gravitational waves could be of order several solar masses, and thus a merger of these stars constitutes an interesting wave source.
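The large-radius mass identification described in this abstract amounts to the standard weak-field matching (a sketch of the textbook relation, not the paper's full numerical procedure): far from the star the metric is asymptotically flat, and the apparent mass M is read off from the Newtonian potential.

```latex
% Weak-field identification of the apparent mass at radii r >> R_star:
\Phi(r) \simeq -\frac{GM}{r},
\qquad
g_{tt} \simeq -\left(1 + \frac{2\Phi(r)}{c^{2}}\right)
        = -\left(1 - \frac{2GM}{c^{2} r}\right).
```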
Abstract:
This paper generalizes the model of Salant et al. (1983; Quarterly Journal of Economics, Vol. 98, pp. 185–199) to a successive oligopoly model with product differentiation. Upstream firms produce differentiated goods, retailers compete in quantities, and supply contracts are linear. We show that if retailers buy from all producers, downstream mergers do not affect wholesale prices. Our result replicates Salant's: mergers are not profitable unless the size of the merged firm exceeds 80 per cent of the industry. This result is robust to the type of competition.
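The 80 per cent result referenced here can be illustrated in the textbook symmetric linear Cournot setting of Salant et al. (a standard sketch, not this paper's differentiated successive-oligopoly model): merging k of n identical firms is profitable iff (n+1)^2 > k(n-k+2)^2.

```python
# Textbook Salant-Switzer-Reynolds profitability check (symmetric linear
# Cournot): after a merger of k firms, n-k+1 firms remain, and the merged
# entity's profit must beat the insiders' combined pre-merger profit.

def merger_profitable(n, k):
    """True iff merging k of n symmetric Cournot firms raises insiders' profit."""
    return (n + 1) ** 2 > k * (n - k + 2) ** 2

# With n = 10 firms, only a near-monopoly merger pays:
print(merger_profitable(10, 8))   # False: 80% of the industry is not enough
print(merger_profitable(10, 9))   # True
```

The comparison drops the common factor (a-c)^2/b from each Cournot profit, leaving the purely structural inequality above.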
Abstract:
Queensland University of Technology (QUT) is faced with a rapidly growing research agenda built upon a strategic research capacity-building program. This presentation will outline the results of a project that has recently investigated QUT’s research support requirements and which has developed a model for the support of eResearch across the university. QUT’s research building strategy has produced growth at the faculty level and within its research institutes. This increased research activity is pushing the need for university-wide eResearch platforms capable of providing infrastructure and support in areas such as collaboration, data, networking, authentication and authorisation, workflows and the grid. One of the driving forces behind the investigation is the data-centric nature of modern research. It is now critical that researchers have access to supported infrastructure that allows the collection, analysis, aggregation and sharing of large data volumes for exploration and mining in order to gain new insights and to generate new knowledge. However, recent surveys into current research data management practices by the Australian Partnership for Sustainable Repositories (APSR) and by QUT itself have revealed serious shortcomings in areas such as research data management, especially its long-term maintenance for reuse and authoritative evidence of research findings. While these internal university pressures are building, at the same time there are external pressures that are magnifying them. For example, recent compliance guidelines from bodies such as the ARC, NHMRC, and Universities Australia indicate that institutions need to provide facilities for the safe and secure storage of research data along with a surrounding set of policies on its retention, ownership and accessibility.
The newly formed Australian National Data Service (ANDS) is developing strategies and guidelines for research data management, and research institutions are a central focus, responsible for managing and storing institutional data on platforms that can be federated nationally and internationally for wider use. For some time QUT has recognised the importance of eResearch and has been active in a number of related areas: ePrints to digitally publish research papers, grid computing portals and workflows, institution-wide provisioning and authentication systems, and legal protocols for copyright management. QUT also has two widely recognised centres focused on fundamental research into eResearch itself: the OAK LAW project (Open Access to Knowledge), which focuses upon legal issues relating to eResearch, and the Microsoft QUT eResearch Centre, whose goal is to accelerate scientific research discovery through new smart software. In order to better harness all of these resources and improve research outcomes, the university recently established a project to investigate how it might better organise the support of eResearch. This presentation will outline the project outcomes, which include a flexible and sustainable eResearch support service model addressing short- and longer-term research needs, identification of the resources required to establish and sustain the service, and the development of research data management policies and implementation plans.
Abstract:
Transnational mergers are mergers involving firms operating in more than one jurisdiction, or which occur in one jurisdiction but have an impact on competition in another. Being of this nature, they have the potential to raise competition law concerns in more than one jurisdiction. When they do, the transaction costs of the merger to the firms involved, and the competition law authorities, are likely to increase significantly and, even where the merger is allowed to proceed, delays are likely to occur in reaping the benefits of the merger. Ultimately, these costs are borne by consumers. This thesis will identify the nature and source of regulatory costs associated with transnational merger review and identify and evaluate possible mechanisms by which these costs might be reduced. It will conclude that there is no single panacea for transnational merger regulation, but that a multi-faceted approach, including the adoption of common filing forms, agreement on filing and review deadlines and continuing efforts toward increasing international cooperation in merger enforcement, is needed to reduce regulatory costs and better achieve the welfare outcomes at which merger regulation is directed.