948 results for Merger


Relevance: 20.00%

Abstract:

Previous empirical assessments of the effectiveness of structural merger remedies have focused mainly on the subsequent viability of the divested assets. Here, we take a different approach by examining how competitive the market structures that result from the divestments are. We employ a tightly specified sample of markets in which the European Commission (EC) has imposed structural merger remedies. It has two key features: (i) it includes all mergers in which the EC appears to have seriously considered, simultaneously, the possibility of collective dominance, as well as single dominance; (ii) in a previous paper, for the same sample, we estimated a model which proved very successful in predicting the Commission’s merger decisions, in terms of the market shares of the leading firms. The former allows us to explore the choices between alternative theories of harm, and the latter provides a yardstick for evaluating whether markets are competitive or not – at least in the eyes of the Commission. Running the hypothetical post-remedy market shares through the model, we can predict whether the EC would have judged the markets concerned to be competitive, had they been the result of a merger rather than a remedy. We find that a significant proportion were not competitive in this sense. One explanation is that the EC has simply been inconsistent – using different criteria for assessing remedies from those for assessing the mergers in the first place. However, a more sympathetic – and in our opinion, more likely – explanation is that the Commission is severely constrained by the pre-merger market structures in many markets. We show that, typically, divestment remedies return the market to the same structure as existed before the proposed merger. Indeed, one can argue that no competition authority should ever do more than this. Crucially, however, we find that this pre-merger structure is often itself not competitive. We also observe an analogous picture in a number of markets where the Commission chose not to intervene: the post-merger structure was not competitive, but neither was the pre-merger structure. In those cases, however, the Commission preferred the former to the latter. In effect, in both scenarios, the EC was faced with a no-win decision. This immediately raises a follow-up question: why did the EC intervene in some cases but not in others – given that in all these cases, some sort of anticompetitive structure would prevail? We show that, in this sample at least, the answer is often tied to the prospective rank of the merged firm post-merger. In particular, in those markets where the merged firm would not be the largest post-merger, we find a reluctance to intervene even where the resulting market structure is likely to be conducive to collective dominance. We explain this by a willingness to tolerate an outcome which may be conducive to tacit collusion if the alternative is the possibility of an enhanced position of single dominance by the market leader. Finally, because the sample is confined to cases brought under the ‘old’ EC Merger Regulation, we go on to consider how, if at all, these conclusions require qualification following the 2004 revisions, which, amongst other things, made interventions for non-coordinated behaviour possible without requiring that the merged firm be a dominant market leader.
Our main conclusions here are that the Commission appears to have been less inclined to intervene in general, and particularly so for collective dominance (or ‘coordinated effects’, as it is now known in Europe as well as the US). Moreover, perhaps contrary to expectation, where the merged firm is #2, the Commission has to date rarely made a unilateral effects decision and has never made a coordinated effects decision.
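
To make the screening exercise concrete, the sketch below shows, purely as an illustration and not as the paper's estimated model, how a logit-style rule on the market shares of the two leading firms could classify a hypothetical post-remedy structure as competitive or not; the function name and coefficients are invented for the example.

```python
# Illustrative sketch only: a logit-style screen that classifies a market
# structure as competitive from the shares of the two leading firms.
# The coefficients below are hypothetical placeholders, not the paper's estimates.
import math

def predict_competitive(s1: float, s2: float,
                        b0: float = 4.0, b1: float = -8.0, b2: float = -4.0) -> bool:
    """Return True if the (hypothetical) model classifies the market as competitive.

    s1, s2 -- market shares of the largest and second-largest firms (0..1),
              e.g. the hypothetical post-remedy shares after a divestment.
    """
    score = b0 + b1 * s1 + b2 * s2                    # linear index in the leading shares
    p_competitive = 1.0 / (1.0 + math.exp(-score))    # logistic link
    return p_competitive > 0.5

# Example: run two hypothetical post-remedy structures through the screen.
print(predict_competitive(0.45, 0.35))  # flagged as not competitive
print(predict_competitive(0.25, 0.20))  # flagged as competitive
```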

Relevance: 20.00%

Abstract:

The purpose of this paper is to identify empirically the implicit structural model, especially the roles of size asymmetries and concentration, used by the European Commission to identify mergers with coordinated effects (i.e. collective dominance). Apart from its obvious policy-relevance, the paper is designed to shed empirical light on the conditions under which tacit collusion is most likely. We construct a database relating to 62 candidate mergers and find that, in the eyes of the Commission, tacit collusion in this context virtually never involves more than two firms and requires close symmetry in the market shares of the two firms.
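
The finding reported here implies a very simple implicit decision rule. The sketch below is a hypothetical illustration of such a screen (the thresholds are invented, not estimates from the 62-merger database): a coordinated-effects concern arises only when the two leading firms jointly hold a large share and their shares are nearly symmetric.

```python
# Hypothetical sketch of the kind of screen implied by the finding that, in the
# Commission's practice, coordinated effects involve at most two firms with
# closely symmetric shares. Thresholds are illustrative assumptions.
def coordinated_effects_concern(shares: list[float],
                                min_joint_share: float = 0.6,
                                max_asymmetry: float = 0.05) -> bool:
    """shares: market shares (0..1) of the firms in the market."""
    s = sorted(shares, reverse=True)
    if len(s) < 2:
        return False
    joint = s[0] + s[1]        # combined share of the two leaders
    asymmetry = s[0] - s[1]    # gap between the two leaders
    return joint >= min_joint_share and asymmetry <= max_asymmetry

print(coordinated_effects_concern([0.35, 0.33, 0.15]))  # True: large, symmetric duo
print(coordinated_effects_concern([0.50, 0.20, 0.15]))  # False: leaders too asymmetric
```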

Relevance: 20.00%

Abstract:

The nature of tacitly collusive behaviour often makes coordination unstable, and this may result in periods of breakdown, during which consumers benefit from reduced prices. This is allowed for by adding demand uncertainty to the Compte et al. (2002) model of tacit collusion amongst asymmetric firms. Breakdowns occur when a firm cannot exclude the possibility of a deviation by a rival. It is then possible that an outcome with collusive behaviour, subject to long/frequent breakdowns, can improve consumer welfare compared with an alternative of sustained unilateral conduct. This is illustrated by re-examining the Nestle/Perrier merger analysed by Compte et al., but now also taking into account the potential for welfare losses arising from unilateral behaviour.
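
The welfare argument can be summarised in a stylised inequality (the notation and form are illustrative, not the formal statement of the model): if collusion is in breakdown a fraction beta of the time, imperfect collusion can leave consumers better off than sustained unilateral conduct whenever

```latex
% Stylised sketch of the welfare comparison (notation is illustrative):
% CS_b  -- consumer surplus during a breakdown (competitive pricing)
% CS_c  -- consumer surplus while collusion is sustained
% CS_u  -- consumer surplus under sustained unilateral (non-collusive) conduct
% beta  -- the long-run fraction of periods in which collusion has broken down
\[
  \beta \, CS_b + (1-\beta)\, CS_c \;>\; CS_u
  \quad\Longleftrightarrow\quad
  \beta \;>\; \frac{CS_u - CS_c}{CS_b - CS_c},
\]
% which can hold when breakdowns are long or frequent (beta large) and
% unilateral conduct is itself far from competitive (CS_u well below CS_b).
```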

Relevance: 20.00%

Abstract:

In the discussion - Industry Education: The Merger Continues - by Rob Heiman, Assistant Professor of Hospitality Food Service Management at Kent State University, the author declares at the outset, “Integrating the process of an on-going catering and banquet function with that of selected behavioral academic objectives leads to an effective, practical course of instruction in catering and banquet management. Through an illustrated model, this article highlights such a merger while addressing a variety of related problems and concerns to the discipline of hospitality food service management education.” The article stresses the importance of blending the theoretical, curriculum-based learning process with a hands-on approach, in essence combining a real working program with academics to develop a well-rounded hospitality student. “How many programs are enjoying the luxury of excessive demand for students from industry [?],” the author asks, to highlight the immense need for qualified personnel in the hospitality industry. As the author describes it, “An ideal education program concerns itself with the integration of theory and simulation with hands-on experience to teach the cognitive as well as the technical skills required to achieve the pre-determined hospitality education objectives.” In food service, one way to achieve this integrated learning curve is to have students prepare foods and then consume them; Heiman suggests this quickly illustrates to students the rights and wrongs of food preparation. Another way is to have students integrate the academic program with feeding the university population. The author offers further illustrations of similar principles. Heiman takes special care in characterizing the banquet and catering portions of the food service industry, and he offers empirical data to support the descriptions. It is in these areas, banquet and catering, that Heiman says special attention is needed to produce qualified students for those fields. This is the real focus of the discussion, and it is to this that the remainder of the article is devoted. “Based on the perception that quality education is aided by implementing project assignments through the course of study in food service education, a model description can be implemented for a course in Catering and Banquet Management and Operations. This project model first considers the prioritized objectives of education and industry and then illustrates the successful merging of resources for mutual benefits,” Heiman sketches. This is the model referred to in the thesis statement at the beginning of the article; it is divided into six major components, which Heiman lists and details. “The model has been tested through two semesters involving 29 students,” says Heiman. “Reaction by all participants has been extremely positive. Recent graduates of this type of program have received a sound theoretical framework and demonstrated their creative interpretation of this theory in practical application,” Heiman says in summation.

Relevance: 20.00%

Abstract:

The purpose of this study is to identify research trends in merger and acquisition (M&A) waves in the restaurant industry and to propose future research directions by thoroughly reviewing the existing M&A-related literature. M&A has been extensively used as a strategic management tool for fast growth in the restaurant industry. However, there is a very limited amount of literature that focuses on M&A in the restaurant industry, and in particular no known study has examined M&A waves and their determinants. A good understanding of the determinants of M&A waves will help practitioners identify important factors that should be considered before making M&A decisions and predict the optimal timing for successful M&A transactions. This study examined the literature on six U.S. M&A waves and their determinants and summarized the main explanatory factors examined, the statistical methods used, and the theoretical frameworks applied. The inclusion of macroeconomic factors unique to the restaurant industry and the use of factor analysis are suggested for future research.

Relevance: 20.00%

Abstract:

iPTF14atg, a subluminous peculiar Type Ia supernova (SN Ia) similar to SN 2002es, is the first SN Ia for which a strong UV flash was observed in the early-time light curves. This has been interpreted as evidence for a single-degenerate (SD) progenitor system, where such a signal is expected from interactions between the SN ejecta and the non-degenerate companion star. Here, we compare synthetic observables of multidimensional state-of-the-art explosion models for different progenitor scenarios to the light curves and spectra of iPTF14atg. From our models, we have difficulties explaining the spectral evolution of iPTF14atg within the SD progenitor channel. In contrast, we find that a violent merger of two carbon-oxygen white dwarfs with 0.9 and 0.76 M⊙, respectively, provides an excellent match to the spectral evolution of iPTF14atg from 10 d before to several weeks after maximum light. Our merger model does not naturally explain the initial UV flash of iPTF14atg. We discuss several possibilities like interactions of the SN ejecta with the circumstellar medium and surface radioactivity from an He-ignited merger that may be able to account for the early UV emission in violent merger models.

Relevance: 20.00%

Abstract:

In the context of f(R) gravity theories, we show that the apparent mass of a neutron star as seen by an observer at infinity is numerically calculable but requires careful matching: first at the star’s edge, between the interior and exterior solutions, neither of which is totally Schwarzschild-like but instead presents small oscillations of the curvature scalar R; and second at large radii, where the Newtonian potential is used to identify the mass of the neutron star. We find that, for the same equation of state, this mass definition is always larger than its general relativistic counterpart. We exemplify this with quadratic R^2 and Hu-Sawicki-like modifications of the standard General Relativity action. Therefore, the finding of two-solar-mass neutron stars imposes essentially no constraint on stable f(R) theories. However, star radii are in general smaller than in General Relativity, which can give an observational handle on such classes of models at the astrophysical level. Both the larger masses and the smaller matter radii are due to much of the apparent effective energy residing in the outer metric for scalar-tensor theories. Finally, because f(R) neutron star masses can be much larger than their General Relativity counterparts, the total energy available for radiating gravitational waves could be of the order of several solar masses, and thus a merger of these stars constitutes an interesting wave source.
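
For reference, the two families of modifications named above are usually written in the following standard forms from the f(R) literature; the specific parameter values explored in the paper are not reproduced here.

```latex
% Quadratic (R^2) modification, with a a constant of dimension length^2:
\[
  f(R) = R + a\,R^{2},
\]
% Hu-Sawicki modification, with mass scale m and dimensionless parameters c_1, c_2, n:
\[
  f(R) = R - m^{2}\,
         \frac{c_{1}\,(R/m^{2})^{n}}{c_{2}\,(R/m^{2})^{n} + 1}.
\]
```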

Relevance: 20.00%

Abstract:

This paper generalizes the model of Salant et al. (1983; Quarterly Journal of Economics, Vol. 98, pp. 185–199) to a successive oligopoly model with product differentiation. Upstream firms produce differentiated goods, retailers compete in quantities, and supply contracts are linear. We show that if retailers buy from all producers, downstream mergers do not affect wholesale prices. Our result thus replicates that of Salant et al., in which mergers are not profitable unless the size of the merged firm exceeds 80 per cent of the industry. This result is robust to the type of competition.
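
For context, the 80 per cent benchmark cited above comes from the symmetric linear Cournot setting of Salant et al.; a compact sketch of the profitability condition, assuming linear inverse demand P = a - Q and constant marginal cost c, runs as follows.

```latex
% Per-firm Cournot profit with k symmetric firms, linear demand P = a - Q, cost c:
\[
  \pi(k) = \left(\frac{a-c}{k+1}\right)^{2}.
\]
% A merger of m of the n firms leaves n - m + 1 independent firms, so it is
% profitable only if the merged entity earns at least what the m insiders earned before:
\[
  \pi(n-m+1) \;\ge\; m\,\pi(n)
  \quad\Longleftrightarrow\quad
  (n+1)^{2} \;\ge\; m\,(n-m+2)^{2},
\]
% a condition that, as Salant et al. show, typically requires the insiders to
% account for roughly 80 per cent or more of the firms in the industry.
```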

Relevance: 10.00%

Abstract:

Transnational mergers are mergers involving firms operating in more than one jurisdiction, or which occur in one jurisdiction but have an impact on competition in another. As such, they have the potential to raise competition law concerns in more than one jurisdiction. When they do, the transaction costs of the merger, both to the firms involved and to the competition law authorities, are likely to increase significantly and, even where the merger is allowed to proceed, delays are likely to occur in reaping its benefits. Ultimately, these costs are borne by consumers. This thesis will identify the nature and source of the regulatory costs associated with transnational merger review and will identify and evaluate possible mechanisms by which these costs might be reduced. It will conclude that there is no single panacea for transnational merger regulation, but that a multi-faceted approach, including the adoption of common filing forms, agreement on filing and review deadlines and continuing efforts toward increasing international cooperation in merger enforcement, is needed to reduce regulatory costs and better achieve the welfare outcomes at which merger regulation is directed.

Relevance: 10.00%

Abstract:

Purpose – The purpose of this paper is to formulate a conceptual framework for urban sustainability indicator selection. This framework will be used to develop an indicator-based evaluation method for assessing the sustainability levels of residential neighbourhood developments in Malaysia.

Design/methodology/approach – We provide a brief overview of existing evaluation frameworks for sustainable development assessment. We then develop a conceptual Sustainable Residential Neighbourhood Assessment (SNA) framework utilising a four-pillar sustainability framework (environmental, social, economic and institutional) combined with domain-based and goal-based general frameworks. This merger offers the advantages of both individual frameworks, while also overcoming some of their weaknesses, when used to develop the urban sustainability evaluation method for assessing residential neighbourhoods.

Originality/value – This approach makes evident that many of the existing frameworks for evaluating urban sustainability do not extend to assessing housing sustainability at the local level.

Practical implications – It is expected that the use of the indicator-based Sustainable Neighbourhood Assessment framework will provide a practical mechanism for planners and developers to evaluate and monitor the sustainability performance of residential neighbourhood developments.

Relevance: 10.00%

Abstract:

Operators of busy contemporary airports have to balance tensions between the timely flow of passengers, flight operations, the conduct of commercial business activities and the effective application of security processes. In addition to specific onsite issues, airport operators liaise with a range of organisations that set and enforce aviation-related policies and regulations, with border security agencies responsible for customs, quarantine and immigration, and with first-response security services. The challenging demands of coordinating and planning in such complex socio-technical contexts place considerable pressure on airport management to facilitate coordination of what are often conflicting goals and expectations among groups that have standing in respect of safe and secure air travel. As yet significantly unexplored in large airports are options for the optimal coordination of efforts from the range of public and private sector participants active in airport security and crisis management. A further aspect of this issue is how airport management systems operate when there is a transition from business-as-usual into an emergency/crisis situation and then, on recovery, back to ‘normal’ functioning. Business Continuity Planning (BCP), incorporating sub-plans for emergency response, continuation of output and recovery of degraded operating capacity, would fit such a context. The implementation of BCP practices in such a significant high-security setting offers considerable potential benefit yet entails considerable challenges. This paper presents early results of a four-year, nationally funded, industry-based research project examining the merger of Business Continuity Planning and Transport Security Planning as a means of generating capability for improved security and reliability and, ultimately, enhanced resilience in major airports. The project is part of a larger research program on the Design of Secure Airports that includes most of the gazetted ‘first response’ international airports in Australia, key aviation industry groups and all aviation-related border and security regulators as collaborative partners. The paper examines a number of initial themes in the research, including:
- approaches to integrating Business Continuity and Aviation Security Planning within airport operations;
- assessment of gaps in management protocols and operational capacities for identifying and responding to crises within and across critical aviation infrastructure;
- identification of convergent and divergent approaches to crisis management used across Australasia and their alignment to planned and possible infrastructure evolution.

Relevance: 10.00%

Abstract:

The deal value of private equity merger and takeover activity has achieved unprecedented growth in the last couple of years, in Australia and globally. Private equity deals are not a new feature of the market; however, such deals have been subject to increased academic, professional and policy interest. This study examines the particular features of 15 major deals involving listed company "targets" and provides evidence – based on a comparison with a benchmark sample – to demonstrate the role that private equity plays in the market for corporate control. The objective of this study was to assess the friendliness of private equity bids. Based on the indicia compiled, lower bid premiums, the presence of break fees and the intention to retain senior management are compellingly different for private equity bids than for the comparative sample of bids. Using these several characteristics of "friendliness", the authors show that private equity deals are generally friendly in nature, consistent with industry rhetoric, but perhaps inconsistent with the popular belief that private equity bidders are the "barbarians at the gate".

Relevance: 10.00%

Abstract:

This study investigates the characteristics and attributes that private equity investors prefer when selecting target acquisitions. These characteristics are examined against a matched sample of firms subject to corporate acquisitions via tender/merger offer during 2000-2009, across seven countries: Australia, Canada, the United Kingdom, the USA, France, Germany and Sweden. We show that firm-specific characteristics are more influential in target selection than external or institutional variables. In particular, private equity targets exhibit lower stock volatility and long-term growth prospects, are larger, and have greater abnormal operating income relative to tender/merger offer target firms. Further, private equity bidders exhibit 'home bias', implying that familiarity motivates target selection. Institutional factors remain largely insignificant across all tests.

Relevance: 10.00%

Abstract:

This study explores the accuracy and valuation implications of the application of a comprehensive list of equity multiples in the takeover context. Motivating the study are the prevalent use of equity multiples in practice, the observed long-run underperformance of acquirers following takeovers, and the scarcity of multiples-based research in the merger and acquisition setting. In exploring the application of equity multiples in this context, three research questions are addressed: (1) how accurate are equity multiples (RQ1); (2) which equity multiples are more accurate in valuing the firm (RQ2); and (3) which equity multiples are associated with greater misvaluation of the firm (RQ3). Following a comprehensive review of the extant multiples-based literature, it is hypothesised that the accuracy of multiples in estimating stock market prices in the takeover context will rank as follows (from best to worst): (1) forecasted earnings multiples, (2) multiples closer to bottom-line earnings, and (3) multiples based on Net Cash Flow from Operations (NCFO) and trading revenue. The relative inaccuracies in multiples are expected to flow through to equity misvaluation (as measured by the ratio of estimated market capitalisation to residual income value, or P/V). Accordingly, it is hypothesised that greater overvaluation will be exhibited for multiples based on Trading Revenue, NCFO, Book Value (BV) and earnings before interest, tax, depreciation and amortisation (EBITDA) than for multiples based on bottom-line earnings, and that multiples based on Intrinsic Value will display the least overvaluation. The hypotheses are tested using a sample of 147 acquirers and 129 targets involved in Australian takeover transactions announced between 1990 and 2005. The results show that, first, the majority of the computed multiples examined exhibit valuation errors within 30 percent of stock market values. Second, and consistent with expectations, the results support the superiority of multiples based on forecasted earnings in valuing targets and acquirers engaged in takeover transactions. Although a gradual improvement in estimating stock market values is not entirely evident when moving down the Income Statement, historical earnings multiples perform better than multiples based on Trading Revenue or NCFO. Third, while multiples based on forecasted earnings have the highest valuation accuracy, they, along with Trading Revenue multiples for targets, produce the most overvalued valuations for acquirers and targets. Consistent with predictions, greater overvaluation is exhibited for multiples based on Trading Revenue for targets, and on NCFO and EBITDA for both acquirers and targets. Finally, as expected, multiples based on Intrinsic Value (along with BV) are associated with the least overvaluation. Given the widespread use of valuation multiples in takeover contexts, these findings offer a unique insight into their relative effectiveness. Importantly, the findings add to the growing body of valuation accuracy literature, especially within Australia, and should help market participants better understand the relative accuracy and misvaluation consequences of the various equity multiples used in takeover documentation, assisting them in subsequent investment decision making.
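
As a generic illustration of the valuation-accuracy measure such a study relies on (a simplified sketch of standard multiples valuation, not the thesis's exact design; the peer set, value driver and error definition here are assumptions for the example), a peer-multiple estimate and its error against the observed market capitalisation can be computed as follows.

```python
# Generic sketch of multiples-based valuation accuracy (simplified assumptions,
# not the study's exact methodology).
from statistics import median

def peer_multiple_estimate(peer_caps, peer_drivers, target_driver):
    """Value a target from the median peer multiple.

    peer_caps     -- market capitalisations of comparable firms
    peer_drivers  -- the corresponding value driver (e.g. forecast earnings, EBITDA)
    target_driver -- the same value driver for the firm being valued
    """
    multiples = [cap / drv for cap, drv in zip(peer_caps, peer_drivers) if drv > 0]
    return median(multiples) * target_driver

def valuation_error(estimate, observed_cap):
    """Absolute valuation error relative to the observed market capitalisation."""
    return abs(estimate - observed_cap) / observed_cap

# Example with made-up figures (in millions):
peers_cap = [500.0, 620.0, 410.0]
peers_earnings = [50.0, 56.0, 41.0]
estimate = peer_multiple_estimate(peers_cap, peers_earnings, target_driver=45.0)
print(round(estimate, 1))                          # estimated market capitalisation
print(round(valuation_error(estimate, 430.0), 3))  # error vs an observed cap of 430
```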

Relevance: 10.00%

Abstract:

The need for strong science, technology and innovation linkages between Higher Education Institutions (HEIs) and industry is pivotal for middle-income countries in their endeavour to enhance human capital for socioeconomic development. Currently, university-industry partnerships are at an infant stage in the Sri Lankan higher education context. Technological maturity and effective communication skills are contributing factors to a strong graduate profile, and expanding internship programs, in particular for STEM disciplines, provides work experience to students that would strengthen the relevance of higher education programs. This study reports historical overviews and current trends in STEM education in Sri Lanka, with emphasis on recent technological and higher education curricular reforms. Data from the last 10 years were extracted from the higher education sector and from Ministry of Higher Education policy portfolios. Associations and trend analyses of sector growth were compared with STEM presence, mergers and predicted expansion, and results were summarised by STEM stream and discipline. It was observed that STEM provision in the Sri Lankan higher education context is growing at a slow but steady pace. Further analysis incorporating other sectors, in particular industry information, would be a useful and worthwhile exercise.