6 results for Mergers and acquisitions, analysts, consensus forecast error

at Duke University


Relevance:

100.00%

Publisher:

Abstract:

This dissertation explores the complex interactions between organizational structure and the environment. In Chapter 1, I investigate the effect of financial development on the formation of European corporate groups. Since cross-country regressions are hard to interpret in a causal sense, we exploit exogenous industry measures to investigate a specific channel through which financial development may affect group affiliation: internal capital markets. Using a comprehensive firm-level dataset on European corporate groups in 15 countries, we find that countries with less developed financial markets have a higher percentage of group affiliates in more capital-intensive industries. This relationship is more pronounced for young and small firms and for affiliates of large and diversified groups. Our findings are consistent with the view that internal capital markets may, under some conditions, be more efficient than prevailing external markets, and that this may drive group affiliation even in developed economies. In Chapter 2, I bridge current streams of innovation research to explore the interplay between R&D, external knowledge, and organizational structure: three elements of a firm's innovation strategy which we argue should logically be studied together. Using within-firm patent assignment patterns, we develop a novel measure of structure for a large sample of American firms. We find that centralized firms invest more in research and patent more per R&D dollar than decentralized firms. Both types access technology via mergers and acquisitions, but their acquisitions differ in terms of frequency, size, and integration. Consistent with our framework, their sources of value creation differ: while centralized firms derive more value from internal R&D, decentralized firms rely more on external knowledge. We discuss how these findings should stimulate more integrative work on theories of innovation. In Chapter 3, I use novel data on 1,265 newly public firms to show that innovative firms exposed to environments with lower M&A activity just after their initial public offering (IPO) adapt by engaging in fewer technological acquisitions and more internal research. However, this adaptive response becomes inertial shortly after IPO and persists well into maturity. This study advances our understanding of how the environment shapes heterogeneity and capabilities through its impact on firm structure. I discuss how my results can help bridge inertial versus adaptive perspectives in the study of organizations, by documenting an instance when the two interact.
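The abstract does not spell out how the patent-based measure of structure is constructed, so the sketch below is purely illustrative: it computes a hypothetical centralization share as the fraction of a firm's patents assigned to its single most frequent internal entity. The entity labels and the definition itself are assumptions, not the dissertation's actual measure.

```python
# Purely illustrative: one simple way a patent-based centralization measure
# could be constructed. The records, entity labels, and the "share assigned to
# the single largest assignee entity" definition are hypothetical, not the
# dissertation's measure.
from collections import Counter

def centralization_share(patent_assignees):
    """Share of a firm's patents assigned to its most frequent internal entity.

    `patent_assignees` lists one internal assignee entity per patent,
    e.g. ["HQ R&D", "HQ R&D", "Division A Labs", ...]. A value near 1 suggests
    centralized R&D; patents spread across many entities suggest decentralization.
    """
    counts = Counter(patent_assignees)
    return max(counts.values()) / sum(counts.values())

print(centralization_share(["HQ R&D", "HQ R&D", "HQ R&D", "Division A Labs"]))  # 0.75
```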

Relevance:

100.00%

Publisher:

Abstract:

Single-molecule sequencing instruments can generate multikilobase sequences with the potential to greatly improve genome and transcriptome assembly. However, the error rates of single-molecule reads are high, which has limited their use thus far to resequencing bacteria. To address this limitation, we introduce a correction algorithm and assembly strategy that uses short, high-fidelity sequences to correct the error in single-molecule sequences. We demonstrate the utility of this approach on reads generated by a PacBio RS instrument from phage, prokaryotic and eukaryotic whole genomes, including the previously unsequenced genome of the parrot Melopsittacus undulatus, as well as for RNA-Seq reads of the corn (Zea mays) transcriptome. Our long-read correction achieves >99.9% base-call accuracy, leading to substantially better assemblies than current sequencing strategies: in the best example, the median contig size was quintupled relative to high-coverage, second-generation assemblies. Greater gains are predicted if read lengths continue to increase, including the prospect of single-contig bacterial chromosome assembly.
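The correction strategy summarized above amounts to letting deep short-read coverage vote out the errors in each long read. The following sketch is not the published algorithm (which builds full overlap alignments and handles insertions and deletions); it is a minimal per-position majority-vote illustration in which the read strings and alignment offsets are invented for the example.

```python
from collections import Counter

def correct_long_read(long_read, alignments):
    """Correct a noisy long read by per-position consensus of aligned short reads.

    `alignments` is a list of (offset, short_read) pairs, where `offset` is the
    position on the long read at which the short read's first base aligns.
    Indels are ignored here; a real implementation would work on a multiple
    alignment rather than fixed offsets.
    """
    # Collect the bases contributed by every covering short read at each position.
    votes = [Counter() for _ in long_read]
    for offset, short_read in alignments:
        for i, base in enumerate(short_read):
            pos = offset + i
            if 0 <= pos < len(long_read):
                votes[pos][base] += 1

    corrected = []
    for pos, base in enumerate(long_read):
        if votes[pos]:
            # Replace the base with the majority call of the high-fidelity reads.
            corrected.append(votes[pos].most_common(1)[0][0])
        else:
            # No short-read coverage: keep the original (possibly erroneous) base.
            corrected.append(base)
    return "".join(corrected)

if __name__ == "__main__":
    noisy = "ACGTTAGXCAGT"  # 'X' marks a sequencing error in the long read
    short_reads = [(4, "TAGGCAG"), (5, "AGGCAGT"), (3, "TTAGGC")]
    print(correct_long_read(noisy, short_reads))  # -> ACGTTAGGCAGT
```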

Relevance:

100.00%

Publisher:

Abstract:

We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user. © 2010 The American Physical Society.
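For reference, the trace distance and the bath classification invoked in this abstract have standard definitions, stated below. The specific logical-qubit evolution operator derived in the paper is not reproduced here, and the spectral-density form with cutoff frequency ω_c is the conventional one, not a quantity quoted from the paper.

```latex
% Standard definitions (not taken from the paper itself).
% Trace distance between the real and ideal logical-qubit states:
D\bigl(\rho(t),\rho_{\mathrm{ideal}}(t)\bigr)
   = \tfrac{1}{2}\,\mathrm{Tr}\left|\rho(t)-\rho_{\mathrm{ideal}}(t)\right|

% Conventional bath spectral density, classified by the exponent s:
J(\omega) \propto \omega^{s}\, e^{-\omega/\omega_{c}},
\qquad
\begin{cases}
  s < 1, & \text{sub-Ohmic} \\
  s = 1, & \text{Ohmic} \\
  s > 1, & \text{super-Ohmic}
\end{cases}
```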

Relevance:

100.00%

Publisher:

Abstract:

To maintain a strict balance between demand and supply in the US power systems, the Independent System Operators (ISOs) schedule power plants and determine electricity prices using a market clearing model. For each time period and power plant, this model determines the times of startup and shutdown, the amount of power produced, and the provision of spinning and non-spinning generation reserves. Such a deterministic optimization model takes as input the characteristics of all generating units, such as installed capacity, ramp rates, minimum up and down time requirements, and marginal production costs, as well as the forecast of intermittent energy such as wind and solar, along with the minimum reserve requirement of the whole system. This reserve requirement is determined based on the likelihood of outages on the supply side and on the level of forecast error in demand and intermittent generation. With increased installed capacity of intermittent renewable energy, determining the appropriate level of reserve requirements has become harder. Stochastic market clearing models have been proposed as an alternative to deterministic market clearing models. Rather than taking fixed reserve targets as an input, stochastic market clearing models consider different scenarios of wind power and determine the reserve schedule as an output. Using a scaled version of the power generation system of PJM, a regional transmission organization (RTO) that coordinates the movement of wholesale electricity in all or parts of 13 states and the District of Columbia, and wind scenarios generated from BPA (Bonneville Power Administration) data, this paper compares the performance of stochastic and deterministic models in market clearing. The two models are compared in their ability to contribute to the affordability, reliability and sustainability of the electricity system, measured in terms of total operational costs, load shedding and air emissions. The process of building and testing the models indicates that a fair comparison is difficult to obtain because of the multi-dimensional performance metrics considered here and the difficulty of setting the models' parameters in a way that does not advantage or disadvantage either modeling framework. Along these lines, this study explores the effect that model assumptions such as reserve requirements, value of lost load (VOLL) and wind spillage costs have on the comparison of the performance of stochastic versus deterministic market clearing models.
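To make the deterministic formulation concrete, here is a toy market clearing model in Python using the open-source PuLP modeler: binary commitment decisions, an energy balance, and a fixed percentage reserve requirement, which is exactly the input a stochastic model would replace with wind scenarios. The three generators, the demand profile, and the 10% reserve fraction are invented for illustration; ramp rates, minimum up/down times, and startup costs from the real ISO model are omitted.

```python
# Minimal deterministic market-clearing sketch (illustrative data only).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

gens = {  # name: (capacity MW, marginal cost $/MWh) -- hypothetical fleet
    "coal": (300, 25.0),
    "gas": (200, 40.0),
    "peaker": (100, 90.0),
}
demand = [350, 420, 480, 400]   # MW per hour (hypothetical)
reserve_frac = 0.10             # fixed reserve target: 10% of demand
T = range(len(demand))

prob = LpProblem("deterministic_market_clearing", LpMinimize)
u = {(g, t): LpVariable(f"on_{g}_{t}", cat=LpBinary) for g in gens for t in T}
p = {(g, t): LpVariable(f"p_{g}_{t}", lowBound=0) for g in gens for t in T}
r = {(g, t): LpVariable(f"r_{g}_{t}", lowBound=0) for g in gens for t in T}

# Objective: total energy cost (startup costs omitted in this sketch).
prob += lpSum(gens[g][1] * p[g, t] for g in gens for t in T)

for t in T:
    # Power balance: scheduled generation must meet demand exactly.
    prob += lpSum(p[g, t] for g in gens) == demand[t]
    # Fixed reserve requirement, the input a stochastic model replaces with scenarios.
    prob += lpSum(r[g, t] for g in gens) >= reserve_frac * demand[t]
    for g in gens:
        # Energy plus reserve cannot exceed the capacity of committed units.
        prob += p[g, t] + r[g, t] <= gens[g][0] * u[g, t]

prob.solve(PULP_CBC_CMD(msg=False))
for t in T:
    print(f"hour {t}:", {g: round(p[g, t].value(), 1) for g in gens})
```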

Relevance:

100.00%

Publisher:

Abstract:

The objective of spatial downscaling strategies is to increase the information content of coarse datasets at smaller scales. In the case of quantitative precipitation estimation (QPE) for hydrological applications, the goal is to close the scale gap between the spatial resolution of coarse datasets (e.g., gridded satellite precipitation products at resolution L × L) and the high resolution (l × l; L ≫ l) necessary to capture the spatial features that determine the spatial variability of water flows and water stores in the landscape. In essence, the downscaling process consists of weaving subgrid-scale heterogeneity over a desired range of wavelengths into the original field. The defining question is, which properties, statistical and otherwise, of the target field (the known observable at the desired spatial resolution) should be matched, with the caveat that downscaling methods should be as general as possible and therefore ideally free of case-specific constraints and/or calibration requirements? Here, the attention is focused on two simple fractal downscaling methods, using iterated function systems (IFS) and fractional Brownian surfaces (FBS), that meet this requirement. The two methods were applied to spatially disaggregate 27 summertime convective storms in the central United States during 2007 at three consecutive times (1800, 2100, and 0000 UTC, thus 81 fields overall) from the Tropical Rainfall Measuring Mission (TRMM) version 6 (V6) 3B42 precipitation product (~25-km grid spacing) to the same resolution as the NCEP stage IV products (~4-km grid spacing). Results from bilinear interpolation are used as the control. A fundamental distinction between IFS and FBS is that the latter implies a distribution of downscaled fields and thus an ensemble solution, whereas the former provides a single solution. The downscaling effectiveness is assessed using fractal measures (the spectral exponent β, fractal dimension D, Hurst coefficient H, and roughness amplitude R) and traditional operational skill scores [false alarm rate (FR), probability of detection (PD), threat score (TS), and Heidke skill score (HSS)], as well as bias and the root-mean-square error (RMSE). The results show that both IFS and FBS fractal interpolation perform well with regard to operational skill scores, and they meet the additional requirement of generating structurally consistent fields. Furthermore, confidence intervals can be directly generated from the FBS ensemble. The results were used to diagnose errors relevant for hydrometeorological applications, in particular a spatial displacement with a characteristic length of at least 50 km (2500 km2) in the location of peak rainfall intensities for the cases studied. © 2010 American Meteorological Society.
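As a concrete illustration of one ingredient of the FBS method, the sketch below synthesizes a fractional Brownian surface by spectral (Fourier) filtering of Gaussian white noise. The grid size and Hurst coefficient are arbitrary choices, and the conditioning on the coarse (~25 km) precipitation field that the published method performs is not included.

```python
# Minimal sketch: generating a fractional Brownian surface by spectral synthesis.
import numpy as np

def fractional_brownian_surface(n=64, hurst=0.7, seed=0):
    """Return an n x n fractional Brownian surface with Hurst exponent `hurst`.

    White Gaussian noise is filtered in Fourier space with a power-law
    amplitude k^-(hurst + 1), i.e., a 2D power spectrum S(k) ~ k^-(2*hurst + 2).
    """
    rng = np.random.default_rng(seed)
    # Complex white noise in Fourier space.
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    k[0, 0] = np.inf                     # suppress the zero-frequency (mean) mode
    amplitude = k ** (-(hurst + 1.0))    # power-law filter
    surface = np.fft.ifft2(noise * amplitude).real
    return (surface - surface.mean()) / surface.std()

if __name__ == "__main__":
    field = fractional_brownian_surface(n=128, hurst=0.7)
    print(field.shape, round(field.std(), 2))  # (128, 128) 1.0
```

Because each random seed yields a different surface, varying the seed produces the kind of ensemble of downscaled fields from which the FBS confidence intervals mentioned above can be drawn.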

Relevance:

60.00%

Publisher:

Abstract:

© 2012 by Oxford University Press. All rights reserved. This article considers the determinants and effects of M&As in the pharmaceutical industry, with a particular focus on innovation and R&D productivity. As is the case in other industries, mergers in the pharmaceutical field are driven by a variety of company motives and conditions. These include defensive responses to industry shocks as well as more proactive rationales, such as economies of scale and scope, access to new technologies, and expansion into new markets. It is important to take account of firms' characteristics and motivations in evaluating merger performance, rather than applying a single broad brush. Research to date on pharmaceuticals suggests considerable variation in both motivation and outcomes. From an antitrust policy standpoint, the larger horizontal mergers in pharmaceuticals have faced few challenges from regulatory authorities in the United States and the European Union, given the option to spin off competing therapeutic products to other drug firms.