950 results for Scaling Strategies
Abstract:
The Pax Americana and the grand strategy of hegemony (or “Primacy”) that underpins it may be becoming unsustainable. Particularly in the wake of exhausting wars, the Global Financial Crisis, and the shift of wealth from West to East, it may no longer be possible or prudent for the United States to act as the unipolar sheriff or guardian of a world order. But how viable are the alternatives, and what difficulties will these alternatives entail in their design and execution? This monograph offers a sympathetic but critical assessment of the alternative U.S. National Security Strategies of “retrenchment” proposed by critics of American diplomacy. In these strategies, the United States would anticipate the coming of a more multipolar world and organize its behavior around the dual principles of “concert” and “balance,” seeking a collaborative relationship with other great powers while being prepared to counterbalance any hostile aggressor that threatens world order. The proponents of such strategies argue that by scaling back its global military presence and its commitments, the United States can trade prestige for security, shift burdens, and gain a freer hand. To support this theory, they often look to the 19th-century Concert of Europe as a model of a successful security regime and to general theories about the natural balancing behavior of states. This monograph examines this precedent and measures its usefulness for contemporary statecraft, identifying how great power concerts are sustained and how they break down. The project also applies competing theories of how states might behave if world politics are in transition: Will they balance, bandwagon, or hedge? This demonstrates the multiple possible futures that could shape, and be shaped by, a new strategy. A new strategy based on an acceptance of multipolarity and the limits of power is prudent. There is scope for such a shift. The convergence of several trends—including transnational problems needing collaborative efforts, the military advantages of defenders, the reluctance of states to engage in unbridled competition, and hegemony fatigue among the American people—means that an opportunity exists internationally and at home for a shift to a new strategy. But a Concert-Balance strategy will still need to deal with several potential dilemmas. These include the difficulty of reconciling competitive balancing with cooperative concerts, the limits of balancing without a forward-reaching onshore military capability, possible unanticipated consequences such as a rise in regional power competition or the emergence of blocs (such as a Chinese East Asia or an Iranian Gulf), and the challenge of sustaining domestic political support for a strategy that voluntarily abdicates world leadership. These difficulties can be mitigated, but meeting them will require pragmatic and gradual implementation as well as elegant theorizing, and care to avoid swapping one ironclad, doctrinaire grand strategy for another.
Abstract:
The complete details of our calculation of the NLO QCD corrections to heavy flavor photo- and hadroproduction with longitudinally polarized initial states are presented. The main motivation for investigating these processes is the determination of the polarized gluon density at the COMPASS and RHIC experiments, respectively, in the near future. All methods used in the computation are extensively documented, providing a self-contained introduction to this type of calculation. Some of the tools employed, e.g., the series expansion of hypergeometric functions, may also be of general interest. The relevant parton level results are collected and plotted in the form of scaling functions. However, the simplification of the obtained gluon-gluon virtual contributions has not been completed yet. Thus, NLO phenomenological predictions are only given in the case of photoproduction. The theoretical uncertainties of these predictions, in particular with respect to the heavy quark mass, are carefully considered. It is also shown that transverse momentum cuts can considerably enhance the measured production asymmetries. Finally, unpolarized heavy quark production is reviewed in order to derive conditions for a successful interpretation of future spin-dependent experimental data.
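As a purely illustrative aside on the hypergeometric-expansion tool mentioned above, the following is a minimal numerical sketch, not the analytic method used in the thesis: it Taylor-expands a Gauss hypergeometric function in a small parameter eps with mpmath; the particular function and argument are assumptions chosen only for demonstration.

```python
import mpmath as mp

# Illustrative only: numerically Taylor-expand 2F1(1, -eps; 1 - eps; z) in the
# small parameter eps at fixed z, as a stand-in for the kind of series
# expansion of hypergeometric functions referred to in the abstract.
mp.mp.dps = 30                      # work at high precision
z = mp.mpf("0.3")                   # arbitrary argument inside the unit circle
f = lambda eps: mp.hyp2f1(1, -eps, 1 - eps, z)
coeffs = mp.taylor(f, 0, 2)         # coefficients of eps^0, eps^1, eps^2
print(coeffs)
```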
Abstract:
We report numerical results from a study of balance dynamics using a simple model of atmospheric motion that is designed to help address the question of why balance dynamics is so stable. The non-autonomous Hamiltonian model has a chaotic slow degree of freedom (representing vortical modes) coupled to one or two linear fast oscillators (representing inertia-gravity waves). The system is said to be balanced when the fast and slow degrees of freedom are separated. We find adiabatic invariants that drift slowly in time. This drift is consistent with a random-walk behaviour at a speed which qualitatively scales, even for modest time scale separations, as the upper bound given by Neishtadt’s and Nekhoroshev’s theorems. Moreover, a similar type of scaling is observed for solutions obtained using a singular perturbation (‘slaving’) technique in resonant cases where Nekhoroshev’s theorem does not apply. We present evidence that the smaller Lyapunov exponents of the system scale exponentially as well. The results suggest that the observed stability of nearly-slow motion is a consequence of the approximate adiabatic invariance of the fast motion.
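To make concrete what "scales as the exponential upper bound" means in practice, here is a hedged sketch of the kind of diagnostic one might use: fitting a drift rate r(eps) to a Nekhoroshev-type form A exp(-c/eps). All numbers are invented placeholders, not results from the study.

```python
import numpy as np

# Hedged sketch: test whether a measured drift rate r(eps) of an adiabatic
# invariant follows an exponential bound r ~ A * exp(-c / eps) by fitting
# log r against 1 / eps.  The arrays below are illustrative placeholders,
# not data from the study.
eps = np.array([0.05, 0.08, 0.10, 0.15, 0.20])      # time-scale separation
rate = np.array([1e-9, 3e-6, 1e-4, 5e-3, 5e-2])     # invented drift rates
slope, intercept = np.polyfit(1.0 / eps, np.log(rate), 1)
print(f"fitted c = {-slope:.2f}, prefactor A = {np.exp(intercept):.2e}")
```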
Abstract:
Recent aircraft measurements, primarily in the extratropics, of the horizontal variance of nitrous oxide (N2O) and ozone (O3) in the middle stratosphere indicate that horizontal spectra of the tracer variance scale nearly as k−2, where k is the spatial wavenumber along the aircraft flight track [Strahan and Mahlman, 1994; Bacmeister et al., 1996]. This spectral scaling has been regarded as inconsistent with the accepted picture of stratospheric tracer motion; large-scale quasi-two-dimensional tracer advection typically yields a k−1 scaling (i.e., the classical Batchelor spectrum). In this paper it is argued that the nearly k−2 scaling seen in the measurements is a natural outcome of quasi-two-dimensional filamentation of the polar vortex edge. The accepted picture of stratospheric tracer motion can thus be retained: no additional physical processes are needed to account for deviations from the Batchelor spectrum. Our argument is based on the finite lifetime of tracer filaments and on the “singularity spectrum” associated with a one-dimensional field composed of randomly spaced jumps in concentration.
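The closing point, that a one-dimensional field of randomly spaced concentration jumps has a near k^-2 spectrum, can be illustrated with a short synthetic sketch. The data are entirely invented; only the standard FFT power-spectrum estimate is assumed.

```python
import numpy as np

# Synthetic illustration: a 1-D field built from randomly spaced jumps in
# concentration (piecewise constant) has a power spectrum close to k^-2,
# the mechanism the abstract invokes.  Not the paper's data.
rng = np.random.default_rng(1)
n = 2**16
jump_here = rng.random(n) < 1e-3                        # sparse random jump locations
field = np.cumsum(rng.standard_normal(n) * jump_here)   # piecewise-constant series
spec = np.abs(np.fft.rfft(field - field.mean()))**2
k = np.arange(len(spec))
sel = slice(50, 2000)                                    # fit above the spectral crossover
slope, _ = np.polyfit(np.log(k[sel]), np.log(spec[sel]), 1)
print(f"spectral slope ~ {slope:.2f} (expect roughly -2)")
```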
Abstract:
A reduced dynamical model is derived which describes the interaction of weak inertia–gravity waves with nonlinear vortical motion in the context of rotating shallow-water flow. The formal scaling assumptions are (i) that there is a separation in timescales between the vortical motion and the inertia–gravity waves, and (ii) that the divergence is weak compared to the vorticity. The model is Hamiltonian, and possesses conservation laws analogous to those in the shallow-water equations. Unlike the shallow-water equations, the energy invariant is quadratic. Nonlinear stability theorems are derived for this system, and its linear eigenvalue properties are investigated in the context of some simple basic flows.
Abstract:
The concept of slow vortical dynamics and its role in theoretical understanding is central to geophysical fluid dynamics. It leads, for example, to “potential vorticity thinking” (Hoskins et al. 1985). Mathematically, one imagines an invariant manifold within the phase space of solutions, called the slow manifold (Leith 1980; Lorenz 1980), to which the dynamics are constrained. Whether this slow manifold truly exists has been a major subject of inquiry over the past 20 years. It has become clear that an exact slow manifold is an exceptional case, restricted to steady or perhaps temporally periodic flows (Warn 1997). Thus the concept of a “fuzzy slow manifold” (Warn and Ménard 1986) has been suggested. The idea is that nearly slow dynamics will occur in a stochastic layer about the putative slow manifold. The natural question then is, how thick is this layer? In a recent paper, Ford et al. (2000) argue that Lighthill emission—the spontaneous emission of freely propagating acoustic waves by unsteady vortical flows—is applicable to the problem of balance, with the Mach number Ma replaced by the Froude number F, and that it is a fundamental mechanism for this fuzziness. They consider the rotating shallow-water equations and find emission of inertia–gravity waves at O(F²). This is rather surprising at first sight, because several studies of balanced dynamics with the rotating shallow-water equations have gone beyond second order in F, and found only an exponentially small unbalanced component (Warn and Ménard 1986; Lorenz and Krishnamurthy 1987; Bokhove and Shepherd 1996; Wirosoetisno and Shepherd 2000). We have no technical objection to the analysis of Ford et al. (2000), but wish to point out that it depends crucially on R ≳ 1, where R is the Rossby number. This condition requires the ratio of the characteristic length scale of the flow L to the Rossby deformation radius LR to go to zero in the limit F → 0. This is the low Froude number scaling of Charney (1963), which, while originally designed for the Tropics, has been argued to be also relevant to mesoscale dynamics (Riley et al. 1981). If L/LR is fixed, however, then F → 0 implies R → 0, which is the standard quasigeostrophic scaling of Charney (1948; see, e.g., Pedlosky 1987). In this limit there is reason to expect the fuzziness of the slow manifold to be “exponentially thin,” and balance to be much more accurate than is consistent with (algebraic) Lighthill emission.
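To make the scaling distinction concrete, here is a minimal sketch using the standard shallow-water definitions (our own illustration, not part of the comment itself): with R = U/(fL), F = U/sqrt(gH) and LR = sqrt(gH)/f, one has F = R (L/LR), so fixing L/LR and letting F → 0 forces R → 0, whereas keeping R of order one as F → 0 requires L/LR → 0.

```python
# Minimal sketch under the standard shallow-water definitions (an illustration,
# not taken from the comment itself):
#   Rossby number       R  = U / (f L)
#   Froude number       F  = U / sqrt(g H)
#   deformation radius  LR = sqrt(g H) / f    =>   F = R * (L / LR)
def froude(R, L_over_LR):
    return R * L_over_LR

# Fixed L/LR: shrinking F means shrinking R (quasigeostrophic limit).
print([froude(R, 1.0) for R in (1.0, 0.1, 0.01)])
# Fixed R ~ 1: shrinking F requires L/LR -> 0 (Charney's low-Froude scaling).
print([froude(1.0, r) for r in (1.0, 0.1, 0.01)])
```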
Abstract:
An eddy-resolving numerical model of a zonal flow, meant to resemble the Antarctic Circumpolar Current, is described and analyzed using the framework of J. Marshall and T. Radko. In addition to wind and buoyancy forcing at the surface, the model contains a sponge layer at the northern boundary that permits a residual meridional overturning circulation (MOC) to exist at depth. The strength of the residual MOC is diagnosed for different strengths of surface wind stress. It is found that the eddy circulation largely compensates for the changes in Ekman circulation. The extent of the compensation and thus the sensitivity of the MOC to the winds depend on the surface boundary condition. A fixed-heat-flux surface boundary severely limits the ability of the MOC to change. An interactive heat flux leads to greater sensitivity. To explain the MOC sensitivity to the wind strength under the interactive heat flux, transformed Eulerian-mean theory is applied, in which the eddy diffusivity plays a central role in determining the eddy response. A scaling theory for the eddy diffusivity, based on the mechanical energy balance, is developed and tested; the average magnitude of the diffusivity is found to be proportional to the square root of the wind stress. The MOC sensitivity to the winds based on this scaling is compared with the true sensitivity diagnosed from the experiments.
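A hedged sketch of the kind of check the scaling theory invites: fit a power law K = a τ^b to eddy diffusivities diagnosed at several wind stresses and compare the exponent with 0.5. The numbers below are placeholders, not values from the experiments.

```python
import numpy as np

# Hedged sketch: fit K = a * tau**b to eddy diffusivities K diagnosed at
# several wind stresses tau, to test the K ~ sqrt(tau) scaling described in
# the abstract.  All numbers are illustrative placeholders.
tau = np.array([0.05, 0.1, 0.2, 0.4])          # wind stress, N m-2
K = np.array([450.0, 640.0, 900.0, 1280.0])    # eddy diffusivity, m2 s-1
b, log_a = np.polyfit(np.log(tau), np.log(K), 1)
print(f"fitted exponent b = {b:.2f} (the scaling theory predicts 0.5)")
```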
Abstract:
Geophysical fluid models often support both fast and slow motions. As the dynamics are often dominated by the slow motions, it is desirable to filter out the fast motions by constructing balance models. An example is the quasi-geostrophic (QG) model, which is used widely in meteorology and oceanography for theoretical studies, in addition to practical applications such as model initialization and data assimilation. Although the QG model works quite well in the mid-latitudes, its usefulness diminishes as one approaches the equator. Thus far, attempts to derive similar balance models for the tropics have not been entirely successful, as the models generally filter out Kelvin waves, which contribute significantly to tropical low-frequency variability. There is much theoretical interest in the dynamics of planetary-scale Kelvin waves, especially for atmospheric and oceanic data assimilation where observations are generally only of the mass field and thus do not constrain the wind field without some kind of diagnostic balance relation. As a result, estimates of Kelvin wave amplitudes can be poor. Our goal is to find a balance model that includes Kelvin waves for planetary-scale motions. Using asymptotic methods, we derive a balance model for the weakly nonlinear equatorial shallow-water equations. Specifically, we adopt the ‘slaving’ method proposed by Warn et al. (Q. J. R. Meteorol. Soc., vol. 121, 1995, pp. 723–739), which avoids secular terms in the expansion and thus can in principle be carried out to any order. Unlike previous approaches, our expansion is based on a long-wave scaling, and the slow dynamics is described using the height field instead of potential vorticity. The leading-order model is equivalent to the truncated long-wave model considered previously (e.g. Heckley & Gill, Q. J. R. Meteorol. Soc., vol. 110, 1984, pp. 203–217), which retains Kelvin waves in addition to equatorial Rossby waves. Our method allows for the derivation of higher-order models which significantly improve the representation of Rossby waves in the isotropic limit. In addition, the ‘slaving’ method is applicable even when the weakly nonlinear assumption is relaxed, and the resulting nonlinear model encompasses the weakly nonlinear model. We also demonstrate that the method can be applied to more realistic stratified models, such as the Boussinesq model.
Abstract:
The clinical skills of medical professionals rely strongly on the sense of touch, combined with anatomical and diagnostic knowledge. Haptic exploratory procedures allow the expert to detect anomalies via gross and fine palpation, squeezing, and contour following. Haptic feedback is also key to medical interventions, for example when an anaesthetist inserts an epidural needle, a surgeon makes an incision, a dental surgeon drills into a carious lesion, or a veterinarian sutures a wound. Yet, current trends in medical technology and training methods involve less haptic feedback to clinicians and trainees. For example, minimally invasive surgery removes the direct contact between the patient and clinician that gives rise to natural haptic feedback, and furthermore introduces scaling and rotational transforms that confuse the relationship between movements of the hand and the surgical site. Similarly, it is thought that computer-based medical simulation and training systems require high-resolution and realistic haptic feedback to the trainee for significant training transfer to occur. The science and technology of haptics thus has great potential to affect the performance of medical procedures and learning of clinical skills. This special section is about understanding
Abstract:
An analysis method for diffusion tensor (DT) magnetic resonance imaging data is described, which, contrary to the standard method (multivariate fitting), does not require a specific functional model for diffusion-weighted (DW) signals. The method uses principal component analysis (PCA) under the assumption of a single fibre per pixel. PCA and the standard method were compared using simulations and human brain data. The two methods were equivalent in determining fibre orientation. PCA-derived fractional anisotropy and DT relative anisotropy had similar signal-to-noise ratio (SNR) and dependence on fibre shape. PCA-derived mean diffusivity had similar SNR to the respective DT scalar, and it depended on fibre anisotropy. Appropriate scaling of the PCA measures resulted in very good agreement between PCA and DT maps. In conclusion, the assumption of a specific functional model for DW signals is not necessary for characterization of anisotropic diffusion in a single fibre.
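As a rough illustration of the PCA idea only (not the paper's exact algorithm or data), the sketch below applies PCA to a small matrix of diffusion-weighted signals from pixels assumed to share a single fibre; the synthetic data and neighbourhood size are assumptions made for demonstration.

```python
import numpy as np

# Rough illustration of PCA applied to diffusion-weighted (DW) signals from a
# small set of pixels assumed to contain a single fibre.  Synthetic placeholder
# data; not the algorithmic details or data of the paper.
rng = np.random.default_rng(0)
n_pixels, n_directions = 25, 12
signals = rng.lognormal(mean=5.0, sigma=0.1, size=(n_pixels, n_directions))

centered = signals - signals.mean(axis=0)
cov = centered.T @ centered / (n_pixels - 1)     # covariance over gradient directions
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
principal_axis = eigvecs[:, -1]                  # dominant pattern of signal variation
explained = eigvals[-1] / eigvals.sum()
print(f"fraction of variance along the leading component: {explained:.2f}")
```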
Abstract:
Background: Expression microarrays are increasingly used to obtain large scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log(2) units (6% of mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
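For the multidimensional scaling step, a minimal self-contained sketch of classical MDS on a sample-by-sample distance matrix is given below; the expression matrix is random placeholder data and the routine is generic, not the R function described in the study.

```python
import numpy as np

# Minimal classical multidimensional scaling (MDS) sketch: project samples into
# two dimensions from their pairwise distances, the visualisation step used to
# reveal structure in the normalised array data.  Placeholder data; a generic
# routine, not the study's R pipeline.
rng = np.random.default_rng(2)
expr = rng.standard_normal((10, 500))                       # samples x genes (fake)
d2 = ((expr[:, None, :] - expr[None, :, :]) ** 2).sum(-1)   # squared distances
n = d2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                         # centering matrix
B = -0.5 * J @ d2 @ J                                       # double-centred Gram matrix
w, v = np.linalg.eigh(B)                                    # ascending eigenvalues
coords = v[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))       # top-2 MDS coordinates
print(np.round(coords, 2))
```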
Abstract:
We extend recent work that included the effect of pressure forces to derive the precession rate of eccentric accretion discs in cataclysmic variables to the case of double degenerate systems. We find that the logical scaling of the pressure force in such systems results in predictions of unrealistically high primary masses. Using the prototype AM CVn as a calibrator for the magnitude of the effect, we find that there is no scaling that applies consistently to all the systems in the class. We discuss the reasons for the lack of a superhump period to mass ratio relationship analogous to that known for SU UMa systems, and suggest that this is because these secondaries do not have a single-valued mass-radius relationship. We highlight the unreliability of mass ratios derived by applying the SU UMa expression to the AM CVn binaries.
Abstract:
The complexity of current and emerging high performance architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven performance modelling approach is outlined that is appropriate for modern multicore architectures. The approach is demonstrated by constructing a model of a simple shallow water code on a Cray XE6 system, from application-specific benchmarks that illustrate precisely how architectural characteristics impact performance. The model is found to recreate observed scaling behaviour up to 16K cores, and is used to predict optimal rank-core affinity strategies, exemplifying the type of problem such a model can be used for.
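To give a sense of what a benchmark-driven performance model can look like in practice, here is a hedged sketch: a per-core compute cost calibrated by a serial benchmark is combined with a fixed per-step communication cost to predict strong scaling. All constants are invented, not measurements from the Cray XE6 study.

```python
# Hedged sketch of a benchmark-driven performance model: a per-step runtime is
# predicted from a compute term (calibrated by a serial benchmark) plus a halo
# exchange term (calibrated by a network benchmark).  All constants below are
# invented placeholders, not measurements from the Cray XE6 study.
def predicted_step_time(ncores, grid_points=1e9, t_point=5e-9,
                        halo_bytes=8e5, latency=2e-6, bandwidth=5e9):
    compute = grid_points * t_point / ncores     # work divides across cores
    comm = latency + halo_bytes / bandwidth      # per-step halo exchange cost
    return compute + comm

for n in (16, 256, 4096, 16384):
    print(f"{n:6d} cores: {predicted_step_time(n) * 1e3:.3f} ms per step")
```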
Abstract:
To optimise the placement of small wind turbines in urban areas a detailed understanding of the spatial variability of the wind resource is required. At present, due to a lack of observations, the NOABL wind speed database is frequently used to estimate the wind resource at a potential site. However, recent work has shown that this tends to overestimate the wind speed in urban areas. This paper suggests a method for adjusting the predictions of the NOABL in urban areas by considering the impact of the underlying surface on a neighbourhood scale, in which the nature of the surface is characterised at a 1 km² resolution using an urban morphology database. The model was then used to estimate the variability of the annual mean wind speed across Greater London at a height typical of current small wind turbine installations. Initial validation of the results suggests that the predicted wind speeds are considerably more accurate than the NOABL values. The derived wind map therefore currently provides the best opportunity to identify the neighbourhoods in Greater London at which small wind turbines yield their highest energy production. The model does not consider street-scale processes; however, previously derived scaling factors can be applied to relate the neighbourhood wind speed to a value at a specific rooftop site. The results showed that the wind speed predicted across London is relatively low, exceeding 4 m s-1 at only 27% of the neighbourhoods in the city. Of these sites, less than 10% are within 10 km of the city centre, with the majority over 20 km from the city centre. Consequently, it is predicted that small wind turbines tend to perform better towards the outskirts of the city; therefore, for cities which fit the Burgess concentric ring model, such as Greater London, ‘distance from city centre’ is a useful parameter for siting small wind turbines. However, there are a number of neighbourhoods close to the city centre at which the wind speed is relatively high, and these sites can only be identified with a detailed representation of the urban surface, such as that developed in this study.
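One common way such a neighbourhood-to-rooftop adjustment is expressed is through the neutral logarithmic wind profile with a roughness length and displacement height derived from urban morphology; the sketch below is a generic illustration with invented values, not the paper's model or scaling factors.

```python
import numpy as np

# Generic illustration (not the paper's model): adjust a reference wind speed to
# another height with the neutral logarithmic wind profile, using a roughness
# length z0 and displacement height d of the kind an urban morphology database
# would supply.  All values below are invented.
def log_law_speed(u_ref, z_ref, z, z0, d):
    return u_ref * np.log((z - d) / z0) / np.log((z_ref - d) / z0)

# e.g. scale a 25 m neighbourhood estimate down to a 12 m rooftop site
print(f"{log_law_speed(u_ref=5.0, z_ref=25.0, z=12.0, z0=0.8, d=4.0):.2f} m s-1")
```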
Abstract:
Black carbon aerosol plays a unique and important role in Earth’s climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr-1 in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m-2 with 90% uncertainty bounds of (+0.08, +1.27) W m-2. Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m-2. Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m-2 with 90% uncertainty bounds of +0.17 to +2.1 W m-2. Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m-2, is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (-0.50 to +1.08) W m-2 during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (-0.06 W m-2 with 90% uncertainty bounds of -1.45 to +1.29 W m-2). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon.
In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility for near-term mitigation. This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.