122 results for Constraint qualifications
Abstract:
We present molecular dynamics (MD) and slip-springs model simulations of the chain segmental dynamics in entangled linear polymer melts. The time-dependent behavior of the segmental orientation autocorrelation functions and mean-square segmental displacements is analyzed for both flexible and semiflexible chains, with particular attention paid to the scaling relations among these dynamic quantities. Effective combination of the two simulation methods at different coarse-graining levels allows us to explore the chain dynamics for chain lengths ranging from Z ≈ 2 to 90 entanglements. For a given chain length of Z ≈ 15, the time scales accessed span more than 10 decades, covering all of the interesting relaxation regimes. The obtained time dependence of the monomer mean-square displacement, g1(t), is in good agreement with the tube theory predictions. Results on the first- and second-order segmental orientation autocorrelation functions, C1(t) and C2(t), demonstrate a clear power-law relationship C2(t) ∝ C1(t)^m with m = 3, 2, and 1 in the initial, free Rouse, and entangled (constrained Rouse) regimes, respectively. The return-to-origin hypothesis, which leads to inverse proportionality between the segmental orientation autocorrelation functions and g1(t) in the entangled regime, is convincingly verified by the simulation result C1(t) ∝ g1(t)^−1 ∝ t^−1/4 in the constrained Rouse regime, where for well-entangled chains both C1(t) and g1(t) are rather insensitive to constraint release effects. However, the second-order correlation function, C2(t), shows much stronger sensitivity to constraint release effects and experiences a protracted crossover from the free Rouse to the entangled regime. This crossover region extends at least one decade longer in time than that of C1(t). The predicted time-scaling behavior C2(t) ∝ t^−1/4 is observed in slip-springs simulations only at a chain length of 90 entanglements, whereas shorter chains show higher scaling exponents. The reported simulation work can be used to interpret observations from NMR experiments.
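For reference, the orientation autocorrelation functions discussed above are conventionally defined through the first and second Legendre polynomials of the angle between a unit segment vector at time t and at time 0: C1(t) = <u(t)·u(0)> and C2(t) = <(3[u(t)·u(0)]^2 − 1)/2>. The sketch below shows how these could be estimated from trajectory data; the array layout and the averaging over time origins are assumptions for illustration, not the authors' code.

    import numpy as np

    def orientation_acfs(u, lags):
        """Estimate C1(t) and C2(t) from unit segment vectors.

        u    : array of shape (n_frames, n_segments, 3), unit vectors per frame
        lags : iterable of integer frame lags
        Averages run over segments and over time origins (assumes stationarity).
        """
        n_frames = u.shape[0]
        c1, c2 = [], []
        for lag in lags:
            # dot products u(t0) . u(t0 + lag) for every time origin t0 and segment
            dots = np.einsum('tsk,tsk->ts', u[:n_frames - lag], u[lag:])
            c1.append(dots.mean())                     # <P1(cos theta)>
            c2.append((1.5 * dots ** 2 - 0.5).mean())  # <P2(cos theta)>
        return np.array(c1), np.array(c2)

With these definitions, the reported scaling C2(t) ∝ C1(t)^m can be checked directly by plotting log C2 against log C1 within each regime.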
Abstract:
The Allied bombing of France between 1940 and 1945 has received comparatively little attention from historians, although the civilian death toll, at about 60,000, was comparable to that of German raids on the UK. This article considers how Allied, and particularly British, bombing policy towards France was developed, what its objectives were and how French concerns about attacks on their territory were (or were not) addressed. It argues that while British policymakers were sensitive to the delicate political implications of attacking France, perceived military necessities tended to trump political misgivings; that Vichy, before November 1942, was a stronger constraint on Allied bombing than the Free French were at any time; and that the bombing programme largely escaped political control from May 1944.
Abstract:
Successful quantitative precipitation forecasts under convectively unstable conditions depend on the ability of the model to capture the location, timing and intensity of convection. Ensemble forecasts of two mesoscale convective outbreaks over the UK are examined with a view to understanding the nature and extent of their predictability. In addition to a control forecast, twelve ensemble members are run for each case with the same boundary conditions but with perturbations added to the boundary layer. The intention is to introduce perturbations of appropriate magnitude and scale so that the large-scale behaviour of the simulations is not changed. In one case, convection was in statistical equilibrium with the large-scale flow. This placed a constraint on the total precipitation, but the location and intensity of individual storms varied. In contrast, the other case was characterised by a large-scale capping inversion. As a result, the location of individual storms was fixed, but their intensities and the total precipitation varied strongly. The ensembles show case-to-case variability in the nature of the predictability of convection in a mesoscale model, and provide additional useful information for quantitative precipitation forecasting.
Abstract:
During the 20th century, solar activity increased in magnitude to a so-called grand maximum. It is probable that this high level of solar activity is at or near its end. It is of great interest whether any future reduction in solar activity could have a significant impact on climate that could partially offset the projected anthropogenic warming. Observations and reconstructions of solar activity over the last 9000 years are used as a constraint on possible future variations to produce probability distributions of total solar irradiance over the next 100 years. Using this information, with a simple climate model, we present results of the potential implications for future projections of climate on decadal to multidecadal timescales. Using one of the most recent reconstructions of historic total solar irradiance, the likely reduction in the warming by 2100 is found to be between 0.06 and 0.1 K, a very small fraction of the projected anthropogenic warming. However, if past total solar irradiance variations are larger and climate models substantially underestimate the response to solar variations, then there is a potential for a reduction in solar activity to mitigate a small proportion of the future warming, a scenario we cannot totally rule out. While the Sun is not expected to provide substantial delays in the time to reach critical temperature thresholds, any small delays it might provide are likely to be greater for lower anthropogenic emissions scenarios than for higher-emissions scenarios.
Abstract:
Accurate observations of cloud microphysical properties are needed for evaluating and improving the representation of cloud processes in climate models and for better estimates of the Earth's radiative budget. However, large differences are found among current cloud products retrieved from ground-based remote sensing measurements using various retrieval algorithms. Understanding these differences is an important step towards addressing uncertainties in the cloud retrievals. In this study, an in-depth analysis of nine existing ground-based cloud retrievals using ARM remote sensing measurements is carried out. We place emphasis on boundary-layer overcast clouds and high-level ice clouds, which are the focus of many current retrieval development efforts owing to their radiative importance and relatively simple structure. Large systematic discrepancies in cloud microphysical properties are found for these two types of clouds among the nine cloud retrieval products, particularly for the cloud liquid and ice particle effective radius. Notably, the differences among some retrieval products are even larger than the uncertainties prescribed by the retrieval algorithm developers. It is shown that most of these large differences have their roots in the retrievals' theoretical bases and assumptions, as well as in their input and constraint parameters. This study suggests the need to further validate current retrieval theories and assumptions, and even to develop new retrieval algorithms, using more observations under different cloud regimes.
Abstract:
Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation and explores some of its core functionality. EO-LDAS is a weak-constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (top-of-canopy reflectance here). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero- and first-order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously. This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator, with uncertainties typical of those achieved by optical sensors assumed for the data. The experiments consider a baseline state vector estimation case without the dynamic constraints, and assess the impact of applying the dynamic constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two might be obtained by applying the sorts of dynamic constraints used here. The hyperparameter (dynamic model uncertainty) required to control the assimilation is estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations, even with quite large data gaps.
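As context for the weak-constraint formulation referred to above, the generic cost function of such a scheme (a schematic form, not necessarily the exact EO-LDAS expression) combines prior, observation and dynamic-model terms, each weighted by its uncertainty:

\[
J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
+ \tfrac{1}{2}\sum_{t}\big(\mathbf{y}_t - H(\mathbf{x}_t)\big)^{\mathrm{T}}\mathbf{R}^{-1}\big(\mathbf{y}_t - H(\mathbf{x}_t)\big)
+ \tfrac{1}{2}\sum_{t}\big(\mathbf{x}_{t+1} - M(\mathbf{x}_t)\big)^{\mathrm{T}}\mathbf{Q}^{-1}\big(\mathbf{x}_{t+1} - M(\mathbf{x}_t)\big),
\]

where H is the (radiative transfer) observation operator, M the zero- or first-order dynamic model, and B, R, Q the prior, observation and model-error covariances. The constraint is "weak" in the sense that the model term carries a finite uncertainty Q (the hyperparameter mentioned above) rather than being imposed exactly.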
Abstract:
Grassland restoration is the dominant activity funded by agri-environment schemes (AES). However, the re-instatement of biodiversity and ecosystem services is limited by a number of severe abiotic and biotic constraints resulting from previous agricultural management. These appear to be less severe on ex-arable sites compared with permanent grassland. We report findings of a large research programme into practical solutions to these constraints. The key abiotic constraint was high residual soil fertility, particularly phosphorus. This can most easily be addressed by targeting of sites of low nutrient status. The chief biotic constraints were lack of propagules of desirable species and suitable sites for their establishment. Addition of seed mixtures or green hay to gaps created by either mechanical disturbance or herbicide was the most effective means of overcoming these factors. Finally, manipulation of biotic interactions, including hemiparasitic plants to reduce competition from grasses and control of mollusc herbivory of sown species, significantly improved the effectiveness of these techniques.
Abstract:
The rapid-distortion model of Hunt & Graham (1978) for the initial distortion of turbulence by a flat boundary is extended to account fully for viscous processes. Two types of boundary are considered: a solid wall and a free surface. The model is shown to be formally valid provided two conditions are satisfied. The first condition is that time is short compared with the decorrelation time of the energy-containing eddies, so that nonlinear processes can be neglected. The second condition is that the viscous layer near the boundary, where tangential motions adjust to the boundary condition, is thin compared with the scales of the smallest eddies. The viscous layer can then be treated using thin-boundary-layer methods. Given these conditions, the distorted turbulence near the boundary is related to the undistorted turbulence, and thence profiles of the turbulence dissipation rate near the two types of boundary are calculated and shown to agree extremely well with profiles obtained by Perot & Moin (1993) by direct numerical simulation. The dissipation rates are higher near a solid wall than in the bulk of the flow because the no-slip boundary condition leads to large velocity gradients across the viscous layer. In contrast, the weaker constraint of no stress at a free surface leads to the dissipation rate close to a free surface actually being smaller than in the bulk of the flow. This explains why tangential velocity fluctuations parallel to a free surface are so large. In addition, we show that it is the adjustment of the large energy-containing eddies across the viscous layer that controls the dissipation rate, which explains why rapid-distortion theory can give quantitatively accurate values for the dissipation rate. We also find that the dissipation rate obtained from the model, evaluated at the time when the model is expected to fail, actually yields useful estimates of the dissipation obtained from the direct numerical simulation at times when the nonlinear processes are significant. We conclude that the main role of nonlinear processes is to arrest the growth of the viscous layer by linear processes after about one large-eddy turnover time.
Abstract:
Global efforts to mitigate climate change are guided by projections of future temperatures [1]. But the eventual equilibrium global mean temperature associated with a given stabilization level of atmospheric greenhouse gas concentrations remains uncertain [1–3], complicating the setting of stabilization targets to avoid potentially dangerous levels of global warming [4–8]. Similar problems apply to the carbon cycle: observations currently provide only a weak constraint on the response to future emissions [9–11]. Here we use ensemble simulations of simple climate-carbon-cycle models, constrained by observations and projections from more comprehensive models, to simulate the temperature response to a broad range of carbon dioxide emission pathways. We find that the peak warming caused by a given cumulative carbon dioxide emission is better constrained than the warming response to a stabilization scenario. Furthermore, the relationship between cumulative emissions and peak warming is remarkably insensitive to the emission pathway (timing of emissions or peak emission rate). Hence policy targets based on limiting cumulative emissions of carbon dioxide are likely to be more robust to scientific uncertainty than emission-rate or concentration targets. Total anthropogenic emissions of one trillion tonnes of carbon (3.67 trillion tonnes of CO2), about half of which has already been emitted since industrialization began, result in a most likely peak carbon-dioxide-induced warming of 2 °C above pre-industrial temperatures, with a 5–95% confidence interval of 1.3–3.9 °C.
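The conversion between tonnes of carbon and tonnes of CO2 quoted above follows from the ratio of molar masses (a standard calculation, restated here for clarity):

\[
m_{\mathrm{CO_2}} = m_{\mathrm{C}}\,\frac{M_{\mathrm{CO_2}}}{M_{\mathrm{C}}}
\approx 1\ \mathrm{Tt\,C} \times \frac{44}{12} \approx 3.67\ \mathrm{Tt\,CO_2}.
\]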
Abstract:
We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong-constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that, in the case where a simplified form of the background error covariance matrix is used, it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
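To make the Tikhonov connection concrete, the standard strong-constraint 4DVar cost function penalizes the departure from the background in a weighted L2 norm; the variants studied replace or augment that quadratic penalty. A schematic statement (generic notation, not necessarily the paper's exact formulation):

\[
J_{\mathrm{L2}}(x_0) = \tfrac{1}{2}\,\|x_0 - x_b\|_{B^{-1}}^{2}
 + \tfrac{1}{2}\sum_{i}\big\|y_i - H_i\big(M_{0\to i}(x_0)\big)\big\|_{R_i^{-1}}^{2},
\]

which has exactly the Tikhonov form of a data-misfit term plus a quadratic regularization term. The L1 and mixed TV L1–L2 variants replace the first term by, schematically, \(\lambda\|B^{-1/2}(x_0-x_b)\|_1\), or combine a quadratic term with a total-variation term \(\mu\|D(x_0-x_b)\|_1\) (D a discrete gradient operator), so that sharp fronts in the increment are not smoothed away.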
Abstract:
If an export subsidy is efficient, that is, has a surplus-transfer role, then there exists an implicit function relating the optimal level of the subsidy to the income target in the agricultural sector. If an export subsidy is inefficient, no such function exists. We show that this dependence exists in large-export equilibrium but not in small-export equilibrium, and that these results remain robust to concerns about domestic tax distortions. The failure of previous work to produce this result stems from its neglect of the income constraint on producer surplus in the programming problem transferring surplus from consumers and taxpayers to farmers.
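A schematic way to read the programming problem referred to in the last sentence (an assumed generic formulation, not taken from the paper): choose the subsidy level s to maximize the welfare of consumers and taxpayers subject to the income constraint on producer surplus,

\[
\max_{s}\; \mathrm{CS}(s) + \mathrm{TS}(s) \quad \text{subject to} \quad \mathrm{PS}(s) \ge \bar{\pi},
\]

where \(\bar{\pi}\) is the income target. When the constraint binds at the optimum, the first-order conditions define the optimal subsidy implicitly as a function of \(\bar{\pi}\); when it does not bind (the inefficient case), no such function exists.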
Abstract:
English teachers in England have experienced a lengthy period of external constraint, increasingly controlling their practice. This constraint originated in the 1989 National Curriculum. Although in its first version it was in harmony with practice, its numerous revisions have moved it a long way from teachers’ own values and beliefs. This move is illustrated through research into the teaching of literature, which is seen by English teachers as often arid and driven by examinations alone. This period has been increasingly dominated by high-stakes testing, school league tables and frequent school inspections. Another powerful element has been the introduction of Standards for teachers at every career level, from student teachers to Advanced Skills Teachers. Research demonstrates that this introduction of Standards has had some beneficial effects. However, research also shows that the government’s decision to replace all these hierarchically structured standards with a single standard is seen by many teachers as a retrograde step. Evidence from Advanced Skills Teachers of English shows that the government’s additional proposal to bring in a Master Teacher standard is equally problematic. The decline of the National Association for the Teaching of English (NATE), the key subject association for English teachers, is discussed in relation to this increasingly negative and constraining environment, concluding that many English teachers are choosing a form of local resistance which, while understandable, weakens the credibility of the profession and erodes the influence of its key voice, NATE.
Abstract:
From 2001, the construction of flats and high-density developments increased in England and the building of houses declined. Does this indicate a change in taste, or is it a result of government planning policies? In this paper, an analysis is made of the long-term effects of the policy of constraint which has existed for the past 50 years, but the increase in density is identified as occurring primarily after new, revised planning guidance discouraging low-density development was issued in England in 2000. To substantiate this, it is pointed out that the change which occurred in England did not occur in Scotland, where guidance was not changed to encourage high-density residential development. The conclusion that the change is the result of planning policies and not of a change in taste is confirmed by surveys of the occupants of new high-rise developments in Leeds. The new flat-dwellers were predominantly young and childless and expressed the intention of moving out of the city centre and into houses in the near future, when they could. Given recent changes in guidance by the new coalition government, it is expected that the construction of flats in England will fall back to earlier levels over the next few years.
Abstract:
We present a new technique for correcting errors in radar estimates of rainfall due to attenuation, which is based on the fact that any attenuating target will itself emit, and that this emission can be detected as an increased noise level in the radar receiver. The technique is being installed on the UK operational network and, for the first time, allows radome attenuation to be monitored using the increased noise at the higher beam elevations. This attenuation has a large azimuthal dependence but, for an old radome, can be up to 4 dB for rainfall rates of just 2–4 mm/h. This effect has been neglected in the past but may be responsible for significant errors in rainfall estimates and in radar calibrations using gauges. The extra noise at low radar elevations provides an estimate of the total path-integrated attenuation of nearby storms; this total attenuation can then be used as a constraint for gate-by-gate or polarimetric correction algorithms.
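As an illustration of how a noise-derived total path-integrated attenuation (PIA) can constrain a gate-by-gate correction, the sketch below rescales an assumed k-Z power law so that the accumulated two-way attenuation along the ray matches the independent PIA estimate. This is a minimal first-order sketch with illustrative coefficients, not the operational algorithm described in the paper.

    import numpy as np

    def constrained_gate_by_gate(z_meas_dbz, pia_total_db, dr_km, a=3e-5, b=0.72):
        """Correct one ray of reflectivities for attenuation, constrained by a total PIA.

        z_meas_dbz   : measured (attenuated) reflectivity per gate, dBZ
        pia_total_db : two-way path-integrated attenuation estimated from receiver noise, dB
        dr_km        : gate spacing in km
        a, b         : illustrative k = a * Z**b coefficients (k in dB/km, Z in linear units)
        """
        z_lin = 10.0 ** (z_meas_dbz / 10.0)

        def total_pia(scale):
            # two-way attenuation accumulated over all gates for a scaled power law
            return 2.0 * dr_km * np.sum(scale * a * z_lin ** b)

        # Bisection on the scale factor so the accumulated PIA matches the constraint
        # (total_pia is monotonic in scale; the bracket assumes the constraint is reachable).
        lo, hi = 0.0, 100.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if total_pia(mid) < pia_total_db:
                lo = mid
            else:
                hi = mid
        scale = 0.5 * (lo + hi)

        k = scale * a * z_lin ** b                   # specific attenuation per gate, dB/km
        correction_db = 2.0 * dr_km * np.cumsum(k)   # cumulative two-way correction
        return z_meas_dbz + correction_db

A more rigorous treatment would iterate on the corrected reflectivities (or use an analytic Hitschfeld-Bordan-type solution), but the role of the PIA constraint, fixing the otherwise poorly known scaling of the correction, is the same.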
Abstract:
We explored the impact of a degraded semantic system on lexical, morphological and syntactic complexity in language production. We analysed transcripts from connected speech samples from eight patients with semantic dementia (SD) and eight age-matched healthy speakers. The frequency distributions of nouns and verbs were compared for hand-scored data and data extracted using text-analysis software. Lexical measures showed the predicted pattern for nouns and verbs in hand-scored data, and for nouns in software-extracted data, with fewer low frequency items in the speech of the patients relative to controls. The distribution of complex morpho-syntactic forms for the SD group showed a reduced range, with fewer constructions that required multiple auxiliaries and inflections. Finally, the distribution of syntactic constructions also differed between groups, with a pattern that reflects the patients’ characteristic anomia and constraints on morpho-syntactic complexity. The data are in line with previous findings of an absence of gross syntactic errors or violations in SD speech. Alterations in the distributions of morphology and syntax, however, support constraint satisfaction models of speech production in which there is no hard boundary between lexical retrieval and grammatical encoding.