879 results for Egocentric Constraint


Relevance:

10.00%

Publisher:

Abstract:

We consider the linear equality-constrained least squares problem (LSE) of minimizing $\|c - Gx\|_2$, subject to the constraint $Ex = p$. A preconditioned conjugate gradient method is applied to the Kuhn–Tucker equations associated with the LSE problem. We show that our method is well suited for structural optimization problems in reliability analysis and optimal design. Numerical tests are performed on an Alliant FX/8 multiprocessor and a Cray X-MP using some practical structural analysis data.
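As a sketch of the underlying linear algebra (the standard Kuhn–Tucker form, not the paper's particular preconditioner), the Kuhn–Tucker equations for minimizing $\|c - Gx\|_2$ subject to $Ex = p$ couple the solution $x$ with Lagrange multipliers $\lambda$ in a single symmetric indefinite system:

$$\begin{pmatrix} G^{T}G & E^{T} \\ E & 0 \end{pmatrix}\begin{pmatrix} x \\ \lambda \end{pmatrix} = \begin{pmatrix} G^{T}c \\ p \end{pmatrix},$$

to which a preconditioned conjugate-gradient-type iteration can then be applied.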

Relevance:

10.00%

Publisher:

Abstract:

Models which define fitness in terms of per capita rate of increase of phenotypes are used to analyse patterns of individual growth. It is shown that sigmoid growth curves are an optimal strategy (i.e. maximize fitness) if (Assumption 1a) mortality decreases with body size; (2a) mortality is a convex function of specific growth rate, viewed from above; (3) there is a constraint on growth rate, which is attained in the first phase of growth. If the constraint is not attained then size should increase at a progressively reducing rate. These predictions are biologically plausible. Catch-up growth, for retarded individuals, is generally not an optimal strategy though in special cases (e.g. seasonal breeding) it might be. Growth may be advantageous after first breeding if birth rate is a convex function of G (the fraction of production devoted to growth) viewed from above (Assumption 5a), or if mortality rate is a convex function of G, viewed from above (Assumption 6c). If assumptions 5a and 6c are both false, growth should cease at the age of first reproduction. These predictions could be used to evaluate the incidence of indeterminate versus determinate growth in the animal kingdom though the data currently available do not allow quantitative tests. In animals with invariant adult size a method is given which allows one to calculate whether an increase in body size is favoured given that fecundity and developmental time are thereby increased.

Relevance:

10.00%

Publisher:

Abstract:

For data assimilation in numerical weather prediction, the initial forecast-error covariance matrix Pf is required. For variational assimilation it is particularly important to prescribe an accurate initial matrix Pf, since Pf is either static (in the 3D-Var case) or constant at the beginning of each assimilation window (in the 4D-Var case). At large scales the atmospheric flow is well approximated by hydrostatic balance and this balance is strongly enforced in the initial matrix Pf used in operational variational assimilation systems such as that of the Met Office. However, at convective scales this balance does not necessarily hold any more. Here we examine the extent to which hydrostatic balance is valid in the vertical forecast-error covariances for high-resolution models in order to determine whether there is a need to relax this balance constraint in convective-scale data assimilation. We use the Met Office Global and Regional Ensemble Prediction System (MOGREPS) and a 1.5 km resolution version of the Unified Model for a case study characterized by the presence of convective activity. An ensemble of high-resolution forecasts valid up to three hours after the onset of convection is produced. We show that at 1.5 km resolution hydrostatic balance does not hold for forecast errors in regions of convection. This indicates that in the presence of convection hydrostatic balance should not be enforced in the covariance matrix used for variational data assimilation at this scale. The results show the need to investigate covariance models that may be better suited for convective-scale data assimilation. Finally, we give a measure of the balance present in the forecast perturbations as a function of the horizontal scale (from 3 to 90 km) using a set of diagnostics.
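For reference, the hydrostatic balance in question is the approximation that the vertical pressure gradient balances gravity,

$$\frac{\partial p}{\partial z} = -\rho g,$$

which is accurate for large-scale flow but can fail where vertical accelerations in convective updraughts become significant.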

Relevance:

10.00%

Publisher:

Abstract:

We present molecular dynamics (MD) and slip-springs model simulations of the chain segmental dynamics in entangled linear polymer melts. The time-dependent behavior of the segmental orientation autocorrelation functions and mean-square segmental displacements is analyzed for both flexible and semiflexible chains, with particular attention paid to the scaling relations among these dynamic quantities. Effective combination of the two simulation methods at different coarse-graining levels allows us to explore the chain dynamics for chain lengths ranging from Z ≈ 2 to 90 entanglements. For a given chain length of Z ≈ 15, the time scales accessed span more than 10 decades, covering all of the interesting relaxation regimes. The obtained time dependence of the monomer mean-square displacement, g1(t), is in good agreement with the tube theory predictions. Results on the first- and second-order segmental orientation autocorrelation functions, C1(t) and C2(t), demonstrate a clear power-law relationship C2(t) ∝ C1(t)^m, with m = 3, 2, and 1 in the initial, free Rouse, and entangled (constrained Rouse) regimes, respectively. The return-to-origin hypothesis, which leads to inverse proportionality between the segmental orientation autocorrelation functions and g1(t) in the entangled regime, is convincingly verified by the simulation result C1(t) ∝ g1(t)^(−1) ∝ t^(−1/4) in the constrained Rouse regime, where for well-entangled chains both C1(t) and g1(t) are rather insensitive to constraint release effects. However, the second-order correlation function, C2(t), shows much stronger sensitivity to constraint release effects and experiences a protracted crossover from the free Rouse to the entangled regime. This crossover region extends for at least one decade in time longer than that of C1(t). The predicted scaling behavior C2(t) ∝ t^(−1/4) is observed in slip-springs simulations only at a chain length of 90 entanglements, whereas shorter chains show higher scaling exponents. The reported simulation work can be applied to help understand observations from NMR experiments.
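As an illustration of the quantities compared above (a minimal sketch, not the simulation or analysis code used in the study), C1(t) and C2(t) are the averaged first and second Legendre polynomials of the cosine of the angle between a segment's orientation at two times separated by t, and g1(t) is the monomer mean-square displacement; all three can be estimated from a bead trajectory as follows:

```python
# Minimal sketch (not the study's code): segmental orientation autocorrelation
# functions C1(t), C2(t) and the monomer mean-square displacement g1(t),
# estimated from a trajectory of bead positions by averaging over time origins.
import numpy as np

def orientation_acfs(positions):
    """positions: array of shape (n_frames, n_beads, 3) -> (C1, C2) versus lag time."""
    bonds = np.diff(positions, axis=1)                          # segment (bond) vectors
    u = bonds / np.linalg.norm(bonds, axis=-1, keepdims=True)   # unit orientations
    n = u.shape[0]
    c1, c2 = np.empty(n), np.empty(n)
    for dt in range(n):
        # cos(theta) between each segment at time s and the same segment at time s + dt
        cos_t = np.einsum('sij,sij->si', u[: n - dt], u[dt:])
        c1[dt] = cos_t.mean()                                   # P1(cos theta) = cos theta
        c2[dt] = (1.5 * cos_t ** 2 - 0.5).mean()                # P2(cos theta)
    return c1, c2

def msd(positions):
    """Monomer mean-square displacement g1 as a function of lag time."""
    n = positions.shape[0]
    return np.array([
        ((positions[dt:] - positions[: n - dt]) ** 2).sum(axis=-1).mean()
        for dt in range(n)
    ])
```

Plotting C1, C2 and g1 against lag time on log–log axes then exposes the power-law exponents (for example the t^(−1/4) regime) discussed above.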

Relevance:

10.00%

Publisher:

Abstract:

The Allied bombing of France between 1940 and 1945 has received comparatively little attention from historians, although the civilian death toll, at about 60,000, was comparable to that of German raids on the UK. This article considers how Allied, and particularly British, bombing policy towards France was developed, what its objectives were and how French concerns about attacks on their territory were (or were not) addressed. It argues that while British policymakers were sensitive to the delicate political implications of attacking France, perceived military necessities tended to trump political misgivings; that Vichy, before November 1942, was a stronger constraint on Allied bombing than the Free French at any time; and that the bombing programme largely escaped political control from May 1944.

Relevance:

10.00%

Publisher:

Abstract:

Successful quantitative precipitation forecasts under convectively unstable conditions depend on the ability of the model to capture the location, timing and intensity of convection. Ensemble forecasts of two mesoscale convective outbreaks over the UK are examined with a view to understanding the nature and extent of their predictability. In addition to a control forecast, twelve ensemble members are run for each case with the same boundary conditions but with perturbations added to the boundary layer. The intention is to introduce perturbations of appropriate magnitude and scale so that the large-scale behaviour of the simulations is not changed. In one case, convection was in statistical equilibrium with the large-scale flow. This placed a constraint on the total precipitation, but the location and intensity of individual storms varied. In contrast, the other case was characterised by a large-scale capping inversion. As a result, the location of individual storms was fixed, but their intensities and the total precipitation varied strongly. The ensemble shows case-to-case variability in the nature of predictability of convection in a mesoscale model, and provides additional useful information for quantitative precipitation forecasting.

Relevance:

10.00%

Publisher:

Abstract:

During the 20th century, solar activity increased in magnitude to a so-called grand maximum. It is probable that this high level of solar activity is at or near its end. It is of great interest whether any future reduction in solar activity could have a significant impact on climate that could partially offset the projected anthropogenic warming. Observations and reconstructions of solar activity over the last 9000 years are used as a constraint on possible future variations to produce probability distributions of total solar irradiance over the next 100 years. Using this information, with a simple climate model, we present results of the potential implications for future projections of climate on decadal to multidecadal timescales. Using one of the most recent reconstructions of historic total solar irradiance, the likely reduction in the warming by 2100 is found to be between 0.06 and 0.1 K, a very small fraction of the projected anthropogenic warming. However, if past total solar irradiance variations are larger and climate models substantially underestimate the response to solar variations, then there is a potential for a reduction in solar activity to mitigate a small proportion of the future warming, a scenario we cannot totally rule out. While the Sun is not expected to provide substantial delays in the time to reach critical temperature thresholds, any small delays it might provide are likely to be greater for lower anthropogenic emissions scenarios than for higher-emissions scenarios.

Relevance:

10.00%

Publisher:

Abstract:

Accurate observations of cloud microphysical properties are needed for evaluating and improving the representation of cloud processes in climate models and for better estimates of the Earth's radiative budget. However, large differences are found in current cloud products retrieved from ground-based remote sensing measurements using various retrieval algorithms. Understanding these differences is an important step towards addressing uncertainties in the cloud retrievals. In this study, an in-depth analysis of nine existing ground-based cloud retrievals using ARM remote sensing measurements is carried out. We place emphasis on boundary-layer overcast clouds and high-level ice clouds, which are the focus of many current retrieval development efforts due to their radiative importance and relatively simple structure. Large systematic discrepancies in cloud microphysical properties are found in these two types of clouds among the nine cloud retrieval products, particularly for the cloud liquid and ice particle effective radius. Notably, the differences among some retrieval products are even larger than the uncertainties prescribed by the retrieval algorithm developers. It is shown that most of these large differences have their roots in the retrievals' theoretical bases and assumptions, as well as in their input and constraint parameters. This study suggests the need to further validate current retrieval theories and assumptions, and even to develop new retrieval algorithms, with more observations under different cloud regimes.

Relevance:

10.00%

Publisher:

Abstract:

Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation and explores some of its core functionality. EO-LDAS is a weak constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (top-of-canopy reflectance here). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero- and first-order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously. This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator, with uncertainties typical of those achieved by the optical sensors assumed for the data. The experiments consider a baseline state vector estimation case in which dynamic constraints are applied, and assess the impact of the dynamic constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two might be obtained by applying the sorts of dynamic constraints used here. The hyperparameter (dynamic model uncertainty) required to control the assimilation is estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations, even with quite large data gaps.
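Schematically (a generic weak-constraint variational cost function, not the exact EO-LDAS formulation), the state trajectory $\mathbf{x}$ is found by minimizing

$$J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\sum_{t}\big(\mathbf{y}_t - H(\mathbf{x}_t)\big)^{T}\mathbf{R}^{-1}\big(\mathbf{y}_t - H(\mathbf{x}_t)\big) + \tfrac{1}{2}\sum_{t}\big(\mathbf{x}_{t+1} - \mathbf{M}\mathbf{x}_t\big)^{T}\mathbf{Q}^{-1}\big(\mathbf{x}_{t+1} - \mathbf{M}\mathbf{x}_t\big),$$

where the three terms correspond to the prior estimate of the state vector, the Earth Observation data mapped through the radiative transfer observation operator $H$, and the (zero- or first-order) linear dynamic model $\mathbf{M}$ with model-error covariance $\mathbf{Q}$ (the hyperparameter mentioned above); the adjoint codes supply the gradient of $J$ needed for the minimization.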

Relevance:

10.00%

Publisher:

Abstract:

Grassland restoration is the dominant activity funded by agri-environment schemes (AES). However, the re-instatement of biodiversity and ecosystem services is limited by a number of severe abiotic and biotic constraints resulting from previous agricultural management. These appear to be less severe on ex-arable sites compared with permanent grassland. We report findings of a large research programme into practical solutions to these constraints. The key abiotic constraint was high residual soil fertility, particularly phosphorus. This can most easily be addressed by targeting of sites of low nutrient status. The chief biotic constraints were lack of propagules of desirable species and suitable sites for their establishment. Addition of seed mixtures or green hay to gaps created by either mechanical disturbance or herbicide was the most effective means of overcoming these factors. Finally, manipulation of biotic interactions, including hemiparasitic plants to reduce competition from grasses and control of mollusc herbivory of sown species, significantly improved the effectiveness of these techniques.

Relevance:

10.00%

Publisher:

Abstract:

The rapid-distortion model of Hunt & Graham (1978) for the initial distortion of turbulence by a flat boundary is extended to account fully for viscous processes. Two types of boundary are considered: a solid wall and a free surface. The model is shown to be formally valid provided two conditions are satisfied. The first condition is that time is short compared with the decorrelation time of the energy-containing eddies, so that nonlinear processes can be neglected. The second condition is that the viscous layer near the boundary, where tangential motions adjust to the boundary condition, is thin compared with the scales of the smallest eddies. The viscous layer can then be treated using thin-boundary-layer methods. Given these conditions, the distorted turbulence near the boundary is related to the undistorted turbulence, and thence profiles of turbulence dissipation rate near the two types of boundary are calculated and shown to agree extremely well with profiles obtained by Perot & Moin (1993) by direct numerical simulation. The dissipation rates are higher near a solid wall than in the bulk of the flow because the no-slip boundary condition leads to large velocity gradients across the viscous layer. In contrast, the weaker constraint of no stress at a free surface leads to the dissipation rate close to a free surface actually being smaller than in the bulk of the flow. This explains why tangential velocity fluctuations parallel to a free surface are so large. In addition we show that it is the adjustment of the large energy-containing eddies across the viscous layer that controls the dissipation rate, which explains why rapid-distortion theory can give quantitatively accurate values for the dissipation rate. We also find that the dissipation rate obtained from the model evaluated at the time when the model is expected to fail actually yields useful estimates of the dissipation obtained from the direct numerical simulation at times when the nonlinear processes are significant. We conclude that the main role of nonlinear processes is to arrest growth by linear processes of the viscous layer after about one large-eddy turnover time.
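In symbols (a compact restatement of the two validity conditions, where $T_L$ denotes the decorrelation time of the energy-containing eddies, $\nu$ the kinematic viscosity, $\eta$ the scale of the smallest eddies, and the viscous-layer thickness is assumed to grow diffusively):

$$t \ll T_L, \qquad \delta_{\nu}(t) \sim (\nu t)^{1/2} \ll \eta.$$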

Relevance:

10.00%

Publisher:

Abstract:

Global efforts to mitigate climate change are guided by projections of future temperatures [1]. But the eventual equilibrium global mean temperature associated with a given stabilization level of atmospheric greenhouse gas concentrations remains uncertain [1–3], complicating the setting of stabilization targets to avoid potentially dangerous levels of global warming [4–8]. Similar problems apply to the carbon cycle: observations currently provide only a weak constraint on the response to future emissions [9–11]. Here we use ensemble simulations of simple climate-carbon-cycle models constrained by observations and projections from more comprehensive models to simulate the temperature response to a broad range of carbon dioxide emission pathways. We find that the peak warming caused by a given cumulative carbon dioxide emission is better constrained than the warming response to a stabilization scenario. Furthermore, the relationship between cumulative emissions and peak warming is remarkably insensitive to the emission pathway (timing of emissions or peak emission rate). Hence policy targets based on limiting cumulative emissions of carbon dioxide are likely to be more robust to scientific uncertainty than emission-rate or concentration targets. Total anthropogenic emissions of one trillion tonnes of carbon (3.67 trillion tonnes of CO2), about half of which has already been emitted since industrialization began, result in a most likely peak carbon-dioxide-induced warming of 2 °C above pre-industrial temperatures, with a 5–95% confidence interval of 1.3–3.9 °C.
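The carbon-to-carbon-dioxide conversion quoted above is simply the ratio of molar masses (approximately 44 g/mol for CO2 to 12 g/mol for C):

$$1\ \mathrm{TtC} \times \frac{44}{12} \approx 3.67\ \mathrm{TtCO_2}.$$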

Relevance:

10.00%

Publisher:

Abstract:

We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that, in the case where a simplified form of the background error covariance matrix is used, it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
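Schematically (generic notation rather than the paper's exact formulation), with analysis variable $x$, background $x_b$, observations $y$, generalized observation operator $H$ and error covariances $B$ and $R$, the Tikhonov (L2-norm) and L1-norm penalty formulations minimize

$$J_{L_2}(x) = \|y - Hx\|^{2}_{R^{-1}} + \|x - x_b\|^{2}_{B^{-1}}, \qquad J_{L_1}(x) = \|y - Hx\|^{2}_{R^{-1}} + \mu\,\|x - x_b\|_{1},$$

while the mixed TV L1–L2 approach adds an L1 (total variation) penalty on spatial differences of the increment to the L2 background term; it is this edge-preserving term that helps recover sharp fronts.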

Relevance:

10.00%

Publisher:

Abstract:

If an export subsidy is efficient, that is, has a surplus-transfer role, then there exists an implicit function relating the optimal level of the subsidy to the income target in the agricultural sector. If an export subsidy is inefficient, no such function exists. We show that this dependence exists in the large-export equilibrium but not in the small-export equilibrium, and that these results remain robust to concerns about domestic tax distortions. The failure of previous work to produce this result stems from its neglect of the income constraint on producer surplus in the programming problem transferring surplus from consumers and taxpayers to farmers.

Relevance:

10.00%

Publisher:

Abstract:

English teachers in England have experienced a lengthy period of external constraint that has increasingly controlled their practice. This constraint originated in the 1989 National Curriculum. Although in its first version it was in harmony with practice, its numerous revisions have moved it a long way from teachers' own values and beliefs. This move is illustrated through research into the teaching of literature, which English teachers often see as arid and driven by examinations alone. This period has been increasingly dominated by high-stakes testing, school league tables and frequent school inspections. Another powerful element has been the introduction of Standards for teachers at every career level, from student teachers to Advanced Skills Teachers. Research demonstrates that this introduction of Standards has had some beneficial effects. However, research also shows that the government's decision to replace all of these hierarchically structured standards with a single standard is seen by many teachers as a retrograde step. Evidence from Advanced Skills Teachers of English shows that the government's additional proposal to bring in a Master Teacher standard is equally problematic. The decline of the National Association for the Teaching of English (NATE), the key subject association for English teachers, is discussed in relation to this increasingly negative and constraining environment, concluding that many English teachers are choosing a form of local resistance which, while understandable, weakens the credibility of the profession and erodes the influence of its key voice, NATE.