963 results for Box-constrained optimization
Abstract:
Tax policies that constrain net transfers between the farm sector and the fisc are modeled under price uncertainty. Increasing the level of tax on profits causes the firm to expand output. Implications are derived for supply control and the distributions of profits and net receipts at the fisc.
Abstract:
Duchenne muscular dystrophy is a fatal muscle-wasting disorder. Lack of dystrophin compromises the integrity of the sarcolemma and results in myofibers that are highly prone to contraction-induced injury. Recombinant adeno-associated virus (rAAV)-mediated dystrophin gene transfer strategies to muscle for the treatment of Duchenne muscular dystrophy (DMD) have been limited by the small cloning capacity of rAAV vectors and the high titers necessary to achieve efficient systemic gene transfer. In this study, we assess the impact of codon optimization on microdystrophin (ΔAB/R3-R18/ΔCT) expression and function in the mdx mouse and compare the function of two different configurations of codon-optimized microdystrophin genes (ΔAB/R3-R18/ΔCT and ΔR4-R23/ΔCT) under the control of a muscle-restrictive promoter (Spc5-12). Codon optimization of microdystrophin significantly increases levels of microdystrophin mRNA and protein after intramuscular and systemic administration of plasmid DNA or rAAV2/8. Physiological assessment demonstrates that codon optimization of ΔAB/R3-R18/ΔCT results in significant improvement in specific force, but does not improve resistance to eccentric contractions compared with non-codon-optimized ΔAB/R3-R18/ΔCT. However, codon-optimized microdystrophin ΔR4-R23/ΔCT completely restored specific force generation and provided substantial protection from contraction-induced injury. These results demonstrate that codon optimization of microdystrophin under the control of a muscle-specific promoter can significantly improve expression levels such that reduced titers of rAAV vectors will be required for efficient systemic administration.
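As a minimal illustration of the codon-optimization step described above (not the authors' actual pipeline), the sketch below rewrites each codon of a coding sequence as the most frequently used synonymous human codon. The codon table and preference map are a small illustrative subset, not a complete genetic code; the key property is that the encoded protein is unchanged while codon usage shifts toward host preferences.

```python
# Illustrative subset of the genetic code (codon -> one-letter amino acid)
CODON_TO_AA = {
    "CTT": "L", "CTG": "L", "TTA": "L",
    "GCA": "A", "GCC": "A",
    "GGT": "G", "GGC": "G",
    "GAA": "E", "GAG": "E",
}

# Most frequent human codon per amino acid (illustrative subset)
PREFERRED = {"L": "CTG", "A": "GCC", "G": "GGC", "E": "GAG"}


def codon_optimize(seq):
    """Replace each codon with the preferred synonymous human codon.

    The translated protein is identical before and after; only the
    DNA-level codon usage changes.
    """
    assert len(seq) % 3 == 0, "coding sequence must be a whole number of codons"
    out = []
    for i in range(0, len(seq), 3):
        aa = CODON_TO_AA[seq[i:i + 3]]
        out.append(PREFERRED[aa])
    return "".join(out)


def translate(seq):
    """Translate a coding sequence using the subset table above."""
    return "".join(CODON_TO_AA[seq[i:i + 3]] for i in range(0, len(seq), 3))
```

A real optimization would use a full human codon-usage table and typically also avoid unwanted motifs (splice sites, repeats), which this sketch ignores.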
Abstract:
As climate changes, temperatures will play an increasing role in determining crop yield. Both climate model error and a lack of constrained physiological thresholds limit the predictability of yield. We used a perturbed-parameter climate model ensemble with two methods of bias-correction as input to a regional-scale wheat simulation model over India to examine future yields. This model configuration accounted for uncertainty in climate, planting date, optimization, and temperature-induced changes in development rate and reproduction. It also accounted for lethal temperatures, which have been somewhat neglected to date. Using uncertainty decomposition, we found that fractional uncertainty due to temperature-driven processes in the crop model was on average larger than climate model uncertainty (0.56 versus 0.44), and that the crop model uncertainty was dominated by crop development. Simulations with the raw and the bias-corrected climate data did not agree on the impact on future wheat yield, nor on its geographical distribution. However, the method of bias-correction was not an important source of uncertainty. We conclude that bias-correction of climate model data and improved constraints, especially on crop development, are critical for robust impact predictions.
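The fractional-uncertainty figures quoted above (0.56 versus 0.44) come from an uncertainty decomposition over the ensemble. The paper does not give its exact formulation here; the sketch below shows one common simplification, a two-way main-effects (ANOVA-style) variance decomposition of simulated yields over crop-model variants and climate-model members, with the interaction term ignored. The function name and array layout are assumptions for illustration.

```python
import numpy as np


def fractional_uncertainty(yields):
    """Split yield variance between crop-model and climate-model factors.

    yields[i, j] is the simulated yield for crop-model variant i driven
    by climate-model member j.  Returns the fraction of (main-effect)
    variance attributable to each factor; interactions are ignored.
    """
    grand = yields.mean()
    # Variance of the row means: spread caused by the crop-model factor
    crop_var = ((yields.mean(axis=1) - grand) ** 2).mean()
    # Variance of the column means: spread caused by the climate factor
    clim_var = ((yields.mean(axis=0) - grand) ** 2).mean()
    total = crop_var + clim_var
    return crop_var / total, clim_var / total
```

With purely additive factor effects, the returned fractions recover each factor's share of the total main-effect variance exactly.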
Abstract:
For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical for both the choice of parametrizations and the way in which the scheme should be evaluated. A systematic and objective model response analysis procedure (Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated regarding its ability to simulate observed surface energy fluxes and the sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties in contrast to a low response to road-related ones. Problems in partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided. Copyright © 2010 Royal Meteorological Society and Crown Copyright.
Abstract:
We propose a new sparse model construction method aimed at maximizing a model's generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form of the optimal LOOMSE regularization parameter for a single-term model, for which we show that the LOOMSE can be computed analytically without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update model parameters one at a time for linear-in-the-parameters models. Consequently, a fully automated procedure is achieved without resorting to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
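The coordinate-descent backbone of such a method can be sketched as below. This is the standard cyclic coordinate descent for an l1-penalized least-squares cost, not the authors' modified version: the per-term closed-form LOOMSE choice of the regularization parameter is their contribution and is not reproduced here, so the sketch assumes a fixed penalty `lam`. All names are illustrative.

```python
import numpy as np


def soft_threshold(z, gamma):
    # Soft-thresholding operator arising from the l1 penalty
    return np.sign(z) * max(abs(z) - gamma, 0.0)


def coordinate_descent_lasso(X, y, lam, n_sweeps=100):
    """Cyclic coordinate descent for
        min_w  (1/2n) ||y - X w||^2 + lam * ||w||_1
    updating one parameter at a time, as in linear-in-the-parameters models.
    """
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature x_j'x_j / n
    for _ in range(n_sweeps):
        for j in range(d):
            # Partial residual with term j removed from the current fit
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w
```

In the paper's scheme, `lam` would instead be re-chosen analytically for each single-term update by minimizing the closed-form LOOMSE, removing the need for a separate validation set.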
Abstract:
Since the Dearing Report,1 there has been an increased emphasis on the development of employability and transferable (‘soft’) skills in undergraduate programmes. Within STEM subject areas, recent reports concluded that universities should offer ‘greater and more sustainable variety in modes of study to meet the changing demands of industry and students’.2 At the same time, higher education (HE) institutions are increasingly conscious of the sensitivity of league table positions to employment statistics and graduate destinations. Modules that are either credit or non-credit bearing are finding their way into the core curriculum at HE. While the UK government and other educational bodies debate the way forward on A-level reform, universities must also meet the needs of their first-year cohorts in terms of the secondary-to-tertiary transition and developing independence in learning.
Abstract:
During the last termination (from ~18 000 years ago to ~9000 years ago), the climate significantly warmed and the ice sheets melted. Simultaneously, atmospheric CO2 increased from ~190 ppm to ~260 ppm. Although this CO2 rise plays an important role in the deglacial warming, the reasons for its evolution are difficult to explain. Only box models have been used to run transient simulations of this carbon cycle transition, and only by forcing the model with data-constrained scenarios of the evolution of temperature, sea level, sea ice, NADW formation, Southern Ocean vertical mixing and the biological carbon pump. More complex models (including GCMs) have investigated some of these mechanisms, but they have only been used to try to explain LGM versus present-day steady-state climates. In this study we use a coupled climate-carbon model of intermediate complexity to explore the role of three oceanic processes in transient simulations: the sinking of brines, stratification-dependent diffusion and iron fertilization. Carbonate compensation is accounted for in these simulations. We show that neither iron fertilization nor the sinking of brines alone can account for the evolution of CO2, and that only the combination of the sinking of brines and interactive diffusion can simultaneously simulate the increase in deep Southern Ocean δ13C. The scenario that agrees best with the data takes into account all mechanisms and favours a rapid cessation of the sinking of brines around 18 000 years ago, when the Antarctic ice sheet extent was at its maximum. In this scenario, we make the hypothesis that sea ice formation was then shifted to the open ocean, where the salty water is quickly mixed with fresher water; this prevents deep sinking of salty water and therefore breaks down the deep stratification and releases carbon from the abyss.
Based on this scenario, it is possible to simulate both the amplitude and timing of the long-term CO2 increase during the last termination in agreement with ice core data. The atmospheric δ13C appears to be highly sensitive to changes in the terrestrial biosphere, underlining the need to better constrain the vegetation evolution during the termination.
Abstract:
A stand-alone sea ice model is tuned and validated using satellite-derived, basinwide observations of sea ice thickness, extent, and velocity from the years 1993 to 2001. This is the first time that basin-scale measurements of sea ice thickness have been used for this purpose. The model is based on the CICE sea ice model code developed at the Los Alamos National Laboratory, with some minor modifications, and forcing consists of 40-yr ECMWF Re-Analysis (ERA-40) and Polar Exchange at the Sea Surface (POLES) data. Three parameters are varied in the tuning process: Ca, the air–ice drag coefficient; P*, the ice strength parameter; and α, the broadband albedo of cold bare ice, with the aim of determining the subset of this three-dimensional parameter space that gives the best simultaneous agreement with observations for this forcing set. It is found that observations of sea ice extent and velocity alone are not sufficient to unambiguously tune the model, and that sea ice thickness measurements are necessary to locate a unique subset of parameter space in which simultaneous agreement is achieved with all three observational datasets.
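The three-parameter tuning described above can be pictured as a search over a (Ca, P*, α) grid for the point of best model-observation agreement. The sketch below uses an exhaustive grid search with a cheap analytic stand-in for the misfit; in the real study each cost evaluation is a multi-year CICE run compared against thickness, extent and velocity observations, and the optimal values and weighting used here are purely illustrative.

```python
import itertools
import numpy as np


def misfit(ca, p_star, albedo):
    """Toy stand-in for the model-observation misfit.

    Separable quadratic with an (illustrative, not paper-derived) optimum
    at Ca = 1.3e-3, P* = 2.75e4 N m^-2, albedo = 0.65.
    """
    return ((ca - 1.3e-3) ** 2
            + (p_star - 2.75e4) ** 2 / 1e8
            + (albedo - 0.65) ** 2)


# Candidate values spanning plausible ranges for each tuning parameter
ca_vals = np.linspace(1.0e-3, 2.0e-3, 11)    # air-ice drag coefficient
p_vals = np.linspace(1.5e4, 4.0e4, 11)       # ice strength parameter
alb_vals = np.linspace(0.55, 0.75, 11)       # cold bare-ice albedo

# Exhaustive search of the 3-D parameter grid for minimum misfit
best = min(itertools.product(ca_vals, p_vals, alb_vals),
           key=lambda t: misfit(*t))
```

A grid search evaluates every combination (here 11³ = 1331 model runs), which is why the paper's point about thickness data is important: without it, many grid points fit extent and velocity equally well and the minimum is not unique.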
Abstract:
Over the last decade, issues related to the financial viability of development have become increasingly important to the English planning system. As part of a wider shift towards the compartmentalisation of planning tasks, expert consultants are required to quantify, in an attempt to rationalise, planning decisions in terms of economic ‘viability’. Often with a particular focus on planning obligations, the results of development viability modelling have emerged as a key part of the evidence base used in site-specific negotiations and in planning policy formation. Focussing on the role of clients and other stakeholders, this paper investigates how development viability is tested in practice. It draws together literature on the role of calculative practices in policy formation, client feedback and influence in real estate appraisals, and stakeholder engagement and consultation in the planning literature to critically evaluate the role of clients and other interest groups in influencing the production and use of development viability appraisal models. The paper draws upon semi-structured interviews with the main producers of development viability appraisals to conclude that, whilst appraisals have the potential to be biased by client and stakeholder interests, there are important controlling influences on potential opportunistic behaviour. One such control is local authorities' weak understanding of development viability appraisal techniques, which limits their capacity to question the outputs of appraisal models. However, this is also of concern given that viability is now a central feature of the town planning system.
Abstract:
On-going human population growth and changing patterns of resource consumption are increasing global demand for ecosystem services, many of which are provided by soils. Some of these ecosystem services are linearly related to the surface area of pervious soil, whereas others show non-linear relationships, making ecosystem service optimization a complex task. As limited land availability creates conflicting demands among various types of land use, a central challenge is how to weigh these conflicting interests and how to achieve the best solutions possible from a perspective of sustainable societal development. These conflicting interests become most apparent in soils that are the most heavily used by humans for specific purposes: urban soils used for green spaces, housing, and other infrastructure, and agricultural soils used for producing food, fibres and biofuels. We argue that, despite their seemingly divergent uses of land, agricultural and urban soils share common features with regard to interactions between ecosystem services, and that the trade-offs associated with decision-making, while scale- and context-dependent, can be surprisingly similar between the two systems. We propose that the trade-offs within land use types and their soil-related ecosystem services are often disproportional, and that quantifying these will enable ecologists and soil scientists to help policy makers optimize management decisions when confronted with demands for multiple services under limited land availability.
Abstract:
Incomplete understanding of three aspects of the climate system—equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing—and of the physical processes underlying them leads to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century1,2. Explorations of these uncertainties have so far relied on scaling approaches3,4, large ensembles of simplified climate models1,2, or small ensembles of complex coupled atmosphere–ocean general circulation models5,6 which under-represent uncertainties in key climate system properties derived from independent sources7–9. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report10, but extends towards larger warming than observed in ensembles-of-opportunity5 typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range ‘no mitigation’ scenario for greenhouse-gas emissions.