56 results for Box constrained minimization
Abstract:
Since the Dearing Report,1 there has been an increased emphasis on the development of employability and transferable (‘soft’) skills in undergraduate programmes. Within STEM subject areas, recent reports have concluded that universities should offer ‘greater and more sustainable variety in modes of study to meet the changing demands of industry and students’.2 At the same time, higher education (HE) institutions are increasingly conscious of the sensitivity of league table positions to employment statistics and graduate destinations. Modules that are either credit-bearing or non-credit-bearing are finding their way into the core curriculum in HE. While the UK government and other educational bodies debate the way forward on A-level reform, universities must also meet the needs of their first-year cohorts in terms of the secondary-to-tertiary transition and the development of independence in learning.
Abstract:
During the last termination (from ~18 000 years ago to ~9000 years ago), the climate warmed significantly and the ice sheets melted. Simultaneously, atmospheric CO2 increased from ~190 ppm to ~260 ppm. Although this CO2 rise plays an important role in the deglacial warming, the reasons for its evolution are difficult to explain. So far, transient simulations of this carbon cycle transition have been run only with box models, and only by forcing the model with data-constrained scenarios of the evolution of temperature, sea level, sea ice, NADW formation, Southern Ocean vertical mixing and the biological carbon pump. More complex models (including GCMs) have investigated some of these mechanisms, but they have only been used to explain the difference between LGM and present-day steady-state climates. In this study we use a coupled climate-carbon model of intermediate complexity to explore the role of three oceanic processes in transient simulations: the sinking of brines, stratification-dependent diffusion and iron fertilization. Carbonate compensation is accounted for in these simulations. We show that neither iron fertilization nor the sinking of brines alone can account for the evolution of CO2, and that only the combination of the sinking of brines and interactive diffusion can simultaneously simulate the increase in deep Southern Ocean δ13C. The scenario that agrees best with the data takes all mechanisms into account and favours a rapid cessation of the sinking of brines around 18 000 years ago, when the Antarctic ice sheet extent was at its maximum. In this scenario, we hypothesize that sea ice formation then shifted to the open ocean, where salty water is quickly mixed with fresher water; this prevents deep sinking of salty water, breaks down the deep stratification and releases carbon from the abyss. Based on this scenario, it is possible to simulate both the amplitude and the timing of the long-term CO2 increase during the last termination in agreement with ice core data. The atmospheric δ13C appears to be highly sensitive to changes in the terrestrial biosphere, underlining the need to better constrain the vegetation evolution during the termination.
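As a hedged illustration of the stratification mechanism summarised above (not the authors' intermediate-complexity model), the following Python sketch shows how switching off deep stratification can vent abyssal carbon to the atmosphere; all reservoir sizes, rates and the switch time are invented for illustration.

```python
# Toy two-box (atmosphere / deep ocean) carbon model: while "brine sinking"
# maintains deep stratification, vertical mixing is weak and carbon stays in
# the abyss; once it stops, stronger mixing vents carbon to the atmosphere.
# All numbers below are illustrative, not the paper's values.

def run(years=17000, dt=10.0):
    atm, deep = 400.0, 2600.0            # carbon reservoirs (GtC), invented
    co2 = []
    for step in range(int(years / dt)):
        t = step * dt
        stratified = t < 500.0           # brine sinking ceases early in the run
        k_mix = 1e-6 if stratified else 3e-5   # vertical exchange rate, yr^-1
        # Exchange relaxes the system toward deep = 4 * atm (invented ratio);
        # a positive flux moves carbon from the deep box to the atmosphere.
        flux = k_mix * (deep - 4.0 * atm)      # GtC / yr
        atm += flux * dt
        deep -= flux * dt
        co2.append(atm / 2.12)           # ~2.12 GtC per ppm of atmospheric CO2
    return co2

trace = run()
print(f"CO2 rises from ~{trace[0]:.0f} to ~{trace[-1]:.0f} ppm")
```

With these invented numbers the trace climbs from roughly 190 ppm towards ~275 ppm over the simulated deglaciation, mimicking the sign and rough magnitude, though not the detailed timing, of the ice-core CO2 rise.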
Abstract:
Over the last decade, issues related to the financial viability of development have become increasingly important to the English planning system. As part of a wider shift towards the compartmentalisation of planning tasks, expert consultants are required to quantify, in an attempt to rationalise, planning decisions in terms of economic ‘viability’. Often with a particular focus on planning obligations, the results of development viability modelling have emerged as a key part of the evidence base used in site-specific negotiations and in planning policy formation. Focussing on the role of clients and other stakeholders, this paper investigates how development viability is tested in practice. It draws together literature on the role of calculative practices in policy formation, on client feedback and influence in real estate appraisals, and on stakeholder engagement and consultation in planning to critically evaluate the role of clients and other interest groups in influencing the production and use of development viability appraisal models. The paper draws upon semi-structured interviews with the main producers of development viability appraisals to conclude that, whilst appraisals have the potential to be biased by client and stakeholder interests, there are important controlling influences on potential opportunistic behaviour. One such control is local authorities’ weak understanding of development viability appraisal techniques, which limits their capacity to question the outputs of appraisal models. However, this is also of concern given that viability is now a central feature of the town planning system.
Abstract:
Incomplete understanding of three aspects of the climate system—equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing—and of the physical processes underlying them leads to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century1,2. Explorations of these uncertainties have so far relied on scaling approaches3,4, large ensembles of simplified climate models1,2, or small ensembles of complex coupled atmosphere–ocean general circulation models5,6 which under-represent uncertainties in key climate system properties derived from independent sources7–9. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report10, but extends towards larger warming than observed in ensembles-of-opportunity5 typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range ‘no mitigation’ scenario for greenhouse-gas emissions.
Abstract:
In recent years several methodologies have been developed to combine and interpret ensembles of climate models with the aim of quantifying uncertainties in climate projections. Constrained climate model forecasts have been generated by combining various choices of metrics used to weight individual ensemble members with diverse approaches to sampling the ensemble. The forecasts obtained are often significantly different, even when based on the same model output. A climate model forecast classification system can therefore serve two roles: to provide a way for forecast producers to self-classify their forecasts, and to provide information on the methodological assumptions underlying the forecast generation and its uncertainty when forecasts are used for impact studies. In this review we propose a possible classification system based on choices of metrics and sampling strategies. We illustrate the impact of some of these choices on the uncertainty quantification of large-scale projections of temperature and precipitation changes, and briefly discuss possible connections between climate forecast uncertainty quantification and decision-making approaches in the climate change context.
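To make one point of the proposed classification concrete, here is a small hedged Python sketch of a single metric/sampling choice: weighting ensemble members by a Gaussian skill score against an observed quantity before forming a projection. The member values, the observation and the metric width are all invented for illustration.

```python
import numpy as np

# Weight each ensemble member by how well its simulated historical trend
# matches an observation, then form a weighted projection. Different metric
# widths (sigma) or weighting functions yield different "forecasts" from the
# same model output, which is the classification problem discussed above.

hist = np.array([0.5, 0.7, 0.9, 1.1, 1.3])   # simulated past warming (K), invented
proj = np.array([1.8, 2.2, 2.9, 3.4, 4.1])   # projected warming (K), invented
obs, sigma = 0.8, 0.2                        # observed warming and metric width (K)

w = np.exp(-0.5 * ((hist - obs) / sigma) ** 2)
w /= w.sum()                                 # normalised member weights

mean = float(np.sum(w * proj))
spread = float(np.sqrt(np.sum(w * (proj - mean) ** 2)))
print(f"weighted projection: {mean:.2f} K (spread {spread:.2f} K)")
```

Halving or doubling sigma noticeably shifts both the central estimate and the spread, which is why forecasts built from the same ensemble should declare their metric and sampling choices.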
Abstract:
We present results from experimental price-setting oligopolies in which green firms undertake different levels of energy-saving investment, motivated by public subsidies and demand-side advantages. We find that consumers reveal a higher willingness to pay for greener sellers’ products. This observation, in conjunction with the fact that greener sellers set higher prices, is compatible with the use and interpretation of energy-saving behaviour as a differentiation strategy. However, sellers do not exploit the resulting advantage through sufficiently high price-cost margins, because they seem trapped in “run to stay still” competition. Regarding the use of public subsidies to energy-saving sellers, we uncover an undesirable crowding-out effect on consumers’ intrinsic tendency to support green manufacturers: consumers may be less willing to support a green seller whose energy-saving strategy yields a direct financial benefit. Finally, we disentangle two alternative motivations for consumers’ attraction to pro-social firms: first, the self-interested recognition of the firm’s contribution to public and private welfare and, second, the need to compensate a firm for the cost entailed by each pro-social action. Our results show the prevalence of the former over the latter.
Abstract:
Periocular recognition has recently become an active topic in biometrics. Typically it uses 2D image data of the periocular region. This paper is the first description of combining 3D shape structure with 2D texture. A simple and effective technique using iterative closest point (ICP) was applied for 3D periocular region matching. It proved its strength for relatively unconstrained eye region capture, and does not require any training. Local binary patterns (LBP) were applied for 2D image based periocular matching. The two modalities were combined at the score-level. This approach was evaluated using the Bosphorus 3D face database, which contains large variations in facial expressions, head poses and occlusions. The rank-1 accuracy achieved from the 3D data (80%) was better than that for 2D (58%), and the best accuracy (83%) was achieved by fusing the two types of data. This suggests that significant improvements to periocular recognition systems could be achieved using the 3D structure information that is now available from small and inexpensive sensors.
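The score-level combination described above can be illustrated with a short hedged Python sketch; the min-max normalisation, the fusion weight and the example scores are assumptions, not details taken from the paper.

```python
# Score-level fusion of two matchers: ICP yields 3D registration distances
# and LBP matching yields 2D histogram distances (lower is better for both).
# Scores are min-max normalised to [0, 1] and combined with a weighted sum;
# the rank-1 identity is the gallery entry with the smallest fused score.

def minmax(scores):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(icp_scores, lbp_scores, w3d=0.6):   # w3d favours 3D, an assumption
    icp_n, lbp_n = minmax(icp_scores), minmax(lbp_scores)
    return [w3d * a + (1.0 - w3d) * b for a, b in zip(icp_n, lbp_n)]

gallery = ["subject01", "subject02", "subject03"]
fused = fuse([0.8, 2.1, 1.5], [12.0, 30.0, 9.0])   # invented probe scores
print("rank-1 match:", gallery[fused.index(min(fused))])
```

Weighting the 3D score more heavily reflects the paper's finding that the 3D modality alone is the stronger of the two, though the actual fusion rule used by the authors may differ.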
Abstract:
In this paper, we develop a novel constrained recursive least squares algorithm for adaptively combining a set of given multiple models. With data available in an online fashion, the linear combination coefficients of the submodels are adapted via the proposed algorithm. We propose to minimize the mean square error with a forgetting factor, and apply a sum-to-one constraint to the combination parameters. Moreover, an l1-norm constraint is also applied to the combination parameters, with the aim of achieving sparsity over the multiple models so that only a subset of models is selected into the final model. A weighted l2-norm is then applied as an approximation to the l1-norm term. As such, at each time step a closed-form solution for the model combination parameters is available. The contribution of this paper is to derive the proposed constrained recursive least squares algorithm, which is made computationally efficient by exploiting matrix theory. The effectiveness of the approach is demonstrated using both simulated and real time-series examples.
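A hedged numerical sketch of the scheme described above follows; the symbols (lam for the forgetting factor, gamma for the sparsity penalty, eps for the reweighting floor) and the toy data are assumptions, and the closed-form update is derived from the stated objective rather than copied from the paper.

```python
import numpy as np

# Recursive least squares over submodel outputs with (i) a forgetting factor,
# (ii) a sum-to-one constraint handled by a Lagrange multiplier, and (iii) a
# weighted l2 term, diag(1 / (|theta| + eps)), standing in for the l1 penalty.
# Minimising theta'R theta - 2 p'theta + gamma theta'D theta s.t. 1'theta = 1
# gives theta = A^{-1}(p + mu*1), with A = R + gamma*D and mu fixing the sum.

rng = np.random.default_rng(0)
T, M = 500, 4
X = rng.normal(size=(T, M))                       # submodel predictions
y = 0.7 * X[:, 0] + 0.3 * X[:, 2] + 0.01 * rng.normal(size=T)

lam, gamma, eps = 0.99, 1e-3, 1e-6                # invented hyperparameters
R, p = np.eye(M) * 1e-3, np.zeros(M)              # recursive statistics
theta, ones = np.full(M, 1.0 / M), np.ones(M)

for t in range(T):
    x = X[t]
    R = lam * R + np.outer(x, x)                  # forgetting-factor updates
    p = lam * p + y[t] * x
    D = np.diag(1.0 / (np.abs(theta) + eps))      # l1 surrogate reweighting
    A_inv = np.linalg.inv(R + gamma * D)
    mu = (1.0 - ones @ A_inv @ p) / (ones @ A_inv @ ones)
    theta = A_inv @ (p + mu * ones)               # closed-form at each step

print(np.round(theta, 3))
```

With this invented data, the recovered weights settle near (0.7, 0, 0.3, 0): the constraint keeps them summing to one while the reweighted penalty switches the two irrelevant submodels off.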
Abstract:
Field observations of new particle formation and the subsequent particle growth are typically only possible at a fixed measurement location, and hence do not follow the temporal evolution of an air parcel in a Lagrangian sense. Standard analysis for determining formation and growth rates requires that the time-dependent formation rate and growth rate of the particles be spatially invariant; air parcel advection means that the observed temporal evolution of the particle size distribution at a fixed measurement location may not represent the true evolution if there are spatial variations in the formation and growth rates. Here we present a zero-dimensional aerosol box model coupled with one-dimensional atmospheric flow to describe the impact of advection on the evolution of simulated new particle formation events. Wind speed, particle formation rates and growth rates are input parameters that can vary as a function of time and location, with wind speed connecting location to time. The output simulates measurements at a fixed location; formation and growth rates of the particle mode can then be calculated from the simulated observations at a stationary point for different scenarios and compared with the ‘true’ input parameters. Hence, we can investigate how spatial variations in the formation and growth rates of new particles would appear in observations of particle number size distributions at a fixed measurement site. We show that the particle size distribution and growth rate at a fixed location depend on the formation and growth parameters upwind, even if local conditions do not vary. We also show that different input parameters may result in very similar simulated measurements. Erroneous interpretation of observations in terms of particle formation and growth rates, and of the time span and areal extent of new particle formation, is possible if these spatial effects are not accounted for.
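The following hedged Python sketch reproduces the qualitative setup described above, with all parameters invented: formation is confined to a region upwind of a fixed site, so the 'observed' mode diameter at the site eventually stops evolving even though each parcel's particles keep growing in the Lagrangian frame.

```python
import numpy as np

# Air parcels advect past a fixed site at x = 0 with speed u. Formation only
# occurs in an upwind band, at rate J(x); particles grow at rate G after
# forming. At measurement time t, the parcel now at the site was at
# x = -u * age 'age' hours earlier, so its particle population encodes the
# upwind spatial pattern of J, not just local conditions at the site.

u, G, d0 = 18.0, 3.0, 1.5                  # km/h, nm/h, nm (all invented)

def J(x_km):                               # formation rate, cm^-3 h^-1
    return 10.0 if -180.0 <= x_km <= -90.0 else 0.0

dt = 0.1                                   # back-trajectory step, h
for t in np.arange(0.0, 25.0, 4.0):        # "measurement" times at the site
    ages = np.arange(dt, t, dt)            # hours since each formation step
    n = np.array([J(-u * a) * dt for a in ages])   # particles formed per step
    if n.sum() > 0.0:
        d_mode = np.average(d0 + G * ages, weights=n)
        print(f"t = {t:4.1f} h: N = {n.sum():5.1f} cm^-3, mode ~ {d_mode:.1f} nm")
    else:
        print(f"t = {t:4.1f} h: no newly formed particles at the site")
```

Once the formation band lies fully within every back-trajectory (here from about t = 12 h onward), the printed mode freezes near 24 nm even though particles in each parcel continue to grow; a naive fixed-site analysis would infer zero growth, which is the kind of misinterpretation the paper warns about.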