45 results for time dependant cost function
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
In addition to CO2, the climate impact of aviation is strongly influenced by non-CO2 emissions, such as nitrogen oxides, influencing ozone and methane, and water vapour, which can lead to the formation of persistent contrails in ice-supersaturated regions. Because these non-CO2 emission effects are characterised by a short lifetime, their climate impact largely depends on emission location and time; that is to say, emissions in certain locations (or times) can lead to a greater climate impact (even on the global average) than the same emission in other locations (or times). Avoiding these climate-sensitive regions might thus be beneficial to climate. Here, we describe a modelling chain for investigating this climate impact mitigation option. This modelling chain forms a multi-step modelling approach, starting with the simulation of the fate of emissions released at a certain location and time (time-region grid points). This is performed with the chemistry–climate model EMAC, extended via the two submodels AIRTRAC (V1.0) and CONTRAIL (V1.0), which describe the contribution of emissions to the composition of the atmosphere and to contrail formation, respectively. The impact of emissions from the large number of time-region grid points is efficiently calculated by applying a Lagrangian scheme. EMAC also includes the calculation of radiative impacts, which are, in a second step, the input to climate metric formulas describing the global climate impact of the emission at each time-region grid point. The result of the modelling chain comprises a four-dimensional data set in space and time, which we call climate cost functions and which describes the global climate impact of an emission at each grid point and each point in time. In a third step, these climate cost functions are used in an air traffic simulator (SAAM) coupled to an emission tool (AEM) to optimise aircraft trajectories for the North Atlantic region. Here, we describe the details of this new modelling approach and show some example results. A number of sensitivity analyses are performed to motivate the settings of individual parameters. A stepwise sanity check of the results of the modelling chain is undertaken to demonstrate the plausibility of the climate cost functions.
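As a rough illustration of how such a four-dimensional climate cost function might be consumed by a trajectory optimiser, the sketch below interpolates a CCF field over longitude, latitude, altitude and emission time and adds the result to a fuel-based segment cost. The grid, values and function names are hypothetical stand-ins, not the EMAC/SAAM data or interfaces.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical 4-D climate cost function (CCF): climate impact per kg of
# emission on a regular (lon, lat, altitude, time) grid -- a stand-in for
# the data set produced by the modelling chain described above.
lon = np.linspace(-80.0, 0.0, 17)        # deg, North Atlantic corridor
lat = np.linspace(30.0, 70.0, 9)         # deg
alt = np.linspace(8000.0, 12000.0, 5)    # m
time = np.linspace(0.0, 24.0, 25)        # h since start of forecast
ccf = np.random.rand(17, 9, 5, 25)       # placeholder values

ccf_interp = RegularGridInterpolator((lon, lat, alt, time), ccf)

def segment_cost(p0, p1, fuel_kg, w_climate=1.0):
    """Combined cost of one trajectory segment from p0 to p1.

    p0, p1    : (lon, lat, alt, time) tuples
    fuel_kg   : fuel burned on the segment (proxy for emissions)
    w_climate : weight trading climate impact against economic cost
    """
    midpoint = 0.5 * (np.asarray(p0, dtype=float) + np.asarray(p1, dtype=float))
    climate = ccf_interp(midpoint).item() * fuel_kg   # climate impact of the segment
    return fuel_kg + w_climate * climate              # simple fuel + climate trade-off

print(segment_cost((-40.0, 50.0, 10500.0, 6.0), (-38.0, 51.0, 10500.0, 6.3), fuel_kg=850.0))
```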
Abstract:
Distributed generation plays a key role in reducing CO2 emissions and losses in power transmission. However, due to the nature of renewable resources, distributed generation requires suitable control strategies to assure reliability and optimality for the grid. Multi-agent systems are well-suited candidates for providing distributed control of distributed generation stations as well as providing reliability and flexibility for grid integration. The proposed multi-agent energy management system consists of single-type agents, each of which controls one or more grid entities represented as generic sub-agent elements. The agent applies one control algorithm across all elements and uses a cost function to evaluate the suitability of each element as a supplier. The behavior set by the agent's user defines which parameters of an element carry greater weight in the cost function, allowing the user to specify supplier preferences dynamically. This study shows the ability of the multi-agent energy management system to select suppliers according to the selection behavior given by the user. The optimality of the supplier for the required demand is ensured by the cost function based on the parameters of the element.
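A toy sketch of the kind of behaviour-weighted supplier cost function described above; the element parameters, weights and names are illustrative assumptions, not the system's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Element:
    """Generic sub-agent element representing a potential supplier."""
    name: str
    price: float          # cost per kWh
    co2: float            # emissions per kWh
    availability: float   # fraction of requested demand it can cover, 0..1

def supplier_cost(element, behaviour):
    """Weighted cost of selecting this element as a supplier: lower is better.
    `behaviour` maps parameter names to user-defined weights, so the user's
    supplier preference can be changed dynamically."""
    return (behaviour["price"] * element.price
            + behaviour["co2"] * element.co2
            + behaviour["availability"] * (1.0 - element.availability))

elements = [
    Element("solar_farm", price=0.05, co2=0.02, availability=0.6),
    Element("diesel_genset", price=0.12, co2=0.90, availability=1.0),
]

# A 'green' behaviour: emissions dominate the cost function.
green_behaviour = {"price": 0.2, "co2": 0.7, "availability": 0.1}
best = min(elements, key=lambda e: supplier_cost(e, green_behaviour))
print(best.name)   # the element judged most suitable under this behaviour
```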
Abstract:
The correlated k-distribution (CKD) method is widely used in the radiative transfer schemes of atmospheric models and involves dividing the spectrum into a number of bands and then reordering the gaseous absorption coefficients within each one. The fluxes and heating rates for each band may then be computed by discretizing the reordered spectrum using on the order of 10 quadrature points per major gas and performing a monochromatic radiation calculation for each point. In this presentation it is shown that for clear-sky longwave calculations, sufficient accuracy for most applications can be achieved without the need for bands: reordering may be performed on the entire longwave spectrum. The resulting full-spectrum correlated k (FSCK) method requires significantly fewer monochromatic calculations than standard CKD to achieve a given accuracy. The concept is first demonstrated by comparing with line-by-line calculations for an atmosphere containing only water vapor, in which it is shown that the accuracy of heating-rate calculations improves approximately in proportion to the square of the number of quadrature points. For more than around 20 points, the root-mean-squared error flattens out at around 0.015 K/day due to the imperfect rank correlation of absorption spectra at different pressures in the profile. The spectral overlap of m different gases is treated by considering an m-dimensional hypercube where each axis corresponds to the reordered spectrum of one of the gases. This hypercube is then divided into a number of volumes, each approximated by a single quadrature point, such that the total number of quadrature points is slightly fewer than the sum of the number that would be required to treat each of the gases separately. The gaseous absorptions for each quadrature point are optimized such that they minimize a cost function expressing the deviation of the heating rates and fluxes calculated by the FSCK method from line-by-line calculations for a number of training profiles. This approach is validated for atmospheres containing water vapor, carbon dioxide, and ozone, in which it is found that in the troposphere and most of the stratosphere, heating-rate errors of less than 0.2 K/day can be achieved using a total of 23 quadrature points, decreasing to less than 0.1 K/day for 32 quadrature points. It would be relatively straightforward to extend the method to include other gases.
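Schematically, the optimisation of the quadrature-point absorptions amounts to minimising a cost function of roughly the following form (the notation is illustrative rather than the paper's own):

```latex
J(\mathbf{k}) \;=\; \sum_{p \in \text{profiles}} \left[
  \alpha \sum_{z} \left( \dot{T}^{\mathrm{FSCK}}_{p}(z;\mathbf{k}) - \dot{T}^{\mathrm{LBL}}_{p}(z) \right)^{2}
  \;+\; \beta \left( F^{\mathrm{FSCK}}_{p}(\mathbf{k}) - F^{\mathrm{LBL}}_{p} \right)^{2}
\right]
```

where $\mathbf{k}$ collects the absorption coefficients assigned to the quadrature points, $\dot{T}$ denotes heating rates, $F$ fluxes, the sum over $p$ runs over the training profiles, and $\alpha$, $\beta$ weight the two contributions.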
Abstract:
There is a current need to constrain the parameters of gravity wave drag (GWD) schemes in climate models using observational information instead of tuning them subjectively. In this work, an inverse technique is developed using data assimilation principles to estimate gravity wave parameters. Because most GWD schemes assume instantaneous vertical propagation of gravity waves within a column, observations in a single column can be used to formulate a one-dimensional assimilation problem to estimate the unknown parameters. We define a cost function that measures the differences between the unresolved drag inferred from observations (referred to here as the ‘observed’ GWD) and the GWD calculated with a parametrisation scheme. The geometry of the cost function presents some difficulties, including multiple minima and ill-conditioning because of the non-independence of the gravity wave parameters. To overcome these difficulties we propose a genetic algorithm to minimize the cost function, which provides a robust parameter estimation over a broad range of prescribed ‘true’ parameters. When real experiments using an independent estimate of the ‘observed’ GWD are performed, physically unrealistic values of the parameters can result due to the non-independence of the parameters. However, by constraining one of the parameters to lie within a physically realistic range, this degeneracy is broken and the other parameters are also found to lie within physically realistic ranges. This argues for the essential physical self-consistency of the gravity wave scheme. A much better fit to the observed GWD at high latitudes is obtained when the parameters are allowed to vary with latitude. However, a close fit can be obtained either in the upper or the lower part of the profiles, but not in both at the same time. This result is a consequence of assuming an isotropic launch spectrum. The changes of sign in the GWD found in the tropical lower stratosphere, which are associated with part of the quasi-biennial oscillation forcing, cannot be captured by the parametrisation with optimal parameters.
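The following sketch illustrates the estimation strategy on a toy single-column problem, using SciPy's differential evolution as a generic genetic-style global optimiser; the drag "scheme", the observed profile and the bounds are illustrative assumptions, not the parametrisation or data used in the study.

```python
import numpy as np
from scipy.optimize import differential_evolution

z = np.linspace(20e3, 60e3, 40)                 # altitude levels (m)
gwd_obs = -1e-5 * np.exp((z - 60e3) / 8e3)      # toy 'observed' GWD profile (m/s^2)

def gwd_scheme(params, z):
    """Toy single-column GWD parametrisation with three unknowns:
    launch amplitude, launch level and a vertical decay scale."""
    amplitude, z_launch, scale = params
    return np.where(z > z_launch,
                    -amplitude * np.exp((z - 60e3) / scale),
                    0.0)

def cost(params):
    # Sum-of-squares misfit between observed and parametrised drag.
    return np.sum((gwd_scheme(params, z) - gwd_obs) ** 2)

# Constraining each parameter to a physically plausible range breaks the
# degeneracy between non-independent parameters, as discussed above.
bounds = [(1e-6, 1e-4),   # amplitude
          (15e3, 40e3),   # launch level (m)
          (2e3, 20e3)]    # decay scale (m)

result = differential_evolution(cost, bounds, seed=0)
print(result.x, result.fun)
```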
Abstract:
Toward the ultimate goal of replacing field-based evaluation of seasonal growth habit, we describe the design and validation of a multiplex polymerase chain reaction assay diagnostic for allelic status at the barley (Hordeum vulgare ssp. vulgare L.) vernalization locus, VRN-H1. By assaying for the presence of all known insertion–deletion polymorphisms thought to be responsible for the difference between spring and winter alleles, this assay directly tests for the presence of functional polymorphism at VRN-H1. Four of the nine previously recognized VRN-H1 haplotypes (including both winter alleles) give unique profiles using this assay. The remaining five spring haplotypes share a single profile, indicative of function-altering deletions spanning, or adjacent to, the putative “vernalization critical” region of intron 1. When used in conjunction with a previously published PCR-based assay diagnostic for alleles at VRN-H2, it was possible to predict growth habit in all 100 contemporary UK spring and winter lines analyzed in this study. This assay is likely to find application when seasonal growth habit needs to be determined without the time and cost of phenotypic assessment, and during marker-assisted selection using conventional and multicross population analysis.
Abstract:
We propose a new sparse model construction method aimed at maximizing a model’s generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form of the optimal LOOMSE regularization parameter for a single-term model, for which we show that the LOOMSE can be computed analytically without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update model parameters one at a time for linear-in-the-parameters models. Consequently, a fully automated procedure is achieved without resorting to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
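The key ingredient, computing the leave-one-out error without refitting, rests on the standard identity e_loo = e / (1 - h) for the leverages h of a linear model. A minimal sketch for a single regularised term is shown below; the grid search over the regularisation parameter stands in for the paper's closed-form optimum, and all data are synthetic.

```python
import numpy as np

def loomse_single_term(phi, y, lam):
    """LOOMSE of a one-term ridge-regularised model y ~ theta * phi.

    Uses the exact leave-one-out identity e_loo_i = e_i / (1 - h_ii),
    so no explicit data splitting is needed.
    """
    phi = np.asarray(phi, dtype=float)
    y = np.asarray(y, dtype=float)
    denom = phi @ phi + lam
    theta = (phi @ y) / denom       # regularised least-squares estimate
    h = phi * phi / denom           # leverage of each sample
    resid = y - theta * phi
    loo_resid = resid / (1.0 - h)
    return np.mean(loo_resid ** 2), theta

# Choose the regularisation parameter by scanning the LOOMSE on a grid
# (the paper derives a closed-form optimum instead).
rng = np.random.default_rng(0)
phi = rng.normal(size=200)
y = 0.7 * phi + 0.1 * rng.normal(size=200)
lams = np.logspace(-4, 2, 25)
scores = [loomse_single_term(phi, y, lam)[0] for lam in lams]
best_lam = lams[int(np.argmin(scores))]
print(best_lam)
```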
Abstract:
This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect-reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to decompose the signals optimally in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier.
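A simplified sketch of this kind of optimisation, using a generic measure of cut-off sharpness for a half-band low-pass prototype; the cost function, filter length and optimiser below are illustrative assumptions rather than those used in the study.

```python
import numpy as np
from scipy.signal import freqz
from scipy.optimize import minimize

# Ideal half-band low-pass response: pass below fs/4, stop above fs/4.
w = np.linspace(0, np.pi, 512)
ideal = (w < np.pi / 2).astype(float)

def cutoff_cost(taps):
    """Penalise deviation from an ideal brick-wall half-band response,
    i.e. reward a sharp cut-off (a stand-in for the paper's cost function)."""
    _, h = freqz(taps, worN=w)
    return np.sum((np.abs(h) - ideal) ** 2)

x0 = np.full(16, 1.0 / 16.0)     # crude initial low-pass prototype
res = minimize(cutoff_cost, x0, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-9})
optimised_taps = res.x           # filter coefficients with a sharper cut-off
print(cutoff_cost(x0), "->", res.fun)
```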
Abstract:
Accurate and reliable rain rate estimates are important for various hydrometeorological applications. Consequently, rain sensors of different types have been deployed in many regions. In this work, measurements from different instruments, namely, rain gauge, weather radar, and microwave link, are combined for the first time to estimate with greater accuracy the spatial distribution and intensity of rainfall. The objective is to retrieve the rain rate that is consistent with all these measurements while incorporating the uncertainty associated with the different sources of information. Assuming the problem is not strongly nonlinear, a variational approach is implemented and the Gauss–Newton method is used to minimize the cost function containing proper error estimates from all sensors. Furthermore, the method can be flexibly adapted to additional data sources. The proposed approach is tested using data from 14 rain gauges and 14 operational microwave links located in the Zürich area (Switzerland) to correct the prior rain rate provided by the operational radar rain product from the Swiss meteorological service (MeteoSwiss). A cross-validation approach demonstrates the improvement of rain rate estimates when assimilating rain gauge and microwave link information.
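In generic variational form, the cost function being minimised combines a background term built from the radar prior with one observation term per sensor type (the notation is schematic, not the paper's):

```latex
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\sum_{k \in \{\text{gauge},\,\text{link}\}}
\bigl(\mathbf{y}_k - H_k(\mathbf{x})\bigr)^{\mathsf T}\mathbf{R}_k^{-1}\bigl(\mathbf{y}_k - H_k(\mathbf{x})\bigr)
```

where $\mathbf{x}$ is the rain-rate field, $\mathbf{x}_b$ the radar-based prior, $H_k$ the observation operators for gauges and microwave links, and $\mathbf{B}$, $\mathbf{R}_k$ the corresponding error covariances; the Gauss–Newton method linearises $H_k$ about the current estimate at each iteration, and further sensors simply contribute additional observation terms.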
Abstract:
The formulation of four-dimensional variational data assimilation allows the incorporation of constraints into the cost function which need only be weakly satisfied. In this paper we investigate the value of imposing conservation properties as weak constraints. Using the example of the two-body problem of celestial mechanics, we compare weak constraints based on conservation laws with a constraint on the background state. We show how the imposition of conservation-based weak constraints changes the nature of the gradient equation. Assimilation experiments demonstrate how this can add extra information to the assimilation process, even when the underlying numerical model is conserving.
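Schematically, imposing a conservation law as a weak constraint appends a penalty term to the usual 4D-Var cost function (the notation below is illustrative):

```latex
J(\mathbf{x}_0) \;=\; \tfrac{1}{2}\,(\mathbf{x}_0-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\sum_{i}\bigl(\mathbf{y}_i - H_i(\mathbf{x}_i)\bigr)^{\mathsf T}\mathbf{R}_i^{-1}\bigl(\mathbf{y}_i - H_i(\mathbf{x}_i)\bigr)
\;+\; \frac{1}{2\sigma_c^{2}}\sum_{i}\bigl(c(\mathbf{x}_i)-c(\mathbf{x}_0)\bigr)^{2}
```

where $\mathbf{x}_i$ is the model state propagated to observation time $i$, $c(\cdot)$ is a conserved quantity of the two-body system (e.g. energy or angular momentum), and $\sigma_c$ controls how weakly the constraint is imposed; differentiating the extra term through the model and $c$ is what changes the nature of the gradient equation.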
Abstract:
Planning a project with proper consideration of all necessary factors, and managing it to ensure successful implementation, presents many challenges. The initial stage of planning a project for bidding is costly and time consuming, and its cost and effort predictions are usually of poor accuracy. On the other hand, detailed information on previous projects may be buried in piles of archived documents, making it increasingly difficult to learn from previous experience. Project portfolio management has been brought into this field with the aim of improving information sharing and management among different projects. However, the information that can be shared is still limited to generic information. In this paper, we report a recently developed software system, COBRA, which automatically generates a project plan with time and cost effort estimates based on data collected from previously completed projects. To maximise data sharing and management among different projects, we propose a method based on product-based planning from the PRINCE2 methodology. (Automated Project Information Sharing and Management System - COBRA) Keywords: project management, product based planning, best practice, PRINCE2
Abstract:
Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models is a mathematically robust framework that can be used to describe the dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the Systems Identification and Control Theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond-duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is well justified. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed. These are necessary because, owing to dispersion, the time-domain background and sample interferograms are not symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aiming to map and control molecular relaxation processes, will be mentioned.
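As a small illustration of the state-space framework mentioned above, a single damped (Lorentzian-type) oscillator mode can be written as a two-state linear system, and a multimode response as a parallel sum of such blocks; the parameter values and units below are arbitrary illustrations, not fitted THz data.

```python
import numpy as np
from scipy import signal

def oscillator_mode(omega0, gamma, strength):
    """State-space model of one damped oscillator mode,
    x'' + gamma*x' + omega0**2 * x = strength * u(t),
    standing in for a single term of a multimode Brownian oscillator
    response (units normalised: time in ps, omega0 in rad/ps)."""
    A = np.array([[0.0, 1.0],
                  [-omega0**2, -gamma]])
    B = np.array([[0.0], [strength]])
    C = np.array([[1.0, 0.0]])   # observe the displacement only
    D = np.array([[0.0]])
    return signal.StateSpace(A, B, C, D)

# A multimode response is a superposition of such modes; a femtosecond
# pulse is approximated here by an ideal impulse input.
t = np.linspace(0.0, 20.0, 2000)                        # ps
modes = [oscillator_mode(2 * np.pi * 0.5, 0.3, 1.0),    # ~0.5 THz mode
         oscillator_mode(2 * np.pi * 1.0, 0.5, 0.6)]    # ~1.0 THz mode
response = sum(signal.impulse(m, T=t)[1] for m in modes)
```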
Abstract:
The measurement of the impact of technical change has received significant attention within the economics literature. One popular method of quantifying the impact of technical change is the use of growth accounting index numbers. However, in a recent article Nelson and Pack (1999) criticise the use of such index numbers in situations where technical change is likely to be biased in favour of one or other inputs. In particular, they criticise the common approach of applying observed cost shares, as proxies for partial output elasticities, to weight the change in quantities, which they claim is only valid under Hicks neutrality. Recent advances in the measurement of the product and factor biases of technical change developed by Balcombe et al. (2000) provide a relatively straightforward means of correcting product and factor shares in the face of biased technical progress. This paper demonstrates the correction of both revenue and cost shares used in the construction of a TFP index for UK agriculture over the period 1953 to 2000, using both revenue and cost function share equations appended with stochastic latent variables to capture the bias effect. Technical progress is shown to be biased between both individual input and output groups. Output and input quantity aggregates are then constructed using both observed and corrected share weights, and the resulting TFPs are compared. There does appear to be some significant bias in TFP if the effect of biased technical progress is not taken into account when constructing the weights.
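For reference, the growth-accounting index at issue is typically of Törnqvist form, with revenue shares weighting output growth and cost shares weighting input growth; the correction described above replaces the observed shares with bias-corrected ones (the expression below is the standard textbook form, not the paper's exact specification):

```latex
\ln\frac{\mathrm{TFP}_t}{\mathrm{TFP}_{t-1}}
= \sum_{j}\tfrac{1}{2}\bigl(r_{j,t}+r_{j,t-1}\bigr)\,\ln\frac{y_{j,t}}{y_{j,t-1}}
\;-\; \sum_{i}\tfrac{1}{2}\bigl(s_{i,t}+s_{i,t-1}\bigr)\,\ln\frac{x_{i,t}}{x_{i,t-1}}
```

where $r_j$ are the revenue shares of outputs $y_j$ and $s_i$ are the cost shares of inputs $x_i$.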
Abstract:
Maize silage nutritive quality is routinely determined by near infrared reflectance spectroscopy (NIRS). However, little is known about the impact of sample preparation on the accuracy of the calibration to predict biological traits. A sample population of 48 maize silages representing a wide range of physiological maturities was used to determine the impact of different sample preparation procedures (i.e., drying regimes; the presence or absence of residual moisture; the degree of particle comminution) on the resultant NIR prediction statistics. All silages were scanned using a total of 12 combinations of sample pre-treatments. Each sample preparation combination was subjected to three multivariate regression techniques, giving a total of 36 predictions per biological trait. Increasing the degree of sample preparation, relative to scanning the unprocessed whole-plant (WP) material, always reduced the model statistics numerically. However, the treatments differed in their ability to reduce the model statistics significantly. Particle comminution was the most important factor, oven-drying regime was intermediate, and the presence of residual moisture was the least important. Models to predict various biological parameters of maize silage will be improved if material is subjected to a high degree of particle comminution (i.e., having been passed through a 1 mm screen) and developed on plant material previously dried at 60 degrees C. The extra effort in terms of time and cost required to remove sample residual moisture cannot be justified.
Abstract:
The paper develops a measure of the consumer welfare losses associated with withholding information about a possible link between BSE and vCJD. The Cost of Ignorance (COI) is measured by comparing the utility of the informed choice with the utility of the uninformed choice, under conditions of improved information. Unlike previous work, which is largely based on single-equation demand models, the measure is obtained by retrieving a cost function from a dynamic Almost Ideal Demand System. The estimated perceived loss for Italian consumers due to delayed information ranges from 12 percent to 54 percent of total meat expenditure, depending on the month assumed to embody correct beliefs about the safety level of beef.
Abstract:
Hidden Markov Models (HMMs) have been successfully applied to a variety of modelling and classification problems in different areas in recent years. An important step in using HMMs is the initialisation of the model parameters, since the subsequent learning of the HMM’s parameters depends on these values. This initialisation should take into account knowledge about the problem being addressed, as well as optimisation techniques to estimate the best initial parameters given a cost function and, consequently, the best log-likelihood. This paper proposes initialising Hidden Markov Model parameters using the optimisation algorithm Differential Evolution, with the aim of obtaining the best log-likelihood.
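A minimal sketch of the idea, assuming a discrete-observation HMM whose log-likelihood is evaluated with the scaled forward algorithm and whose initial parameters are selected by SciPy's differential evolution; the observation sequence, parameter encoding and bounds are illustrative, not those of the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

obs = np.array([0, 1, 1, 0, 2, 2, 1, 0, 0, 2])   # toy discrete observation sequence
n_states, n_symbols = 2, 3

def unpack(params):
    """Map an unconstrained parameter vector to valid (pi, A, B) by
    reshaping and row-normalising."""
    pi = params[:n_states]
    A = params[n_states:n_states + n_states**2].reshape(n_states, n_states)
    B = params[n_states + n_states**2:].reshape(n_states, n_symbols)
    norm = lambda m: m / m.sum(axis=-1, keepdims=True)
    return norm(pi), norm(A), norm(B)

def neg_log_likelihood(params):
    """Negative log-likelihood of the observation sequence (the cost function),
    computed with the scaled forward algorithm."""
    pi, A, B = unpack(params)
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return -ll

n_params = n_states + n_states**2 + n_states * n_symbols
bounds = [(1e-3, 1.0)] * n_params
result = differential_evolution(neg_log_likelihood, bounds, seed=1, maxiter=200)
pi0, A0, B0 = unpack(result.x)   # initial parameters for subsequent HMM training
print(-result.fun)               # best log-likelihood found
```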