969 results for Multiple-trait model


Relevance:

30.00%

Publisher:

Abstract:

We present a refined parametric model for forecasting electricity demand which performed particularly well in the recent Global Energy Forecasting Competition (GEFCom 2012). We begin by motivating and presenting a simple parametric model that treats electricity demand as a function of the temperature and the date. We then set out a series of refinements of the model, explaining the rationale for each and using the competition scores to demonstrate that each successive refinement step increases the accuracy of the model's predictions. These refinements include combining models from multiple weather stations, removing outliers from the historical data, and special treatment of public holidays.
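
As a rough illustration of this kind of parametric demand model, the sketch below regresses demand on a cubic temperature term plus day-of-week and month dummies; the feature choices and function names are illustrative assumptions, not the model entered in GEFCom 2012.

```python
import numpy as np

def design_matrix(temp, day_of_week, month):
    """Simple parametric design: cubic in temperature plus day-of-week and
    month dummies (baseline categories dropped to avoid collinearity)."""
    temp = np.asarray(temp, dtype=float)
    cols = [np.ones_like(temp), temp, temp**2, temp**3]
    cols += [(np.asarray(day_of_week) == d).astype(float) for d in range(1, 7)]
    cols += [(np.asarray(month) == m).astype(float) for m in range(2, 13)]
    return np.column_stack(cols)

def fit_demand_model(temp, day_of_week, month, demand):
    """Least-squares fit of demand against the parametric design."""
    X = design_matrix(temp, day_of_week, month)
    beta, *_ = np.linalg.lstsq(X, np.asarray(demand, dtype=float), rcond=None)
    return beta

def predict_demand(beta, temp, day_of_week, month):
    return design_matrix(temp, day_of_week, month) @ beta
```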

Relevance:

30.00%

Publisher:

Abstract:

Many studies evaluating model boundary-layer schemes focus either on near-surface parameters or on short-term observational campaigns. This reflects the observational datasets that are widely available for use in model evaluation. In this paper we show how surface and long-term Doppler lidar observations, combined in a way that matches the model representation of the boundary layer as closely as possible, can be used to evaluate the skill of boundary-layer forecasts. We use a 2-year observational dataset from a rural site in the UK to evaluate a climatology of boundary-layer type forecast by the UK Met Office Unified Model. In addition, we demonstrate the use of a binary skill score (the Symmetric Extremal Dependence Index, SEDI) to investigate the dependence of forecast skill on season, horizontal resolution and forecast lead time. A clear diurnal and seasonal cycle can be seen in the climatology of both the model and the observations, with the main discrepancies being that the model overpredicts cumulus-capped and decoupled stratocumulus-capped boundary layers and underpredicts well-mixed boundary layers. Using the SEDI skill score, the model is most skillful at predicting surface stability. The model's skill in predicting cumulus-capped and stratocumulus-capped stable boundary layers is low but greater than that of a 24 hr persistence forecast. In contrast, its skill in predicting decoupled boundary layers and boundary layers with multiple cloud layers is lower than persistence. This process-based evaluation approach has the potential to be applied to other boundary-layer parameterisation schemes with similar decision structures.
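
For reference, the Symmetric Extremal Dependence Index is commonly defined from the hit rate H and false-alarm rate F of a 2x2 contingency table (Ferro and Stephenson, 2011); the sketch below computes it for hypothetical counts, which are not values from the paper.

```python
import math

def sedi(hits, misses, false_alarms, correct_negatives):
    """Symmetric Extremal Dependence Index from a 2x2 contingency table.
    H = hit rate, F = false-alarm rate; undefined when H or F equals 0 or 1."""
    H = hits / (hits + misses)
    F = false_alarms / (false_alarms + correct_negatives)
    num = math.log(F) - math.log(H) - math.log(1 - F) + math.log(1 - H)
    den = math.log(F) + math.log(H) + math.log(1 - F) + math.log(1 - H)
    return num / den

# Hypothetical counts, not values from the paper
print(round(sedi(hits=40, misses=10, false_alarms=20, correct_negatives=130), 3))
```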

Relevance:

30.00%

Publisher:

Abstract:

We present a new parameterisation that relates surface mass balance (SMB: the sum of surface accumulation and surface ablation) to changes in surface elevation of the Greenland ice sheet (GrIS) for the MAR (Modèle Atmosphérique Régional: Fettweis, 2007) regional climate model. The motivation is to dynamically adjust SMB as the GrIS evolves, allowing us to force ice sheet models with SMB simulated by MAR while incorporating the SMB–elevation feedback, without the substantial technical challenges of coupling ice sheet and climate models. This also allows us to assess the effect of elevation feedback uncertainty on the GrIS contribution to sea level, using multiple global climate and ice sheet models, without the need for additional, expensive MAR simulations. We estimate this relationship separately below and above the equilibrium line altitude (ELA, separating negative and positive SMB) and for regions north and south of 77° N, from a set of MAR simulations in which we alter the ice sheet surface elevation. These give four "SMB lapse rates", gradients that relate SMB changes to elevation changes. We assess uncertainties within a Bayesian framework, estimating probability distributions for each gradient from which we present best estimates and credibility intervals (CI) that bound 95% of the probability. Below the ELA our gradient estimates are mostly positive, because SMB usually increases with elevation: 0.56 (95% CI: −0.22 to 1.33) kg m−3 a−1 for the north, and 1.91 (1.03 to 2.61) kg m−3 a−1 for the south. Above the ELA, the gradients are much smaller in magnitude: 0.09 (−0.03 to 0.23) kg m−3 a−1 in the north, and 0.07 (−0.07 to 0.59) kg m−3 a−1 in the south, because SMB can either increase or decrease in response to increased elevation. Our statistically founded approach allows us to make probabilistic assessments for the effect of elevation feedback uncertainty on sea level projections (Edwards et al., 2014).
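
A minimal sketch of how such piecewise lapse rates could be applied, using only the central estimates quoted above; the function name, the hard split on 77° N and the neglect of the credibility intervals are simplifying assumptions rather than the parameterisation itself.

```python
def smb_change(delta_elevation_m, latitude_deg, below_ela):
    """SMB adjustment (kg m-2 a-1) for an elevation change (m), using the central
    lapse-rate estimates quoted above in kg m-3 a-1. Sampling the full posterior
    distributions, as in the paper, is omitted here for brevity."""
    if below_ela:
        gradient = 0.56 if latitude_deg >= 77.0 else 1.91   # north / south of 77 N
    else:
        gradient = 0.09 if latitude_deg >= 77.0 else 0.07
    return gradient * delta_elevation_m

# Example: 100 m of surface lowering below the ELA in the south reduces SMB by ~191 kg m-2 a-1
print(smb_change(-100.0, 70.0, below_ela=True))
```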

Relevance:

30.00%

Publisher:

Abstract:

In this paper we report coordinated multispacecraft and ground-based observations of a double substorm onset close to Scandinavia on November 17, 1996. The Wind and Geotail spacecraft, which were located in the solar wind and the subsolar magnetosheath, respectively, recorded two periods of southward directed interplanetary magnetic field (IMF). These periods were separated by a short northward IMF excursion associated with a solar wind pressure pulse, which compressed the magnetosphere to such a degree that Geotail was for a short period located outside the bow shock. The first period of southward IMF initiated a substorm growth phase, which was clearly detected by an array of ground-based instrumentation and by Interball in the northern tail lobe. A first substorm onset occurred in close relation to the solar wind pressure pulse impinging on the magnetopause and almost simultaneously with the northward turning of the IMF. However, this substorm did not fully develop. In clear association with the expansion of the magnetosphere at the end of the pressure pulse, the auroral expansion stopped, and the northern sky cleared. We present evidence that the change in the solar wind dynamic pressure actively quenched the energy available for any further substorm expansion. Directly after this period, the magnetometer network detected signatures of a renewed substorm growth phase, which was initiated by the second southward turning of the IMF and which finally led to a second, and this time complete, substorm intensification. We used our multipoint observations to understand the solar wind control of substorm onset and substorm quenching. The relative timings between the observations on the various satellites and on the ground were used to infer a possible causal relationship between the solar wind pressure variations and the consequent substorm development. Furthermore, using a relatively simple algorithm to model the tail lobe field and the total tail flux, we show that there indeed exists a close relationship between the relaxation of a solar wind pressure pulse, the reduction of the tail lobe field, and the quenching of the initial substorm.

Relevance:

30.00%

Publisher:

Abstract:

We present a simple, generic model of annual tree growth, called "T". This model accepts input from a first-principles light-use efficiency model (the "P" model). The P model provides values for gross primary production (GPP) per unit of absorbed photosynthetically active radiation (PAR). Absorbed PAR is estimated from the current leaf area. GPP is allocated to foliage, transport tissue, and fine-root production and respiration in such a way as to satisfy well-understood dimensional and functional relationships. Our approach thereby integrates two modelling approaches separately developed in the global carbon-cycle and forest-science literature. The T model can represent both ontogenetic effects (the impact of ageing) and the effects of environmental variations and trends (climate and CO2) on growth. Driven by local climate records, the model was applied to simulate ring widths during the period 1958–2006 for multiple trees of Pinus koraiensis from the Changbai Mountains in northeastern China. Each tree was initialised at its actual diameter at the time when local climate records started. The model produces realistic simulations of the interannual variability in ring width for different age cohorts (young, mature, and old). Both the simulations and observations show a significant positive response of tree-ring width to growing-season total photosynthetically active radiation (PAR0) and the ratio of actual to potential evapotranspiration (α), and a significant negative response to mean annual temperature (MAT). The slopes of the simulated and observed relationships with PAR0 and α are similar; the negative response to MAT is underestimated by the model. Comparison of simulations with fixed and changing atmospheric CO2 concentration shows that CO2 fertilisation over the past 50 years is too small to be distinguished in the ring-width data, given ontogenetic trends and interannual variability in climate.
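
The light-use efficiency step the P model supplies to the T model can be sketched roughly as GPP = LUE × absorbed PAR, with absorbed PAR following Beer's-law canopy interception of incident PAR; the extinction coefficient and function names below are assumptions, not values from the paper.

```python
import numpy as np

def absorbed_par(par_total, leaf_area_index, k=0.5):
    """Beer's-law canopy light interception; k = 0.5 is an assumed extinction
    coefficient, not a value quoted in the abstract."""
    return par_total * (1.0 - np.exp(-k * leaf_area_index))

def annual_gpp(par_total, leaf_area_index, lue):
    """GPP = light-use efficiency x absorbed PAR, the quantity the P model hands
    to the T model before allocation to foliage, transport tissue and fine roots."""
    return lue * absorbed_par(par_total, leaf_area_index)
```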

Relevance:

30.00%

Publisher:

Abstract:

Traditional dictionary learning algorithms find a sparse representation of high-dimensional data by transforming each sample into a one-dimensional (1D) vector. This 1D model loses the inherent spatial structure of the data. An alternative is to employ tensor decomposition for dictionary learning on the data in its original structural form, a tensor, by learning a dictionary along each mode and the corresponding sparse representation with respect to the Kronecker product of these dictionaries. To learn tensor dictionaries along each mode, all the existing methods update each dictionary iteratively in an alternating manner. Because atoms from the different mode dictionaries jointly contribute to the sparsity of the tensor, existing works that treat each mode dictionary independently ignore the correlations between atoms of different mode dictionaries. In this paper, we propose a joint multiple dictionary learning method for tensor sparse coding, which explores atom correlations for sparse representation and updates multiple atoms from each mode dictionary simultaneously. In this algorithm, the Frequent-Pattern Tree (FP-tree) mining algorithm is employed to exploit frequent atom patterns in the sparse representation. Inspired by the idea of K-SVD, we develop a new dictionary update method that jointly updates elements in each pattern. Experimental results demonstrate that our method outperforms other tensor-based dictionary learning algorithms.
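
A toy sketch of the Kronecker-product view of tensor sparse coding described above: a sparse core combined with one dictionary per mode reconstructs the tensor, and vectorising both sides makes the Kronecker structure explicit. Dictionary sizes and the active atoms below are arbitrary; this is not the proposed FP-tree-based update method.

```python
import numpy as np

# Toy example: a 3-way tensor X represented with one dictionary per mode and a
# sparse core S, i.e. X = S x1 D1 x2 D2 x3 D3, equivalently
# vec(X) = (D3 kron D2 kron D1) vec(S) for column-major vectorisation.
rng = np.random.default_rng(0)
D1 = rng.standard_normal((4, 6))
D2 = rng.standard_normal((5, 7))
D3 = rng.standard_normal((3, 4))

S = np.zeros((6, 7, 4))
S[1, 2, 0] = 2.0      # only two active atom combinations -> sparse representation
S[4, 0, 3] = -1.5

X = np.einsum('ia,jb,kc,abc->ijk', D1, D2, D3, S)            # mode-n products

vecX = np.kron(np.kron(D3, D2), D1) @ S.reshape(-1, order='F')
print(np.allclose(vecX, X.reshape(-1, order='F')))           # True
```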

Relevance:

30.00%

Publisher:

Abstract:

Break crops and multi-crop rotations are common in arable farm management, and the soil quality inherited from a previous crop is one of the parameters that determine the gross margin achieved with a given crop from a given parcel of land. In previous work we developed a dynamic economic model to calculate the potential yield and gross margin of a set of crops grown in a selection of typical rotation scenarios, and we reported use of the model to calculate coexistence costs for GM maize grown in a crop rotation. The model predicts the economic effects of pest and weed pressures in monthly time steps. Validation of the model with respect to specific traits is proceeding as data from trials with novel crop varieties are published. Alongside this aspect of the validation process, we are able to incorporate data representing the economic impact of abiotic stresses on conventional crops, and then use the model to predict the cumulative gross margin achievable from a sequence of conventional crops grown at varying levels of abiotic stress. We report new progress with this aspect of model validation. In this paper, we describe the further development of the model to take account of abiotic stress arising from drought, flood, heat or frost, with such stresses introduced in addition to variable pest and weed pressure. The main purpose is to assess the economic incentive for arable farmers to adopt novel crop varieties having multiple ‘stacked’ traits introduced by means of the various biotechnological tools available to crop breeders.
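
A heavily simplified sketch of the gross-margin bookkeeping such a rotation model performs, with stress expressed as a yield multiplier; all crop figures, field names and the annual (rather than monthly) time step are hypothetical assumptions, not values from the model in the paper.

```python
def rotation_gross_margin(crops, stress_factor):
    """Cumulative gross margin over a rotation: realised yield (potential yield
    scaled by a stress multiplier) times price, minus variable costs, summed over
    the sequence of crops. All figures and field names are hypothetical."""
    total = 0.0
    for crop in crops:
        realised_yield = crop["potential_yield"] * stress_factor(crop)
        total += realised_yield * crop["price"] - crop["variable_costs"]
    return total

rotation = [
    {"name": "wheat", "potential_yield": 8.0, "price": 180.0, "variable_costs": 650.0},
    {"name": "beans", "potential_yield": 4.5, "price": 230.0, "variable_costs": 420.0},
    {"name": "maize", "potential_yield": 10.0, "price": 160.0, "variable_costs": 700.0},
]
# A uniform 10% yield loss from drought stress
print(rotation_gross_margin(rotation, stress_factor=lambda crop: 0.9))
```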

Relevance:

30.00%

Publisher:

Abstract:

The sustainable delivery of multiple ecosystem services requires the management of functionally diverse biological communities. In an agricultural context, an emphasis on food production has often led to a loss of biodiversity to the detriment of other ecosystem services such as the maintenance of soil health and pest regulation. In scenarios where multiple species can be grown together, it may be possible to better balance environmental and agronomic services through the targeted selection of companion species. We used the case study of legume-based cover crops to engineer a plant community that delivered the optimal balance of six ecosystem services: early productivity, regrowth following mowing, weed suppression, support of invertebrates, soil fertility building (measured as yield of following crop), and conservation of nutrients in the soil. An experimental species pool of 12 cultivated legume species was screened for a range of functional traits and ecosystem services at five sites across a geographical gradient in the United Kingdom. All possible species combinations were then analyzed, using a process-based model of plant competition, to identify the community that delivered the best balance of services at each site. In our system, low to intermediate levels of species richness (one to four species) that exploited functional contrasts in growth habit and phenology were identified as being optimal. The optimal solution was determined largely by the number of species and functional diversity represented by the starting species pool, emphasizing the importance of the initial selection of species for the screening experiments. The approach of using relationships between functional traits and ecosystem services to design multifunctional biological communities has the potential to inform the design of agricultural systems that better balance agronomic and environmental services and meet the current objective of European agricultural policy to maintain viable food production in the context of the sustainable management of natural resources.
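
A naive sketch of the combinatorial screening step, scoring each candidate mixture by its weakest normalised service so that the optimum rewards balance; the random service scores and the simple averaging rule stand in for the process-based competition model actually used in the study.

```python
from itertools import combinations
import numpy as np

# Hypothetical per-species service scores (12 species x 6 services), scaled 0-1.
rng = np.random.default_rng(1)
services = rng.random((12, 6))

def mixture_score(species_idx):
    """Score a candidate mixture by its weakest mean service level, so the
    optimum rewards a balanced delivery of all six services. This simple
    averaging stands in for the process-based competition model."""
    return services[list(species_idx)].mean(axis=0).min()

all_mixtures = (c for r in range(1, 13) for c in combinations(range(12), r))
best = max(all_mixtures, key=mixture_score)
print(best, round(mixture_score(best), 3))
```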

Relevance:

30.00%

Publisher:

Abstract:

Customers will not continue to pay for a service if it is perceived to be of poor quality and/or of no value. With a paradigm shift towards business dependence on service-orientated IS solutions [1], it is critical that alignment exists between service definition, delivery, and customer expectation if businesses are to ensure customer satisfaction. Services, and micro-service development, offer businesses a flexible structure for solution innovation; however, constant changes in technology and in business and societal expectations mean that an iterative analysis solution is required to i) determine whether provider services adequately meet customer segment needs and expectations, and ii) help guide business service innovation and development. In this paper, by incorporating multiple models, we propose a series of steps to help identify and prioritise service gaps. Moreover, the authors propose the Dual Semiosis Analysis Model, a tool that highlights where, within the symbiotic customer/provider semiosis process, requirements misinterpretation and/or service provision deficiencies occur. This paper offers the reader a powerful customer-centric tool, designed to help business managers highlight both which services are critical to customer quality perception and where future innovation effort should be focused.

Relevance:

30.00%

Publisher:

Abstract:

Atmospheric pollution over South Asia attracts special attention due to its effects on regional climate, the water cycle and human health. These effects are potentially growing owing to rising trends in anthropogenic aerosol emissions. In this study, the spatio-temporal aerosol distributions over South Asia from seven global aerosol models are evaluated against aerosol retrievals from NASA satellite sensors and ground-based measurements for the period 2000–2007. Overall, substantial underestimations of aerosol loading over South Asia are found systematically in most model simulations. Averaged over the entire region, the annual mean aerosol optical depth (AOD) is underestimated by 15 to 44% across the models compared to MISR (Multi-angle Imaging SpectroRadiometer), which is the lowest bound among the various satellite AOD retrievals (from MISR, SeaWiFS (Sea-Viewing Wide Field-of-View Sensor), and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua and Terra). In particular, during the post-monsoon and wintertime periods (i.e., October–January), when agricultural waste burning and anthropogenic emissions dominate, models fail to capture AOD and aerosol absorption optical depth (AAOD) over the Indo-Gangetic Plain (IGP) compared to ground-based Aerosol Robotic Network (AERONET) sunphotometer measurements. The underestimation of aerosol loading in the models generally occurs in the lower troposphere (below 2 km), based on comparisons of aerosol extinction profiles calculated by the models with those from Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data. Furthermore, surface concentrations of all aerosol components (sulfate, nitrate, organic aerosol (OA) and black carbon (BC)) from the models are found to be much lower than in situ measurements in winter. Several possible causes for these common underestimations during the post-monsoon and wintertime periods are identified: aerosol hygroscopic growth and the formation of secondary inorganic aerosol are suppressed because relative humidity (RH) is biased far too low in the boundary layer, so that foggy conditions are poorly represented in current models; nitrate aerosol is either missing or inadequately accounted for; and emissions from agricultural waste burning and biofuel usage are too low in the emission inventories. These common problems and possible causes, found in multiple models, point to directions for future model improvements in this important region.
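
For clarity, the quoted underestimation is simply the regional-mean AOD bias expressed relative to the satellite retrieval, as in the small sketch below; the function is illustrative only and is not part of the study's evaluation code.

```python
import numpy as np

def aod_underestimate_percent(model_aod, satellite_aod):
    """Regional-mean AOD underestimation relative to a satellite retrieval,
    in percent (positive means the model is lower than the satellite)."""
    model_mean = np.nanmean(np.asarray(model_aod, dtype=float))
    satellite_mean = np.nanmean(np.asarray(satellite_aod, dtype=float))
    return 100.0 * (satellite_mean - model_mean) / satellite_mean
```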

Relevance:

30.00%

Publisher:

Abstract:

Trace element measurements in PM10–2.5, PM2.5–1.0 and PM1.0–0.3 aerosol were performed with 2 h time resolution at kerbside, urban background and rural sites during the ClearfLo winter 2012 campaign in London. The environment-dependent variability of emissions was characterized using the Multilinear Engine implementation of the positive matrix factorization model, conducted on data sets comprising all three sites but segregated by size. Combining the sites enabled separation of sources with high temporal covariance but significant spatial variability. Separation of sizes improved source resolution by preventing sources occurring in only a single size fraction from having too small a contribution for the model to resolve. Anchor profiles were retrieved internally by analysing data subsets, and these profiles were used in the analyses of the complete data sets of all sites for enhanced source apportionment. A total of nine different factors were resolved (notable elements in brackets): in PM10–2.5, brake wear (Cu, Zr, Sb, Ba), other traffic-related (Fe), resuspended dust (Si, Ca), sea/road salt (Cl), aged sea salt (Na, Mg) and industrial (Cr, Ni); in PM2.5–1.0, brake wear, other traffic-related, resuspended dust, sea/road salt, aged sea salt and S-rich (S); and in PM1.0–0.3, traffic-related (Fe, Cu, Zr, Sb, Ba), resuspended dust, sea/road salt, aged sea salt, reacted Cl (Cl), S-rich and solid fuel (K, Pb). Human activities enhance the kerb-to-rural concentration gradients of coarse aged sea salt, typically considered to have a natural source, by a factor of 1.7–2.2. These site-dependent concentration differences reflect the effect of local resuspension processes in London. The anthropogenically influenced factors (traffic, i.e. brake wear and other traffic-related processes, dust and sea/road salt) show further kerb-to-rural concentration enhancements from direct source emissions, by factors of 3.5–12.7. The traffic and dust factors are mainly emitted in PM10–2.5 and show strong diurnal variations, with concentrations up to 4 times higher during rush hour than during night-time. The regionally influenced S-rich and solid fuel factors, occurring primarily in PM1.0–0.3, have negligible resuspension influences, and concentrations are similar throughout the day and across the regions.
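
A toy stand-in for the receptor-model factorisation described above: the time-by-element concentration matrix is decomposed into non-negative factor contributions and profiles. Plain NMF is used here for illustration only; PMF/ME-2 additionally weights residuals by measurement uncertainties and supports the anchor-profile constraints mentioned in the abstract, and all data in the sketch are synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic (time x element) concentration matrix X built from 3 "sources",
# then factored as X ≈ G F with non-negative contributions G and profiles F.
rng = np.random.default_rng(0)
true_profiles = rng.random((3, 8))             # 3 sources, 8 elements
true_contrib = rng.gamma(2.0, 1.0, (200, 3))   # 200 two-hourly samples
X = true_contrib @ true_profiles + rng.random((200, 8)) * 0.01

model = NMF(n_components=3, init='nndsvda', max_iter=1000, random_state=0)
G = model.fit_transform(X)    # factor contributions (time series)
F = model.components_         # factor profiles (element signatures)
print(np.linalg.norm(X - G @ F) / np.linalg.norm(X))   # relative reconstruction error
```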

Relevance:

30.00%

Publisher:

Abstract:

An ability to quantify the reliability of probabilistic flood inundation predictions is a requirement not only for guiding model development but also for their successful application. Probabilistic flood inundation predictions are usually produced by choosing a method of weighting the model parameter space, but previous work suggests that this choice leads to clear differences in inundation probabilities. This study aims to address the evaluation of the reliability of these probabilistic predictions. However, the lack of an adequate number of observations of flood inundation for a catchment limits the application of conventional methods of evaluating predictive reliability. Consequently, attempts have been made to assess the reliability of probabilistic predictions using multiple observations from a single flood event. Here, a LISFLOOD-FP hydraulic model of an extreme (>1 in 1000 years) flood event in Cockermouth, UK, is constructed and calibrated using multiple performance measures from both peak flood wrack mark data and aerial photography captured post-peak. These measures are used in weighting the parameter space to produce multiple probabilistic predictions for the event. Two methods of assessing the reliability of these probabilistic predictions using limited observations are utilized: an existing method assessing the binary pattern of flooding, and a method developed in this paper to assess predictions of water surface elevation. This study finds that the water surface elevation method has better diagnostic and discriminatory ability, but this result is likely to be sensitive to the unknown uncertainties in the upstream boundary condition.
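
A minimal, GLUE-style sketch of how a weighted parameter ensemble yields per-cell inundation probabilities; the weighting rule, performance values and maps below are hypothetical, and choosing that weighting is precisely the issue the study examines.

```python
import numpy as np

def inundation_probability(wet_maps, performance):
    """Weight an ensemble of binary inundation maps (members x cells) by a
    performance measure evaluated against observations (e.g. fit to wrack-mark
    levels) and return the weighted probability of flooding per cell.
    Illustrative sketch only; the weighting scheme itself is a choice."""
    w = np.clip(np.asarray(performance, dtype=float), 0.0, None)
    w = w / w.sum()                                  # normalise behavioural weights
    return np.tensordot(w, np.asarray(wet_maps, dtype=float), axes=1)

# Example: 4 parameter sets, 5 grid cells (all values hypothetical)
wet = np.array([[1, 1, 0, 0, 0],
                [1, 1, 1, 0, 0],
                [1, 0, 0, 0, 0],
                [1, 1, 1, 1, 0]])
print(inundation_probability(wet, performance=[0.8, 0.6, 0.2, 0.4]))
```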

Relevance:

30.00%

Publisher:

Abstract:

The predictability of high-impact weather events on multiple time scales is a crucial issue in both scientific and socio-economic terms. In this study, a statistical-dynamical downscaling (SDD) approach is applied to an ensemble of decadal hindcasts obtained with the Max-Planck-Institute Earth System Model (MPI-ESM) to estimate the decadal predictability of peak wind speeds (as a proxy for gusts) over Europe. Yearly initialized decadal ensemble simulations with ten members are investigated for the period 1979–2005. The SDD approach is trained with COSMO-CLM regional climate model simulations and ERA-Interim reanalysis data and applied to the MPI-ESM hindcasts. The simulations for the period 1990–1993, which was characterized by several windstorm clusters, are analyzed in detail. The anomalies of the 95 % peak wind quantile of the MPI-ESM hindcasts are in line with the positive anomalies in the reanalysis data for this period. To evaluate both the skill of the decadal predictability system and the added value of the downscaling approach, quantile verification skill scores are calculated for both the MPI-ESM large-scale wind speeds and the SDD-simulated regional peak winds. Skill scores are predominantly positive for the decadal predictability system, with the highest values for short lead times and for (peak) wind speeds equal to or above the 75 % quantile. This provides evidence that the analyzed hindcasts and the downscaling technique are suitable for estimating wind and peak wind speeds over Central Europe on decadal time scales. The skill scores for the SDD-simulated peak winds are slightly lower than those for the large-scale wind speeds. This behavior can be largely attributed to the fact that peak winds are a proxy for gusts and thus have a higher variability than wind speeds. The introduced cost-efficient downscaling technique has the advantage of estimating not only wind speeds but also peak winds (a proxy for gusts), and it can easily be applied to large ensemble datasets such as operational decadal prediction systems.
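
One common way to define a quantile verification skill score is via the pinball (check) loss relative to a reference forecast; the sketch below uses that definition, which is an assumption here since the abstract does not spell out the exact formulation used in the paper.

```python
import numpy as np

def quantile_score(obs, forecast, tau):
    """Mean pinball (check) loss of a forecast for the tau-quantile."""
    obs, forecast = np.asarray(obs, dtype=float), np.asarray(forecast, dtype=float)
    diff = obs - forecast
    return np.mean(np.where(diff >= 0, tau * diff, (tau - 1) * diff))

def quantile_skill_score(obs, forecast, reference, tau):
    """QSS = 1 - QS_forecast / QS_reference; positive values mean the forecast
    beats the reference (e.g. a climatological quantile) at that quantile."""
    return 1.0 - quantile_score(obs, forecast, tau) / quantile_score(obs, reference, tau)
```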

Relevance:

30.00%

Publisher:

Abstract:

Genome-wide association studies (GWAS) have been widely used in the genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single-marker analysis, such as efficient mixed model analysis (EMMA). These methods require a Bonferroni correction for multiple tests, which is often too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM method with a less stringent selection criterion. Due to its multi-locus nature, no multiple-test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering-time-related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. Therefore, the MRMLM provides an alternative for multi-locus GWAS.
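
To make the multiple-testing point concrete, the sketch below runs a naive single-marker scan with a plain Bonferroni threshold of alpha/m; it ignores the kinship (polygenic) random effect of EMMA-style mixed models and is not the RMLM/MRMLM method itself.

```python
import numpy as np
from scipy import stats

def single_marker_scan(y, genotypes, alpha=0.05):
    """Naive single-marker association scan: regress the phenotype on each SNP in
    turn and apply a plain Bonferroni threshold alpha / m. It ignores the kinship
    random effect that EMMA-style mixed models include, and is meant only to show
    how punishing the correction becomes as the marker count m grows."""
    y = np.asarray(y, dtype=float)
    genotypes = np.asarray(genotypes, dtype=float)   # n individuals x m markers
    m = genotypes.shape[1]
    pvals = np.array([stats.linregress(genotypes[:, j], y).pvalue for j in range(m)])
    threshold = alpha / m
    return pvals, pvals < threshold, threshold
```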

Relevance:

30.00%

Publisher:

Abstract:

The first multi-model study to estimate the predictability of a boreal Sudden Stratospheric Warming (SSW) is performed using five NWP systems. During the 2012-2013 boreal winter, anomalous upward-propagating planetary wave activity was observed towards the end of December, which was followed by a rapid deceleration of the westerly circulation around 2 January 2013; on 7 January 2013 the zonal mean zonal wind at 60°N and 10 hPa reversed to easterly. This stratospheric dynamical activity was followed by an equatorward shift of the tropospheric jet stream and by a high pressure anomaly over the North Atlantic, which resulted in severe cold conditions in the UK and Northern Europe. In most of the five models, the SSW event was predicted 10 days in advance. However, only some ensemble members in most of the models predicted a weakening of the westerly wind when the models were initialized 15 days in advance of the SSW. Further dynamical analysis shows that this event was characterized by anomalous planetary wave-1 amplification followed by anomalous wave-2 amplification in the stratosphere, which resulted in a vortex split between 6 January 2013 and 8 January 2013. The models have some success in reproducing the wave-1 activity when initialized 15 days in advance, but they generally failed to reproduce the wave-2 activity during the final days of the event. Detailed analysis shows that the models have reasonably good skill in forecasting the tropospheric blocking features that stimulate wave-2 amplification in the troposphere, but they have limited skill in reproducing the wave-2 amplification in the stratosphere.
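
The wind-reversal criterion referred to above (zonal-mean zonal wind at 60°N and 10 hPa turning easterly) can be checked with a few lines; the sketch below is a generic detector, with the input series and date handling as assumptions rather than the diagnostic code used in the study.

```python
import numpy as np

def ssw_onset(dates, u_60N_10hPa):
    """Return the first date on which the zonal-mean zonal wind at 60 N and
    10 hPa (m/s, positive = westerly) reverses to easterly, the wind-reversal
    criterion for a major SSW referred to in the abstract."""
    u = np.asarray(u_60N_10hPa, dtype=float)
    for i in range(1, len(u)):
        if u[i - 1] >= 0.0 and u[i] < 0.0:
            return dates[i]
    return None
```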