934 results for Random parameter Logit Model
Abstract:
The nuclear time-dependent Hartree-Fock model formulated in three-dimensional space, based on the full standard Skyrme energy density functional complemented with the tensor force, is presented. Full self-consistency is achieved by the model. The application to the isovector giant dipole resonance is discussed in the linear limit, ranging from spherical nuclei (16O and 120Sn) to systems displaying axial or triaxial deformation (24Mg, 28Si, 178Os, 190W and 238U). Particular attention is paid to the spin-dependent terms from the central sector of the functional, recently included together with the tensor. They turn out to be capable of producing a qualitative change in the strength distribution in this channel. The effect on the deformation properties is also discussed. The quantitative effects on the linear response are small and, overall, the giant dipole energy remains unaffected. Calculations are compared to predictions from the (quasi)particle random-phase approximation and to experimental data where available, finding good agreement.
Abstract:
This paper details a strategy for modifying the source code of a complex model so that the model may be used in a data assimilation context, and gives the standards for implementing a data assimilation code to use such a model. The strategy relies on keeping the model separate from any data assimilation code, and coupling the two through Message Passing Interface (MPI) functionality. This strategy limits the changes necessary to the model and as such is rapid to program, at the expense of ultimate performance. The implementation technique is applied to different models with state dimension up to $2.7 \times 10^8$. The overheads added by using this implementation strategy in a coupled ocean-atmosphere climate model are shown to be an order of magnitude smaller than the addition of correlated stochastic random errors necessary for some nonlinear data assimilation techniques.
Abstract:
The canopy interception capacity is a small but key part of the surface hydrology, which affects the amount of water intercepted by vegetation and therefore the partitioning of evaporation and transpiration. However, little research with climate models has been done to understand the effects of the range of possible canopy interception capacity parameter values. This is in part due to the assumption that it does not significantly affect climate. Near-global evapotranspiration products now make evaluation of canopy interception capacity parameterisations possible. We use a range of canopy water interception capacity values from the literature to investigate the effect on climate within the climate model HadCM3. We find that mean temperature changes by up to -0.64 K in the global mean and by up to -1.9 K regionally. These temperature impacts are predominantly due to changes in the evaporative fraction and top-of-atmosphere albedo. In the tropics, the variations in evapotranspiration affect precipitation, significantly enhancing rainfall. Comparing the model output to measurements, we find that the default canopy interception capacity parameterisation overestimates canopy interception loss (i.e. canopy evaporation) and underestimates transpiration. Overall, decreasing canopy interception capacity improves the evapotranspiration partitioning in HadCM3, though the measurement literature more strongly supports an increase. The high sensitivity of climate to the parameterisation of canopy interception capacity is partly due to the high number of light rain-days in the climate model, which causes interception to be overestimated. This work highlights the hitherto underestimated importance of canopy interception capacity in climate model hydroclimatology and the need to acknowledge the role of limitations in the representation of precipitation when determining parameterisations.
Abstract:
A new class of parameter estimation algorithms is introduced for Gaussian process regression (GPR) models. It is shown that the integration of the GPR model with two probability distance measures, (i) the integrated square error and (ii) the Kullback–Leibler (K–L) divergence, is analytically tractable. An efficient coordinate descent algorithm is proposed that iteratively estimates the kernel width using golden section search, with a fast gradient descent algorithm as an inner loop to estimate the noise variance. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising solution for reducing the model's dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on a tensor decomposition that allows the simultaneous projection of an input tensor onto more than one direction along each mode. In practice, however, multi-dimensional data are often collected under the same or very similar conditions, so that the data share some common latent components while retaining independent parameters for each regression task. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies the common components of the parameters across all the regression tasks and the independent factors contributing to each particular task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with lower memory cost than their tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
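The Tucker decomposition that the proposed model builds on can be illustrated with a plain higher-order SVD (HOSVD). This is a generic sketch of the decomposition itself, not the paper's linked, sparsity-regularised estimator:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1),
                       0, mode)

def tucker_hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from the left singular vectors of
    each unfolding, core tensor from projecting T onto those factors."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)
    return core, factors

def tucker_reconstruct(core, factors):
    """Multiply the core back by each factor matrix along its mode."""
    T = core
    for m, U in enumerate(factors):
        T = mode_product(T, U, m)
    return T
```

With full ranks the reconstruction is exact; truncating the ranks yields the low-dimensional shared subspaces that make the regression parameters manageable.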
Abstract:
We used a light-use efficiency model of photosynthesis coupled with a dynamic carbon allocation and tree-growth model to simulate annual growth of the gymnosperm Callitris columellaris in the semi-arid Great Western Woodlands, Western Australia, over the past 100 years. Parameter values were derived from independent observations except for the sapwood specific respiration rate, fine-root turnover time, fine-root specific respiration rate and the ratio of fine-root mass to foliage area, which were estimated by Bayesian optimization. The model reproduced the general pattern of interannual variability in radial growth (tree-ring width), including the response to the shift in precipitation regimes that occurred in the 1960s. Simulated and observed responses to climate were consistent. Both showed a significant positive response of tree-ring width to total photosynthetically active radiation received and to the ratio of modeled actual to equilibrium evapotranspiration, and a significant negative response to vapour pressure deficit. However, the simulations showed an enhancement of radial growth in response to increasing atmospheric CO2 concentration ([CO2]) during recent decades that is not present in the observations. The discrepancy disappeared when the model was recalibrated on successive 30-year windows: the ratio of fine-root mass to foliage area then increased by 14% (from 0.127 to 0.144 kg C m⁻²) as [CO2] increased, while the other three estimated parameters remained constant. The absence of a signal of increasing [CO2] has been noted in many tree-ring records, despite the enhancement of photosynthetic rates and water-use efficiency resulting from increasing [CO2]. Our simulations suggest that this behaviour could be explained as a consequence of a shift towards below-ground carbon allocation.
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, since water level observations (WLOs) at the flood boundary can then be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
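The intersection of a SAR-derived flood extent with a DTM can be sketched as follows: flooded pixels that border dry ground form the waterline, and the DTM heights there serve as candidate WLOs. This is a schematic illustration (4-connectivity, no filtering of unreliable boundary pixels), not the processing chain used in the studies described:

```python
import numpy as np

def waterline_observations(dtm, flood_mask):
    """Return row indices, column indices and DTM heights of flooded cells
    that have at least one dry 4-neighbour (the flood boundary)."""
    wet = np.asarray(flood_mask, dtype=bool)
    dry_neighbour = np.zeros_like(wet)
    dry_neighbour[1:, :] |= ~wet[:-1, :]   # dry cell above
    dry_neighbour[:-1, :] |= ~wet[1:, :]   # dry cell below
    dry_neighbour[:, 1:] |= ~wet[:, :-1]   # dry cell to the left
    dry_neighbour[:, :-1] |= ~wet[:, 1:]   # dry cell to the right
    boundary = wet & dry_neighbour
    rows, cols = np.nonzero(boundary)
    return rows, cols, np.asarray(dtm)[boundary]
```

In practice the returned heights would be filtered (e.g. for vegetation and layover effects) before being used as WLOs in assimilation or DEM correction.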
Abstract:
In the Coupled Model Intercomparison Project Phase 5 (CMIP5), the model-mean increase in global mean surface air temperature T under the 1pctCO2 scenario (atmospheric CO2 increasing at 1% yr⁻¹) during the second doubling of CO2 is 40% larger than the transient climate response (TCR), i.e. the increase in T during the first doubling. We identify four possible contributory effects. First, the surface climate system loses heat less readily into the ocean beneath as the latter warms. The model spread in the thermal coupling between the upper and deep ocean largely explains the model spread in ocean heat uptake efficiency. Second, CO2 radiative forcing may rise more rapidly than logarithmically with CO2 concentration. Third, the climate feedback parameter may decline as the CO2 concentration rises. With CMIP5 data, we cannot distinguish the second and third possibilities. Fourth, the climate feedback parameter declines as time passes or T rises; in 1pctCO2, this effect is less important than the others. We find that T projected for the end of the twenty-first century correlates more highly with T at the time of quadrupled CO2 in 1pctCO2 than with the TCR, and we suggest that the TCR may be underestimated from observed climate change.
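The second effect can be made concrete with the standard simplified CO2 forcing expression F = α ln(C/C0), with α ≈ 5.35 W m⁻² (Myhre et al. 1998): under a purely logarithmic law the two doublings in 1pctCO2 contribute exactly equal forcing, so any super-logarithmic rise would push the second doubling's response above the first. A small sketch, with an illustrative baseline concentration:

```python
import math

ALPHA = 5.35   # W m^-2, simplified CO2 forcing coefficient (Myhre et al. 1998)
C0 = 285.0     # ppm, illustrative pre-industrial baseline

def forcing(c):
    """Radiative forcing of CO2 concentration c relative to the baseline."""
    return ALPHA * math.log(c / C0)

first_doubling = forcing(2 * C0) - forcing(C0)        # ~3.71 W m^-2
second_doubling = forcing(4 * C0) - forcing(2 * C0)   # identical under a log law

# At 1% yr^-1 growth, each doubling takes ln 2 / ln 1.01 ~ 70 years,
# so the two doublings are also equally spaced in time.
years_per_doubling = math.log(2) / math.log(1.01)
```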
Abstract:
We study cartel stability in a differentiated price-setting duopoly with returns to scale. We show that a cartel may be equally stable under lower differentiation, provided that the decreasing-returns parameter is high. In addition, we demonstrate that for a given discount factor there exist decreasing-returns-to-scale technologies for which the cartel is always stable, independent of the degree of differentiation.
Abstract:
An ability to quantify the reliability of probabilistic flood inundation predictions is a requirement not only for guiding model development but also for their successful application. Probabilistic flood inundation predictions are usually produced by choosing a method of weighting the model parameter space, but previous work suggests that this choice leads to clear differences in inundation probabilities. This study aims to address the evaluation of the reliability of these probabilistic predictions. However, the lack of an adequate number of observations of flood inundation for a catchment limits the application of conventional methods of evaluating predictive reliability. Consequently, attempts have been made to assess the reliability of probabilistic predictions using multiple observations from a single flood event. Here, a LISFLOOD-FP hydraulic model of an extreme (>1 in 1000 years) flood event in Cockermouth, UK, is constructed and calibrated using multiple performance measures from both peak flood wrack mark data and aerial photography captured post-peak. These measures are used in weighting the parameter space to produce multiple probabilistic predictions for the event. Two methods of assessing the reliability of these probabilistic predictions using limited observations are utilized: an existing method assessing the binary pattern of flooding, and a method developed in this paper to assess predictions of water surface elevation. This study finds that the water surface elevation method has both a better diagnostic and discriminatory ability, but this result is likely to be sensitive to the unknown uncertainties in the upstream boundary condition.
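Binary-pattern assessment of flood maps is commonly based on a fit measure of the form F = A / (A + B + C), where A counts cells wet in both observation and prediction, B cells wet only in the prediction, and C cells wet only in the observation. The exact measure used in the study is not specified here; this widely used form is a sketch:

```python
import numpy as np

def binary_fit(observed, predicted):
    """F = A / (A + B + C) for two binary flood maps: 1 is a perfect fit;
    overprediction (B) and underprediction (C) both lower the score.
    Assumes at least one cell is wet in either map."""
    obs = np.asarray(observed, dtype=bool)
    pred = np.asarray(predicted, dtype=bool)
    a = np.sum(obs & pred)    # wet in both
    b = np.sum(~obs & pred)   # wet only in the prediction
    c = np.sum(obs & ~pred)   # wet only in the observation
    return a / (a + b + c)
```

Weighting the parameter space with such a measure turns each ensemble member's deterministic map into a contribution to the probabilistic prediction.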
Abstract:
We present a novel algorithm for concurrent model state and parameter estimation in nonlinear dynamical systems. The new scheme uses ideas from three-dimensional variational data assimilation (3D-Var) and the extended Kalman filter (EKF) together with the technique of state augmentation to estimate uncertain model parameters alongside the model state variables in a sequential filtering system. The method is relatively simple to implement and computationally inexpensive to run for large systems with relatively few parameters. We demonstrate the efficacy of the method via a series of identical twin experiments with three simple dynamical system models. The scheme is able to recover the parameter values to a good level of accuracy, even when observational data are noisy. We expect this new technique to be easily transferable to much larger models.
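A minimal identical-twin sketch of the state-augmentation idea, using a plain EKF on a toy scalar model x_{k+1} = a x_k + sin(0.1 k) with unknown parameter a; the model, noise levels and tuning are all illustrative and not those of the paper:

```python
import numpy as np

def run_twin_experiment(a_true=0.8, n_steps=1000, obs_sd=0.05, seed=0):
    """Identical-twin experiment: recover the parameter a of the toy model
    x_{k+1} = a*x_k + sin(0.1*k) with an augmented-state extended Kalman
    filter, i.e. filtering the joint state s = [x, a]."""
    rng = np.random.default_rng(seed)

    # --- generate the 'truth' run and noisy observations of x ---
    x, ys = 1.0, []
    for k in range(n_steps):
        x = a_true * x + np.sin(0.1 * k)
        ys.append(x + rng.normal(0.0, obs_sd))

    # --- augmented EKF ---
    s = np.array([0.0, 0.5])       # deliberately wrong initial parameter guess
    P = np.eye(2)
    Q = np.diag([1e-6, 1e-8])      # small process noise keeps the filter adaptive
    R = obs_sd ** 2
    H = np.array([[1.0, 0.0]])     # only x is observed

    for k in range(n_steps):
        u = np.sin(0.1 * k)
        # forecast step (Jacobian of [a*x + u, a] w.r.t. [x, a])
        F = np.array([[s[1], s[0]], [0.0, 1.0]])
        s = np.array([s[1] * s[0] + u, s[1]])
        P = F @ P @ F.T + Q
        # analysis step
        S = H @ P @ H.T + R
        K = P @ H.T / S
        s = s + (K * (ys[k] - s[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
    return s[1]
```

Starting from a deliberately wrong guess a = 0.5, the filter pulls the augmented parameter component towards the true value 0.8, with the sinusoidal forcing keeping the parameter observable.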
Abstract:
Genome-wide association studies (GWAS) have been widely used in the genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single-marker analysis, such as efficient mixed model analysis (EMMA). These methods require Bonferroni correction for multiple tests, which is often too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM method with a less stringent selection criterion. Due to the multi-locus nature, no multiple-test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering-time-related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. Therefore, the MRMLM provides an alternative for multi-locus GWAS.
Abstract:
Background: The aim of this study is to verify the regenerative potential of particulate anorganic bone matrix synthetic peptide-15 (ABM-P-15) in class III furcation defects associated or not with expanded polytetrafluoroethylene membranes. Methods: Class III furcation defects were produced in the mandibular premolars (P2, P3, and P4) of six dogs and filled with impression material. The membranes and the bone grafts were inserted into P3 and P4, which were randomized to form the test and control groups, respectively; P2 was the negative control group. The animals were sacrificed 3 months post-treatment. Results: Histologically, the complete closure of class III furcation defects was not observed in any of the groups. Partial periodontal regeneration with similar morphologic characteristics among the groups was observed, however, through the formation of new cementum, periodontal ligament, and bone above the notch. Histologic analysis showed granules from the bone graft surrounded by immature bone matrix and encircled by newly formed tissue in the test group. The new bone formation area was 2.28 ± 2.49 mm² in the negative control group and 6.52 ± 5.69 mm² in the test group, a statistically significant difference between these groups for this parameter (Friedman test, P < 0.05). There was no statistically significant difference among the negative control, control, and test groups for the other parameters. Conclusions: The regenerative potential of ABM-P-15 was demonstrated through new bone formation circumscribing and above the graft particles. The new bone also was accompanied by the formation of new cementum and periodontal ligament fibers. J Periodontol 2010;81:594-603.
Abstract:
The objective of this study was to evaluate the possible use of biometric testicular traits as selection criteria for young Nellore bulls, using Bayesian inference to estimate heritability coefficients and genetic correlations. Multitrait analysis was performed including 17,211 records of scrotal circumference obtained during andrological assessment (SCAND) and 15,313 records of testicular volume and shape. In addition, 50,809 records of scrotal circumference at 18 mo (SC18), used as an anchor trait, were analyzed. The (co)variance components and breeding values were estimated by Gibbs sampling using the Gibbs2F90 program under an animal model that included contemporary groups as fixed effects, age of the animal as a linear covariate, and direct additive genetic effects as random effects. Heritabilities of 0.42, 0.43, 0.31, 0.20, 0.04, 0.16, 0.15, and 0.10 were obtained for SC18, SCAND, testicular volume, testicular shape, minor defects, major defects, total defects, and satisfactory andrological evaluation, respectively. The genetic correlations between SC18 and the other traits were 0.84 (SCAND), 0.75 (testicular shape), 0.44 (testicular volume), -0.23 (minor defects), -0.16 (major defects), -0.24 (total defects), and 0.56 (satisfactory andrological evaluation). Genetic correlations of 0.94 and 0.52 were obtained between SCAND and testicular volume and shape, respectively, and of 0.52 between testicular volume and testicular shape. In addition to favorable genetic parameter estimates, SC18 was found to be the most advantageous testicular trait due to its easy measurement before andrological assessment of the animals, even though the utilization of biometric testicular traits as selection criteria was also found to be possible. In conclusion, SC18 and biometric testicular traits can be adopted as selection criteria to improve the fertility of young Nellore bulls.
Abstract:
A new inflationary scenario, whose exponential potential V(Φ) has a quadratic dependence on the field Φ in addition to the standard linear term, is confronted with the five-year observations of the Wilkinson Microwave Anisotropy Probe and the Sloan Digital Sky Survey data. The number of e-folds (N), the tensor-to-scalar perturbation ratio (r), the scalar spectral index of the primordial power spectrum (n_s) and its running (dn_s/d ln k) depend on the dimensionless parameter α multiplying the quadratic term in the potential. In the limit α → 0 all the results of the exponential potential are fully recovered. For values of α ≠ 0, we find that the model predictions are in good agreement with the current observations of the Cosmic Microwave Background (CMB) anisotropies and Large-Scale Structure (LSS) in the Universe. Copyright (C) EPLA, 2008.