911 results for linear calibration model


Relevance: 100.00%

Abstract:

A non-linear perturbation model for river flow forecasting is developed, based on consideration of catchment wetness using an antecedent precipitation index (API). Catchment seasonality, of the form accounted for in the linear perturbation model (the LPM), and non-linear behaviour both in the runoff generation mechanism and in the flow routing processes are represented by a constrained non-linear model, the NLPM-API. A total of ten catchments, across a range of climatic conditions and catchment area magnitudes, located in China and in other countries, were selected for testing daily rainfall-runoff forecasting with this model. It was found that the NLPM-API model was significantly more efficient than the original LPM. However, restriction of explicit non-linearity to the runoff generation process, in the simpler LPM-API form of the model, did not produce a significantly lower value of the efficiency in flood forecasting, in terms of the model efficiency index R². (C) 1997 Elsevier Science B.V.
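The perturbation idea underlying both the LPM and the NLPM-API can be sketched as: remove the seasonal (day-of-year) means from rainfall and flow, relate the departures by a simple response, and add the seasonal flow mean back. The sketch below is hypothetical; the `gain` and `lag` parameters, the linear response, and the year-by-day array layout are illustrative assumptions, not details from the paper.

```python
import numpy as np

def lpm_forecast(rain, flow, gain=0.5, lag=1):
    """Minimal sketch of the linear perturbation model (LPM) idea:
    subtract seasonal means, relate the rainfall perturbation to the
    flow perturbation by a simple linear response, then add the
    seasonal flow mean back. `gain` and `lag` are illustrative."""
    rain_season = rain.mean(axis=0)      # seasonal expectation of rainfall
    flow_season = flow.mean(axis=0)      # seasonal expectation of flow
    rain_pert = rain - rain_season       # departures from seasonality
    # linear response: flow perturbation = gain * lagged rainfall perturbation
    flow_pert_hat = gain * np.roll(rain_pert, lag, axis=1)
    return flow_season + flow_pert_hat   # forecast = seasonal mean + perturbation
```

The NLPM-API replaces the linear response with a constrained non-linear one conditioned on the API, which is the step the paper evaluates.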

Relevance: 100.00%

Abstract:

The Rashba spin-orbit splitting of a hydrogenic donor impurity in GaAs/GaAlAs quantum wells is investigated theoretically in the framework of effective-mass envelope function theory. The Rashba effect near the interface between GaAs and GaAlAs is assumed to vary linearly with the distance from the quantum well edge. We find that the splitting energy of the excited state is larger and less dependent on the position of the impurity than that of the ground state. Our results are useful for the application of Rashba spin-orbit coupling to photoelectric devices.

Relevance: 100.00%

Abstract:

The standard linear-quadratic survival model for radiotherapy is used to investigate different schedules of radiation treatment planning to study how these may be affected by different tumour repopulation kinetics between treatments.
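The standard linear-quadratic (LQ) model gives the surviving fraction per fraction of dose d as exp(-(αd + βd²)); repopulation between treatments multiplies the surviving population by an exponential growth factor. A minimal sketch, with illustrative α, β, and doubling-time values rather than any schedule from the paper:

```python
import math

def lq_survival(dose_per_fraction, n_fractions, alpha=0.3, beta=0.03,
                doubling_time_days=5.0, interval_days=1.0):
    """Sketch of the linear-quadratic (LQ) survival model with
    exponential tumour repopulation between fractions. alpha (Gy^-1),
    beta (Gy^-2) and the doubling time are illustrative values only."""
    d = dose_per_fraction
    # cell kill per fraction: exp(-(alpha*d + beta*d^2))
    kill = math.exp(-(alpha * d + beta * d * d))
    # repopulation between fractions: exp(ln 2 * interval / doubling time)
    regrow = math.exp(math.log(2.0) * interval_days / doubling_time_days)
    surviving = 1.0
    for _ in range(n_fractions):
        surviving *= kill * regrow
    return surviving
```

Comparing schedules then amounts to varying the dose per fraction, the number of fractions, and the inter-fraction interval while holding total dose or surviving fraction fixed.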

Relevance: 100.00%

Abstract:

The paper describes the development and application of a multiple linear regression model to identify how the key elements of waste and recycling infrastructure, namely container capacity and frequency of collection, affect the yield from municipal kerbside recycling programmes. The overall aim of the research was to gain an understanding of the factors affecting the yield from municipal kerbside recycling programmes in Scotland. The study isolates the principal kerbside collection service offered by 32 councils across Scotland, eliminating those recycling programmes associated with flatted properties or multi-occupancies. The regression model identified three principal factors which explain 80% of the variability in the average yield of the principal dry recyclate services: weekly residual waste capacity, number of materials collected and weekly recycling capacity. The use of the model has been evaluated and recommendations made on ongoing methodological development and the use of the results in informing the design of kerbside recycling programmes. The authors hope that the research can provide insights for the ongoing development of methods to optimise the design and operation of kerbside recycling programmes.
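A regression of this form can be sketched as an ordinary least-squares fit of yield on the three identified predictors. Everything below, including the coefficients, capacities, and the R² achieved, is synthetic data for illustration only, not the paper's results:

```python
import numpy as np

# Hypothetical illustration: yield ~ weekly residual waste capacity
# + number of materials collected + weekly recycling capacity.
rng = np.random.default_rng(0)
n = 32                                   # one row per council
residual_cap = rng.uniform(60, 240, n)   # litres/week residual capacity
n_materials = rng.integers(3, 9, n).astype(float)
recycling_cap = rng.uniform(40, 180, n)  # litres/week recycling capacity
# made-up 'true' relationship plus noise
obs_yield = (2.0 - 0.004 * residual_cap + 0.15 * n_materials
             + 0.006 * recycling_cap) + rng.normal(0, 0.05, n)

# design matrix with intercept column, fitted by least squares
X = np.column_stack([np.ones(n), residual_cap, n_materials, recycling_cap])
coef, *_ = np.linalg.lstsq(X, obs_yield, rcond=None)

pred = X @ coef
ss_res = np.sum((obs_yield - pred) ** 2)
ss_tot = np.sum((obs_yield - obs_yield.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot          # share of variability explained
```

The sign pattern in such a fit is what carries the design message: a negative coefficient on residual waste capacity and positive coefficients on materials and recycling capacity.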

Relevance: 100.00%

Abstract:

Linear models of bidirectional reflectance distribution are useful tools for understanding the angular variability of surface reflectance as observed by medium-resolution sensors such as the Moderate Resolution Imaging Spectroradiometer. These models are operationally used to normalize data to common view and illumination geometries and to calculate integral quantities such as albedo. Currently, to compensate for noise in observed reflectance, these models are inverted against data collected during some temporal window for which the model parameters are assumed to be constant. Even so, the retrieved parameters are often noisy for regions where sufficient observations are not available. This paper demonstrates the use of Lagrangian multipliers to allow arbitrarily large windows and, at the same time, produce individual parameter sets for each day even for regions where only sparse observations are available.
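The core inversion step for a linear kernel-driven BRDF model is a least-squares solve of reflectance ≈ K·params over a temporal window. The sketch below stabilizes sparse retrievals with a simple quadratic (ridge-style) pull toward a prior parameter set, which is a stand-in for, not a reproduction of, the paper's Lagrange-multiplier formulation; the function name and arguments are assumptions.

```python
import numpy as np

def invert_brdf(kernels, reflectances, smooth=0.0, prior=None):
    """Sketch of inverting a linear (kernel-driven) BRDF model by
    least squares. `kernels`: one row per observation geometry, one
    column per kernel; `reflectances`: observed values. An optional
    quadratic penalty pulls the solution toward `prior` when
    observations are sparse (a simple regularizer, not the paper's
    Lagrange-multiplier constraint)."""
    K = np.asarray(kernels, float)
    r = np.asarray(reflectances, float)
    A = K.T @ K                # normal equations
    b = K.T @ r
    if smooth > 0.0 and prior is not None:
        A = A + smooth * np.eye(K.shape[1])
        b = b + smooth * np.asarray(prior, float)
    return np.linalg.solve(A, b)
```

With `smooth=0` this reduces to the ordinary windowed inversion; increasing `smooth` trades fidelity to the observations for stability, which is the same tension the paper addresses.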

Relevance: 100.00%

Abstract:

The main objective of this paper is to discuss maximum likelihood inference for the comparative structural calibration model (Barnett, in Biometrics 25:129-142, 1969), which is frequently used in the problem of assessing the relative calibrations and relative accuracies of a set of p instruments, each designed to measure the same characteristic on a common group of n experimental units. We consider asymptotic tests to answer the outlined questions. The methodology is applied to a real data set and a small simulation study is presented.

Relevance: 100.00%

Abstract:

The concept of Fock space representation is developed to deal with stochastic spin lattices written in terms of fermion operators. A density operator is introduced in order to follow in parallel the developments of the case of bosons in the literature. Some general conceptual quantities for spin lattices are then derived, including the notion of generating function and path integral via Grassmann variables. The formalism is used to derive the Liouvillian of the d-dimensional Linear Glauber dynamics in the Fock-space representation. Then the time evolution equations for the magnetization and the two-point correlation function are derived in terms of the number operator. (C) 2008 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

Efficiently inducing precise causal models that accurately reflect given data sets is the ultimate goal of causal discovery. The algorithms proposed by Dai et al. have demonstrated the ability of the Minimum Message Length (MML) principle to discover linear causal models from training data. In order to further explore ways to improve efficiency, this paper incorporates Hoeffding bounds into the learning process. At each step of causal discovery, if a small number of data items is enough to distinguish the better model from the rest, the computation cost is reduced by ignoring the other data items. Experiments with data sets from related benchmark models indicate that the new algorithm achieves speedup over previous work in terms of learning efficiency while preserving the discovery accuracy.
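The Hoeffding-bound early-stopping test behind this kind of speedup can be sketched as follows: after scoring candidates on n data items, if the observed gap between the best and second-best average scores exceeds ε = sqrt(R² ln(1/δ) / (2n)) for score range R, the better model wins with probability at least 1 − δ without reading more data. The function name and δ value are illustrative, not from the paper.

```python
import math

def hoeffding_enough(score_gap, n, value_range, delta=0.05):
    """Return True if `score_gap` (observed mean-score difference
    between the two leading models after n items, scores bounded in a
    range of width `value_range`) exceeds the Hoeffding threshold
    eps = sqrt(R^2 * ln(1/delta) / (2n)), so the better model can be
    selected early with confidence at least 1 - delta."""
    eps = math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))
    return score_gap > eps
```

A discovery loop would call this after each batch of data and stop scoring the losing candidate as soon as it returns True.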