915 results for Equation of prediction
Abstract:
This paper develops cycle-level FPGA circuits of an organization for a fast path-based neural branch predictor. Our results suggest that practical prediction-table sizes are limited to around 32 KB to 64 KB in current FPGA technology, mainly because of the FPGA logic resources required to maintain the tables. However, the predictor scales well in terms of prediction speed. Table size alone should not be used as the only metric of hardware budget when comparing neural predictors to predictors of entirely different organizations. This paper also gives early evidence that, for this class of branch predictors, attention should shift to misprediction recovery latency rather than prediction latency as the most critical factor affecting prediction accuracy.
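For context, path-based neural predictors build on the perceptron predictor, which weights recent branch outcomes to form a prediction. A minimal global-history perceptron sketch (illustrative table size, history length, and training threshold; not the paper's FPGA organization):

```python
# Minimal perceptron branch predictor sketch (illustrative parameters).
HISTORY_LEN = 8
NUM_PERCEPTRONS = 64
THRESHOLD = int(1.93 * HISTORY_LEN + 14)  # commonly used training threshold

class PerceptronPredictor:
    def __init__(self):
        # One weight vector (bias + one weight per history bit) per table entry.
        self.weights = [[0] * (HISTORY_LEN + 1) for _ in range(NUM_PERCEPTRONS)]
        self.history = [1] * HISTORY_LEN  # +1 = taken, -1 = not taken

    def _output(self, pc):
        w = self.weights[pc % NUM_PERCEPTRONS]
        return w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))

    def predict(self, pc):
        return self._output(pc) >= 0  # True = predict taken

    def update(self, pc, taken):
        y = self._output(pc)
        t = 1 if taken else -1
        w = self.weights[pc % NUM_PERCEPTRONS]
        # Train on a misprediction, or while confidence is below the threshold.
        if (y >= 0) != taken or abs(y) <= THRESHOLD:
            w[0] += t
            for i in range(HISTORY_LEN):
                w[i + 1] += t * self.history[i]
        self.history = self.history[1:] + [t]
```

A strongly biased branch is learned within a few updates; the prediction table here is the structure whose FPGA area cost the paper measures.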
Abstract:
Typically, algorithms for generating stereo disparity maps have been developed to minimise the energy equation of a single image. This paper proposes a method for implementing cross validation in a belief propagation optimisation. When tested using the Middlebury online stereo evaluation, the cross validation improves upon the results of standard belief propagation. Furthermore, it has been shown that regions of homogeneous colour within the images can be used for enforcing the so-called "Segment Constraint". Developing from this, Segment Support is introduced to boost belief between pixels of the same image region and improve propagation into textureless regions.
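Cross validation between the two views is commonly realised as a left-right consistency check on the disparity maps; a minimal sketch (an illustrative stand-in, not the paper's belief propagation formulation):

```python
def cross_validate(disp_left, disp_right, tol=1):
    """Left-right consistency check on dense integer disparity maps.

    disp_left[y][x] is the disparity of left-image pixel (x, y); its
    match in the right image lies at x - disp_left[y][x]. Pixels whose
    left and right disparities disagree by more than tol are invalidated.
    """
    h, w = len(disp_left), len(disp_left[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = disp_left[y][x]
            xr = x - d
            if 0 <= xr < w and abs(disp_right[y][xr] - d) <= tol:
                out[y][x] = d  # consistent: keep the disparity
    return out
```

Invalidated pixels (occlusions and mismatches) are typically filled afterwards by propagation from consistent neighbours, which is where the paper's Segment Support idea operates.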
Abstract:
In this work we consider the rendering equation derived from the Cook-Torrance illumination model. A Monte Carlo (MC) estimator for the numerical treatment of this equation, which is a Fredholm integral equation of the second kind, is constructed and studied.
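For a second-kind Fredholm equation x(s) = y(s) + lam * integral k(s,t) x(t) dt, a standard MC estimator samples the terms of the Neumann series along a random walk. A minimal 1-D sketch with an illustrative constant kernel on [0, 1] (for which the exact solution is x = 2 everywhere), not the Cook-Torrance kernel:

```python
import random

LAM = 0.5         # lambda in x = y + lam * K x  (contraction: |lam|*||K|| < 1)
N_TERMS = 30      # Neumann series truncation depth
N_SAMPLES = 5000  # number of random walks

def k(s, t):      # illustrative kernel; exact solution is then x(s) = 2
    return 1.0

def y(s):         # illustrative source term
    return 1.0

def mc_solution(s, seed=0):
    """Estimate x(s) by sampling the Neumann series x = sum_n (lam*K)^n y."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N_SAMPLES):
        est, weight, t = y(s), 1.0, s
        for _ in range(N_TERMS):
            u = rng.random()          # next walk state ~ Uniform[0,1], density 1
            weight *= LAM * k(t, u)   # importance weight of the sampled path
            est += weight * y(u)
            t = u
        total += est
    return total / N_SAMPLES
```

With this constant kernel the estimator is exact up to the truncation error 0.5^30; for a nontrivial kernel the sample average converges at the usual MC rate.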
Abstract:
A definition is given for the characteristic equation of an N-partitioned matrix. It is then proved that this matrix satisfies its own characteristic equation. This can then be regarded as a version of the Cayley-Hamilton theorem, of use with N-dimensional systems.
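The ordinary Cayley-Hamilton theorem that this result generalizes is easy to check numerically; a small sketch for a 2x2 matrix:

```python
# Verify the (ordinary) Cayley-Hamilton theorem for a 2x2 matrix:
# p(lambda) = lambda^2 - tr(A)*lambda + det(A), and p(A) = 0.
A = [[1, 2],
     [3, 4]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

tr = A[0][0] + A[1][1]                       # trace of A
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant of A
A2 = matmul(A, A)

# p(A) = A^2 - tr(A)*A + det(A)*I should be the zero matrix.
pA = [[A2[i][j] - tr * A[i][j] + (det if i == j else 0) for j in range(2)]
      for i in range(2)]
```

The paper's contribution is the analogous statement when the blocks of an N-partitioned matrix play the role of the scalar entries here.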
Abstract:
Data assimilation is predominantly used for state estimation: combining observational data with model predictions to produce an updated model state that most accurately approximates the true system state whilst keeping the model parameters fixed. This updated model state is then used to initiate the next model forecast. Even with perfect initial data, inaccurate model parameters will lead to the growth of prediction errors. To generate reliable forecasts we need good estimates of both the current system state and the model parameters. This paper presents research into data assimilation methods for morphodynamic model state and parameter estimation. First, we focus on state estimation and describe the implementation of a three-dimensional variational (3D-Var) data assimilation scheme in a simple 2D morphodynamic model of Morecambe Bay, UK. The assimilation of observations of bathymetry derived from SAR satellite imagery and a ship-borne survey is shown to significantly improve the predictive capability of the model over a 2-year run. Here, the model parameters are set by manual calibration; this is laborious and is found to produce different parameter values depending on the type and coverage of the validation dataset. The second part of this paper considers the problem of model parameter estimation in more detail. We explain how, by employing the technique of state augmentation, it is possible to use data assimilation to estimate uncertain model parameters concurrently with the model state. This approach removes inefficiencies associated with manual calibration and enables more effective use of observational data. We outline the development of a novel hybrid sequential 3D-Var data assimilation algorithm for joint state-parameter estimation and demonstrate its efficacy using an idealised 1D sediment transport model.
The results of this study are extremely positive and suggest that there is great potential for the use of data assimilation-based state-parameter estimation in coastal morphodynamic modelling.
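The idea of state augmentation can be illustrated with a toy model: append the uncertain parameter to the state vector and let the assimilation update correct both through their cross-covariance. A minimal ensemble Kalman-style sketch (illustrative throughout; not the paper's hybrid sequential 3D-Var scheme or sediment transport model):

```python
# Joint state-parameter estimation by state augmentation (toy example).
# Model: x_{k+1} = a * x_k with unknown parameter a; x is observed directly.
import random

TRUE_A = 0.9
OBS_STD = 0.01
N_ENS = 200
rng = random.Random(1)

# Augmented state z = [x, a]; the prior guesses a ~ N(0.5, 0.2^2).
ensemble = [[1.0 + rng.gauss(0, 0.1), 0.5 + rng.gauss(0, 0.2)]
            for _ in range(N_ENS)]

x_true = 1.0
for _ in range(40):
    x_true *= TRUE_A
    obs = x_true + rng.gauss(0, OBS_STD)
    # Forecast: each member propagates with its own parameter estimate.
    for z in ensemble:
        z[0] *= z[1]
    # Analysis: Kalman-style update using ensemble (cross-)covariances.
    mx = sum(z[0] for z in ensemble) / N_ENS
    ma = sum(z[1] for z in ensemble) / N_ENS
    pxx = sum((z[0] - mx) ** 2 for z in ensemble) / (N_ENS - 1)
    pax = sum((z[1] - ma) * (z[0] - mx) for z in ensemble) / (N_ENS - 1)
    kx = pxx / (pxx + OBS_STD ** 2)  # gain for the observed state
    ka = pax / (pxx + OBS_STD ** 2)  # gain for the parameter, via cross-covariance
    for z in ensemble:
        innov = obs + rng.gauss(0, OBS_STD) - z[0]  # perturbed observation
        z[0] += kx * innov
        z[1] += ka * innov

a_est = sum(z[1] for z in ensemble) / N_ENS  # converges towards TRUE_A
```

The key point mirrors the paper: no separate calibration step is needed, because observations of the state alone update the parameter through the state-parameter cross-covariance.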
Abstract:
The estimation of prediction quality is important because without quality measures, it is difficult to determine the usefulness of a prediction. Currently, methods for ligand binding site residue predictions are assessed in the function prediction category of the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiment, utilizing the Matthews Correlation Coefficient (MCC) and Binding-site Distance Test (BDT) metrics. However, the assessment of ligand binding site predictions using such metrics requires the availability of solved structures with bound ligands. Thus, we have developed a ligand binding site quality assessment tool, FunFOLDQA, which utilizes protein feature analysis to predict ligand binding site quality prior to the experimental solution of the protein structures and their ligand interactions. The FunFOLDQA feature scores were combined using simple linear combinations, multiple linear regression and a neural network. The neural network produced significantly better results for correlations to both the MCC and BDT scores, according to Kendall’s τ, Spearman’s ρ and Pearson’s r correlation coefficients, when tested on both the CASP8 and CASP9 datasets. The neural network also produced the largest Area Under the Curve (AUC) score when Receiver Operating Characteristic (ROC) analysis was undertaken for the CASP8 dataset. Furthermore, the FunFOLDQA algorithm incorporating the neural network is shown to add value to FunFOLD, when both methods are employed in combination. This results in a statistically significant improvement over all of the best server methods, the FunFOLD method (6.43%), and one of the top manual groups (FN293) tested on the CASP8 dataset. The FunFOLDQA method was also found to be competitive with the top server methods when tested on the CASP9 dataset.
To the best of our knowledge, FunFOLDQA is the first attempt to develop a method that can be used to assess ligand binding site prediction quality, in the absence of experimental data.
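The MCC used in the CASP assessment is computed directly from binary confusion-matrix counts; a small sketch:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from binary confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (random) to +1 (perfect).
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # convention: MCC = 0 when undefined
```

For binding-site residue prediction, the positives are the residues predicted to bind the ligand; MCC is preferred over raw accuracy here because binding residues are a small minority class.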
Abstract:
The contribution non-point P sources make to the total P loading on water bodies in agricultural catchments has not been fully appreciated. Using data derived from plot-scale experimental studies, and modelling approaches developed to simulate system behaviour under differing management scenarios, a fuller understanding of the processes controlling P export and transformations along non-point transport pathways can be achieved. One modelling approach which has been successfully applied to large UK catchments (50-350 km² in area) is applied here to a small, 1.5 km² experimental catchment. The importance of scaling is discussed in the context of how such approaches can extrapolate the results from plot-scale experimental studies to full catchment scale. However, the scope of such models is limited, since they do not at present directly simulate the processes controlling P transport and transformation dynamics. As such, they can only simulate total P export on an annual basis, and are not capable of prediction over shorter time scales. The need for development of process-based models to help answer these questions, and for more comprehensive UK experimental studies, is highlighted as a prerequisite for the development of suitable and sustainable management strategies to reduce non-point P loading on water bodies in agricultural catchments.
Abstract:
The paper considers second kind integral equations of the form x(s) = y(s) + ∫ k(s − t) z(t) x(t) dt (abbreviated x = y + Kz x), in which the factor z is bounded but otherwise arbitrary, so that equations of Wiener-Hopf type are included as a special case. Conditions on a set W are obtained such that a generalized Fredholm alternative is valid: if W satisfies these conditions and I − Kz is injective for each z ∈ W, then I − Kz is invertible for each z ∈ W and the operators (I − Kz)⁻¹ are uniformly bounded. As a special case, some classical results relating to Wiener-Hopf operators are reproduced. A finite section version of the above equation (with the range of integration reduced to [−a, a]) is considered, as are projection and iterated projection methods for its solution. The operators (I − Kz,a)⁻¹ (where Kz,a denotes the finite section version of Kz) are shown to be uniformly bounded (in z and a) for all a sufficiently large. Uniform stability and convergence results for the projection and iterated projection methods are obtained. The argument generalizes an idea in collectively compact operator theory. Some new results in this theory are obtained and applied to the analysis of projection methods for the above equation when z is compactly supported and k(s − t) is replaced by the general kernel k(s, t). A boundary integral equation of the above type, which models outdoor sound propagation over inhomogeneous level terrain, illustrates the application of the theoretical results developed.
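The finite section idea can be illustrated numerically: truncate the integral to [−a, a], discretize, and solve the resulting linear system. A Nyström-style sketch with an illustrative kernel and a compactly supported factor z (a stand-in for the projection methods analysed in the paper, not the paper's scheme):

```python
# Finite-section solution of x = y + Kz x on [-a, a], illustrative data.
import math

A_TRUNC = 5.0   # half-width a of the finite section [-a, a]
N = 200         # number of quadrature nodes

def k(u):       # illustrative convolution kernel k(s - t); ||Kz|| < 1 here
    return 0.25 * math.exp(-abs(u))

def z(t):       # illustrative bounded, compactly supported factor
    return 1.0 if abs(t) <= 1.0 else 0.0

def y(s):       # illustrative right-hand side
    return 1.0

def solve_finite_section():
    h = 2 * A_TRUNC / N
    nodes = [-A_TRUNC + (i + 0.5) * h for i in range(N)]
    # Assemble (I - Kz,a) with midpoint quadrature.
    M = [[(1.0 if i == j else 0.0) - h * k(s - t) * z(t)
          for j, t in enumerate(nodes)] for i, s in enumerate(nodes)]
    rhs = [y(s) for s in nodes]
    # Gaussian elimination with partial pivoting.
    for col in range(N):
        piv = max(range(col, N), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, N):
            f = M[r][col] / M[col][col]
            for c in range(col, N):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * N
    for r in range(N - 1, -1, -1):
        x[r] = (rhs[r] - sum(M[r][c] * x[c] for c in range(r + 1, N))) / M[r][r]
    return nodes, x
```

Since the kernel is positive with norm below 1, the Neumann series gives 1 <= x(s) <= 2 everywhere; the paper's stability results concern precisely the uniform (in z and a) invertibility that makes such truncated systems trustworthy.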
Abstract:
Although it plays a key role in the theory of stratified turbulence, the concept of available potential energy (APE) dissipation has remained until now a rather mysterious quantity, owing to the lack of rigorous results about its irreversible character or energy conversion type. Here, we show, using rigorous energetics considerations rooted in the analysis of the Navier-Stokes equations for a fully compressible fluid with a nonlinear equation of state, that APE dissipation is an irreversible energy conversion that dissipates kinetic energy into internal energy, exactly as viscous dissipation does. These results are established by showing that APE dissipation contributes to the irreversible production of entropy, and that it is a part of the work of expansion/contraction. Our results provide a new interpretation of the entropy budget that leads to a new exact definition of turbulent effective diffusivity, which generalizes the Osborn-Cox model, as well as a rigorous decomposition of the work of expansion/contraction into reversible and irreversible components. In the context of turbulent mixing associated with parallel shear flow instability, our results suggest that there is no irreversible transfer of horizontal momentum into vertical momentum, as seems to be required when compressible effects are neglected, with potential consequences for the parameterisations of momentum dissipation in the coarse-grained Navier-Stokes equations.
Abstract:
This study examines, in a unified fashion, the budgets of ocean gravitational potential energy (GPE) and available gravitational potential energy (AGPE) in the control simulation of the coupled atmosphere–ocean general circulation model HadCM3. Only AGPE can be converted into kinetic energy by adiabatic processes. Diapycnal mixing supplies GPE, but not AGPE, whereas the reverse is true of the combined effect of surface buoyancy forcing and convection. Mixing and buoyancy forcing thus play complementary roles in sustaining the large-scale circulation. However, the largest globally integrated source of GPE is resolved advection (+0.57 TW) and the largest sink is through parameterized eddy transports (-0.82 TW). The effect of these adiabatic processes on AGPE is identical to their effect on GPE, except for perturbations to both budgets due to numerical leakage exacerbated by non-linearities in the equation of state.
Abstract:
In traditional and geophysical fluid dynamics, it is common to describe stratified turbulent fluid flows with low Mach number and small relative density variations by means of the incompressible Boussinesq approximation. Although such an approximation is often interpreted as decoupling the thermodynamics from the dynamics, this paper reviews recent results and derives new ones showing that the reality is actually more subtle and complex when diabatic effects and a nonlinear equation of state are retained. Indeed, such an analysis reveals: (1) that the compressible work of expansion/contraction remains of comparable importance to the mechanical energy conversions, in contrast to what is usually assumed; (2) that in a Boussinesq fluid, compressible effects occur in the guise of changes in gravitational potential energy due to density changes, which makes it possible to construct a fully consistent description of the thermodynamics of incompressible fluids for an arbitrary nonlinear equation of state; (3) that rigorous methods based on the available potential energy and potential enthalpy budgets can be used to quantify the work of expansion/contraction B in steady and transient flows, which reveals that B is predominantly controlled by molecular diffusive effects and acts as a significant sink of kinetic energy.
Abstract:
The region of sea ice near the edge of the sea ice pack is known as the marginal ice zone (MIZ), and its dynamics are complicated by ocean wave interaction with the ice cover, strong gradients in the atmosphere and ocean, and variations in sea ice rheology. This paper focuses on the role of sea ice rheology in determining the dynamics of the MIZ. Here, sea ice is treated as a granular material with a composite rheology describing collisional ice floe interaction and plastic interaction. The collisional component of sea ice rheology depends upon the granular temperature, a measure of the kinetic energy of flow fluctuations. A simplified model of the MIZ is introduced consisting of the along- and across-ice-edge momentum balances of the sea ice and the balance equation of fluctuation kinetic energy. The steady solution of these equations is found to leading order using elementary methods. This reveals a concentrated region of rapid ice flow parallel to the ice edge, which is in accordance with field observations and was previously called the ice jet. Previous explanations of the ice jet relied upon the existence of ocean currents beneath the ice cover. We show that an ice jet results as a natural consequence of the granular nature of sea ice.
Abstract:
Using annual observations on industrial production over the last three centuries, and on GDP over a 100-year period, we seek an historical perspective on the forecastability of these UK output measures. The series are dominated by strong upward trends, so we consider various specifications of the trend, including the local linear trend structural time-series model, which allows the level and slope of the trend to vary. Our results are not unduly sensitive to how the trend in the series is modelled: the average sizes of the forecast errors of all models, and the wide span of prediction intervals, attest to a great deal of uncertainty in the economic environment. It appears that, from an historical perspective, the postwar period has been relatively more forecastable.
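The local linear trend model referred to above lets both the level and the slope of the trend evolve stochastically; a minimal simulation sketch (illustrative variances, not the paper's fitted values):

```python
import random

def simulate_llt(n, sigma_eps=1.0, sigma_xi=0.1, sigma_zeta=0.01, seed=0):
    """Simulate a local linear trend structural time-series model:

        y_t      = mu_t + eps_t            (observation)
        mu_{t+1} = mu_t + nu_t + xi_t      (stochastic level)
        nu_{t+1} = nu_t + zeta_t           (stochastic slope)
    """
    rng = random.Random(seed)
    mu, nu = 0.0, 0.05  # initial level and slope (illustrative)
    ys = []
    for _ in range(n):
        ys.append(mu + rng.gauss(0, sigma_eps))
        mu += nu + rng.gauss(0, sigma_xi)
        nu += rng.gauss(0, sigma_zeta)
    return ys
```

Setting the disturbance variances to zero recovers a deterministic linear trend, which is why this specification nests the simpler trend models the paper compares it against.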
Abstract:
This paper combines and generalizes a number of recent time series models of daily exchange rate series by using a SETAR model which also allows the variance equation of a GARCH specification for the error terms to be drawn from more than one regime. An application of the model to the French Franc/Deutschmark exchange rate demonstrates that out-of-sample forecasts of exchange rate volatility are also improved when the restriction that the data are drawn from a single regime is removed. This result highlights the importance of considering both types of regime shift (i.e. thresholds in variance as well as in mean) when analysing financial time series.
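A SETAR model switches its autoregressive dynamics when a lagged value crosses a threshold; a minimal two-regime sketch (illustrative coefficients, omitting the regime-switching GARCH error specification the paper adds):

```python
import random

def simulate_setar(n, threshold=0.0, seed=0):
    """Simulate a two-regime SETAR(1) process: the AR coefficient depends
    on whether the previous observation lies above or below the threshold."""
    rng = random.Random(seed)
    y = 0.0
    out = []
    for _ in range(n):
        phi = 0.5 if y <= threshold else -0.3  # regime-dependent AR(1) coefficient
        y = phi * y + rng.gauss(0, 1)          # i.i.d. errors; the paper instead
        out.append(y)                          # draws them from a multi-regime GARCH
    return out
```

The paper's generalization applies the same threshold logic to the conditional variance as well as the conditional mean, so that volatility also switches regimes.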
Abstract:
The transition redshift (deceleration/acceleration) is discussed by expanding the deceleration parameter to first order around its present value. A detailed study is carried out by considering two different parametrizations, q = q0 + q1 z and q = q0 + q1 z (1 + z)^-1, and the associated free parameters (q0, q1) are constrained by three different supernovae (SNe) samples. A previous analysis by Riess et al. using the first expansion is slightly improved and confirmed in light of their recent data (Gold07 sample). However, by fitting the model with the Supernova Legacy Survey (SNLS) type Ia sample, we find that the best fit to the transition redshift is zt = 0.61, instead of zt = 0.46 as derived by the High-z Supernovae Search (HZSNS) team. This result based on the SNLS sample is also in good agreement with the sample of Davis et al., zt = 0.60 (+0.28/-0.11) (1 sigma). Such results are in line with some independent analyses and accommodate more easily the concordance flat model (Lambda CDM). For both parametrizations, the three SNe Ia samples considered favour recent acceleration and past deceleration with a high degree of statistical confidence. All the kinematic results presented here depend neither on the validity of general relativity nor on the matter-energy content of the Universe.
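For both parametrizations, the transition redshift follows from setting the deceleration parameter to zero, q(z_t) = 0:

```latex
% Transition redshift z_t from q(z_t) = 0 for each parametrization:
q(z) = q_0 + q_1 z \;\Rightarrow\; z_t = -\frac{q_0}{q_1},
\qquad
q(z) = q_0 + q_1 \frac{z}{1+z} \;\Rightarrow\; z_t = -\frac{q_0}{q_0 + q_1}.
```

A present-day accelerating universe (q_0 < 0) with past deceleration thus requires q_1 > 0 in the first case and q_0 + q_1 > 0 in the second, which is the sign pattern the SNe fits constrain.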