14 results for upper bound

in Aston University Research Archive


Relevance:

100.00%

Abstract:

In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points and we derive an upper bound on the true error bars in terms of the contributions from individual data points which are themselves easily evaluated.
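The predictive variance described above can be sketched for a Bayesian linear-in-parameters model. This is a minimal numpy illustration rather than the paper's derivation: the Gaussian basis, the hyperparameters `alpha` and `beta`, and the training-data distribution are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian basis functions (an assumed design; the paper treats
# generalized linear regression models in general).
centres = np.linspace(-1.0, 1.0, 9)

def phi(x):
    return np.exp(-0.5 * ((x[:, None] - centres[None, :]) / 0.3) ** 2)

# Training inputs clustered near x = 0.
x_train = rng.normal(0.0, 0.3, size=40)
Phi = phi(x_train)

alpha, beta = 1.0, 25.0  # assumed prior precision and noise precision
A = alpha * np.eye(len(centres)) + beta * Phi.T @ Phi  # posterior precision

def error_bar(x):
    """Predictive error bar: sqrt(1/beta + phi(x)^T A^{-1} phi(x))."""
    p = phi(np.atleast_1d(x))
    return np.sqrt(1.0 / beta + np.einsum('ij,jk,ik->i', p, np.linalg.inv(A), p))

print(error_bar(0.0), error_bar(0.9))  # dense region vs sparse edge of the data
```

As the abstract states, the error bar depends on where the query point sits relative to the data: it shrinks toward the noise floor where observations are plentiful.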

Relevance:

60.00%

Abstract:

We investigate the dependence of Bayesian error bars on the distribution of data in input space. For generalized linear regression models we derive an upper bound on the error bars which shows that, in the neighbourhood of the data points, the error bars are substantially reduced from their prior values. For regions of high data density we also show that the contribution to the output variance due to the uncertainty in the weights can exhibit an approximate inverse proportionality to the probability density. Empirical results support these conclusions.
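The approximate inverse proportionality between data density and the weight-uncertainty contribution to the output variance can be checked numerically. A hedged sketch with an assumed basis, assumed hyperparameters, and a 4:1 density contrast between two regions (none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two regions with a 4:1 difference in data density.
x_dense = rng.uniform(-1.0, 0.0, 320)
x_sparse = rng.uniform(0.0, 1.0, 80)
x_train = np.concatenate([x_dense, x_sparse])

centres = np.linspace(-1.0, 1.0, 11)

def phi(x):
    return np.exp(-0.5 * ((np.atleast_1d(x)[:, None] - centres) / 0.25) ** 2)

alpha, beta = 1.0, 25.0  # assumed hyperparameters
A = alpha * np.eye(len(centres)) + beta * phi(x_train).T @ phi(x_train)
A_inv = np.linalg.inv(A)

def weight_variance(x):
    """Contribution of weight uncertainty to the output variance."""
    p = phi(x)
    return (p @ A_inv @ p.T).item()

# Higher density -> smaller weight-uncertainty contribution,
# roughly in inverse proportion to the density ratio.
v_dense, v_sparse = weight_variance(-0.5), weight_variance(0.5)
print(v_sparse / v_dense)
```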

Relevance:

60.00%

Abstract:

An exact solution to a family of parity check error-correcting codes is provided by mapping the problem onto a Husimi cactus. The solution obtained in the thermodynamic limit recovers the replica-symmetric theory results and provides a very good approximation to finite systems of moderate size. The probability propagation decoding algorithm emerges naturally from the analysis. A phase transition between decoding success and failure phases is found to coincide with an information-theoretic upper bound. The method is employed to compare Gallager and MN codes.
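The information-theoretic upper bound mentioned above is, for a binary symmetric channel, the noise level at which the code rate meets the channel capacity R = 1 − H2(p). A small sketch recovering the familiar threshold (the channel model and rates here are chosen for illustration, not taken from the paper):

```python
from math import log2

def H2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_threshold(rate, tol=1e-10):
    """Largest BSC flip probability at which rate <= capacity:
    solve 1 - H2(p) = rate for p in [0, 1/2] by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 1 - H2(mid) > rate:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A rate-1/2 code cannot be decoded reliably beyond p of about 0.11.
print(round(bsc_threshold(0.5), 3))
```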

Relevance:

60.00%

Abstract:

This work describes a programme of activities on the mechanics of the Conform extrusion process. The main objective was to provide a basic understanding of the mechanics of the Conform process, with particular emphasis placed on modelling from both experimental and theoretical considerations. The experimental equipment used includes a state-of-the-art computer-aided data-logging system and high-temperature load cells (up to 260°C) manufactured from tungsten carbide. Full details of the experimental equipment are presented in Sections 3 and 4. A theoretical model is given in Section 5. The model is based on the upper bound theorem, using a variation of existing extrusion theories combined with temperature changes in the feed metal across the deformation zone. In addition, the constitutive equations used in the model have been generated from existing experimental data. Theoretical and experimental data are presented in tabular form in Section 6. The discussion of results includes a comprehensive graphical presentation of the experimental and theoretical data. The main findings are: (i) the establishment of stress/strain relationships and an energy balance in order to study the factors affecting redundant work, and hence a model suitable for design purposes; (ii) optimisation of the process, by determination of the extrusion pressure for the range of reductions and changes in the extrusion chamber geometry at lower wheel speeds; and (iii) an understanding of the control of the peak temperature reached during extrusion.
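Upper-bound models of extrusion start from the plastic work of homogeneous deformation; the thesis adds temperature effects and redundant work that this sketch omits. The flow stress and extrusion ratio below are hypothetical numbers, not values from the thesis:

```python
from math import log

def ideal_extrusion_pressure(flow_stress_mpa, area_ratio):
    """Ideal (homogeneous) deformation pressure, p = sigma_bar * ln(A0/A1).
    An upper-bound estimate adds redundant-work and friction terms on top."""
    return flow_stress_mpa * log(area_ratio)

# Hypothetical numbers: flow stress 100 MPa, extrusion ratio 10:1.
print(ideal_extrusion_pressure(100.0, 10.0))  # ~230 MPa
```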

Relevance:

60.00%

Abstract:

This thesis is concerned with the measurement of the characteristics of nonlinear systems by crosscorrelation, using pseudorandom input signals based on m sequences. The systems are characterised by Volterra series, and analytical expressions relating the rth order Volterra kernel to r-dimensional crosscorrelation measurements are derived. It is shown that the two-dimensional crosscorrelation measurements are related to the corresponding second order kernel values by a set of equations which may be structured into a number of independent subsets. The m sequence properties determine how the maximum order of the subsets for off-diagonal values is related to the upper bound of the arguments for nonzero kernel values. The upper bound of the arguments is used as a performance index, and the performance of antisymmetric pseudorandom binary, ternary and quinary signals is investigated. The performance indices obtained above are small in relation to the periods of the corresponding signals. To achieve higher performance with ternary signals, a method is proposed for combining the estimates of the second order kernel values so that the effects of some of the undesirable nonzero values in the fourth order autocorrelation function of the input signal are removed. The identification of the dynamics of two-input, single-output systems with multiplicative nonlinearity is investigated. It is shown that the characteristics of such a system may be determined by crosscorrelation experiments using phase-shifted versions of a common signal as inputs. The effects of nonlinearities on the estimates of system weighting functions obtained by crosscorrelation are also investigated. Results obtained by correlation testing of an industrial process are presented, and the differences between theoretical and experimental results are discussed for this case.
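The crosscorrelation idea is easiest to see in the first-order case, sketched below; the thesis's contribution concerns second- and higher-order kernels. The LFSR taps and the FIR "system under test" are assumptions made for the demo:

```python
import numpy as np

def m_sequence(n_bits=5, taps=(5, 2)):
    """One period of a +/-1 m-sequence from a Fibonacci LFSR
    (taps chosen to give a maximal-length sequence, period 2**n - 1)."""
    state = [1] * n_bits
    out = []
    for _ in range(2 ** n_bits - 1):
        out.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array([1.0 if b else -1.0 for b in out])

x = m_sequence()                    # one period, N = 31
N = len(x)
h_true = np.array([0.5, 0.3, 0.2])  # assumed FIR 'system under test'

# System output over one period (circular, since x is periodic).
y = sum(h_true[k] * np.roll(x, k) for k in range(len(h_true)))

# First-order kernel estimate by crosscorrelation:
# h_hat(tau) = (1/N) * sum_t y(t) * x(t - tau)
h_hat = np.array([np.dot(y, np.roll(x, tau)) / N for tau in range(5)])
print(np.round(h_hat, 2))
```

Because the m-sequence autocorrelation is N at zero lag and −1 elsewhere, the estimate recovers each kernel value up to a small bias of order 1/N.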

Relevance:

60.00%

Abstract:

A review of published literature was made to establish the fundamental aspects of rolling and allow an experimental programme to be planned. Simulated hot rolling tests, using pure lead as a model material, were performed on a laboratory mill to obtain data on load and torque when rolling square section stock. Billet metallurgy and consolidation of representative defects were studied when modelling the rolling of continuously cast square stock, with a view to determining optimal reduction schedules that would result in a product having properties to the high level found in fully wrought billets manufactured from large ingots. It is difficult to characterize sufficiently the complexity of the porous central region in a continuously cast billet for accurate modelling. However, holes drilled into a lead billet prior to rolling were found to be a good means of assessing central void consolidation in the laboratory. A rolling schedule of 30% (1.429:1) per pass to a total of 60% (2.5:1) will give a homogeneous, fully recrystallized product. To achieve central consolidation, a total reduction of approximately 70% (3.333:1) is necessary. At the reduction necessary to achieve consolidation, full recrystallization is assured. A theoretical analysis using a simplified variational principle with experimentally derived spread data has been developed for a homogeneous material. An upper bound analysis of a single, centrally situated void has been shown to give good predictions of void closure with reduction, and of the reduction required for closure, for initial void area fractions 0.45%. A limited number of tests in the works indicated agreement with the results for void closure obtained in the laboratory.
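The reduction schedules above quote each reduction both as a percentage and as an area ratio; the conversion R = 1/(1 − r) reproduces the quoted figures:

```python
def area_ratio(reduction_pct):
    """Convert a percentage reduction of area to the equivalent
    ratio of initial to final area, R = 1 / (1 - r)."""
    return 1.0 / (1.0 - reduction_pct / 100.0)

# The three schedule points quoted in the abstract.
for pct in (30, 60, 70):
    print(f"{pct}% -> {area_ratio(pct):.3f}:1")
```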

Relevance:

60.00%

Abstract:

Economic factors such as the rise in cost of raw materials, labour and power are compelling manufacturers of cold-drawn polygonal sections to seek new production routes which will enable the expansion in the varieties of metals used and the inclusion of difficult-to-draw materials. One such method generating considerable industrial interest is the drawing of polygonal sections from round at elevated temperature. The technique of drawing mild steel, medium carbon steel and boron steel wire into octagonal, hexagonal and square sections from round, at up to 850°C and 50% reduction of area in one pass, has been established. The main objective was to provide a basic understanding of the process, with particular emphasis being placed on modelling using both experimental and theoretical considerations. Elevated-temperature stress-strain data were obtained using a modified torsion testing machine. The data were used in the upper bound solution, which was derived and solved numerically to predict the drawing stress and the strain, strain-rate, temperature and flow stress distributions in the deforming zone for a range of variables. The success of this warm working process will, of course, depend on the use of a satisfactory elevated-temperature lubricant, an efficient cooling system, a suitable tool material having good wear and thermal shock resistance, and an efficient die profile design which incorporates the principle of least work. The merits and demerits of die materials such as tungsten carbide, chromium carbide, Syalon and Stellite are discussed, principally from the standpoint of minimising drawing force and die wear. Generally, the experimental and theoretical results were in good agreement: the drawing stress could be predicted within close limits and the process proved to be technically feasible.
Finite element analysis has been carried out on the various die geometries and die materials, to gain a greater understanding of the behaviour of these dies under the process of elevated temperature drawing, and to establish the temperature distribution and thermal distortion in the deforming zone, thus establishing the optimum die design and die material for the process. It is now possible to predict, for the materials already tested, (i) the optimum drawing temperature range, (ii) the maximum possible reduction of area per pass, (iii) the optimum drawing die profiles and die materials, (iv) the most efficient lubricant in terms of reducing the drawing force and die wear.

Relevance:

60.00%

Abstract:

The IEEE 802.11 standard has achieved huge success in the past decade and is still under development to provide higher physical data rates and better quality of service (QoS). An important problem for the development and optimization of IEEE 802.11 networks is the modeling of the MAC layer channel access protocol. Although there are already many theoretical analyses of the 802.11 MAC protocol in the literature, most models focus on saturated traffic and assume an infinite buffer at the MAC layer. In this paper we develop a unified analytical model for the IEEE 802.11 MAC protocol in ad hoc networks. The impacts of channel access parameters, traffic rate and buffer size at the MAC layer are modeled with the assistance of a generalized Markov chain and an M/G/1/K queue model. Throughput, packet delivery delay and dropping probability can all be obtained from the model. Extensive simulations show the analytical model is highly accurate. The model shows that for practical buffer configurations (e.g. buffer size larger than one), we can maximize the total throughput and reduce the packet blocking probability (due to limited buffer size) and the average queuing delay to zero by effectively controlling the offered load. The average MAC layer service delay, as well as its standard deviation, is also much lower than in saturated conditions and has an upper bound. It is also observed that the optimal load is very close to the maximum achievable throughput regardless of the number of stations or buffer size. Moreover, the model is scalable for performance analysis of 802.11e in unsaturated conditions and of 802.11 ad hoc networks with heterogeneous traffic flows. © 2012 KSI.
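The buffer-size effect can be illustrated with the simpler M/M/1/K queue (the paper uses the more general M/G/1/K model): when the offered load is held below capacity, blocking probability falls rapidly as the MAC buffer grows.

```python
def mm1k_blocking(rho, K):
    """Blocking probability of an M/M/1/K queue (a simplification of
    the paper's M/G/1/K): P_K = (1 - rho) * rho**K / (1 - rho**(K + 1))."""
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))

# With offered load held below capacity, a larger MAC buffer drives
# the packet blocking probability towards zero.
for K in (1, 5, 20):
    print(K, mm1k_blocking(0.8, K))
```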

Relevance:

60.00%

Abstract:

Since Shannon derived the seminal formula for the capacity of the additive linear white Gaussian noise channel, it has commonly been interpreted as the ultimate limit of error-free information transmission rate. However, capacity above the corresponding linear channel limit can be achieved when noise is suppressed using nonlinear elements; that is, by regeneration, a function not available in linear systems. Regeneration is a fundamental concept that extends from biology to optical communications. All-optical regeneration of coherent signals has attracted particular attention. Surprisingly, the quantitative impact of regeneration on the Shannon capacity has remained unstudied. Here we propose a new method of designing regenerative transmission systems with capacity higher than that of the corresponding linear channel, and illustrate it by proposing the application of the Fourier transform for efficient regeneration of multilevel multidimensional signals. The regenerative Shannon limit, the upper bound of regeneration efficiency, is derived. © 2014 Macmillan Publishers Limited. All rights reserved.
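The linear baseline that the regenerative limit is compared against is the classical AWGN capacity; a one-line sketch, with the SNR value chosen arbitrarily for illustration:

```python
from math import log2

def awgn_capacity(snr):
    """Shannon capacity of the linear AWGN channel, C = log2(1 + SNR),
    in bits per (complex) symbol; use 0.5 * log2(1 + SNR) per real dimension."""
    return log2(1.0 + snr)

# The paper's point: regeneration can push achievable rates above this curve.
print(awgn_capacity(15.0))  # 4 bits/symbol at SNR = 15
```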

Relevance:

60.00%

Abstract:

Consideration of the influence of test technique and data analysis method is important for data comparison and design purposes. The paper highlights the effects of replication interval, crack growth rate averaging and curve-fitting procedures on crack growth rate results for a Ni-base alloy. It is shown that an upper bound crack growth rate line is not appropriate for use in fatigue design, and that the derivative of a quadratic fit to the a vs N data looks promising. However, this type of averaging, or curve fitting, is not useful in developing an understanding of microstructure/crack tip interactions. For this purpose, simple replica-to-replica growth rate calculations are preferable. © 1988.
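The two averaging approaches contrasted above — replica-to-replica secants versus the derivative of a quadratic fit to a vs N — can be sketched as follows. The crack-length readings are hypothetical, standing in for replica measurements, not the paper's data:

```python
import numpy as np

# Hypothetical crack length (a, mm) vs cycle count (N) readings.
N = np.array([0, 10_000, 20_000, 30_000, 40_000, 50_000], dtype=float)
a = np.array([1.00, 1.15, 1.38, 1.70, 2.12, 2.65])

# Replica-to-replica (secant) growth rates: preserve local detail,
# useful for studying microstructure/crack-tip interactions.
secant = np.diff(a) / np.diff(N)

# Quadratic fit to a vs N, differentiated: a smooth average rate.
c2, c1, c0 = np.polyfit(N, a, 2)
fitted_rate = 2 * c2 * N + c1

print(secant)
print(fitted_rate)
```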

Relevance:

60.00%

Abstract:

The exponentially increasing demand on operational data rate has been met with technological advances in telecommunication systems such as advanced multilevel and multidimensional modulation formats, fast signal processing, and research into new media for signal transmission. Since current communication channels are essentially nonlinear, estimation of the Shannon capacity for modern nonlinear communication channels is required. This PhD research project has targeted the study of the capacity limits of different nonlinear communication channels, with a view to enabling a significant enhancement in the data rate of currently deployed fiber networks. In the current study, a theoretical framework for calculating the Shannon capacity of nonlinear regenerative channels has been developed and illustrated on the example of the regenerative Fourier transform (RFT) proposed here. Moreover, the maximum gain in Shannon capacity due to regeneration (that is, the Shannon capacity of a system with ideal regenerators, the upper bound on capacity for all regenerative schemes) is calculated analytically. Thus, we derived a regenerative limit to which the capacity of any regenerative system can be compared, as an analogue of the seminal linear Shannon limit. A general optimization scheme (regenerative mapping) has been introduced and demonstrated on systems with different regenerative elements: phase sensitive amplifiers and the multilevel regenerative schemes proposed here, namely the regenerative Fourier transform and the coupled nonlinear loop mirror.

Relevance:

60.00%

Abstract:

We present the design of nonlinear regenerative communication channels that have capacity above the classical Shannon capacity of the linear additive white Gaussian noise channel. The upper bound for regeneration efficiency is found and the asymptotic behavior of the capacity in the saturation regime is derived. © 2013 IEEE.

Relevance:

60.00%

Abstract:

Brewin and Andrews (2016) propose that just 15% of people, or even fewer, are susceptible to false childhood memories. If this figure were true, then false memories would still be a serious problem. But the figure is higher than 15%. False memories occur even after a few short and low-pressure interviews, and with each successive interview they become richer, more compelling, and more likely to occur. It is therefore dangerously misleading to claim that the scientific data provide an “upper bound” on susceptibility to memory errors. We also raise concerns about the peer review process.

Relevance:

30.00%

Abstract:

Statistical physics is employed to evaluate the performance of error-correcting codes in the case of finite message length for an ensemble of Gallager's error-correcting codes. We follow Gallager's approach of upper-bounding the average decoding error rate, but invoke the replica method to reproduce the tightest general bound to date, and to improve on the most accurate zero-error noise level threshold reported in the literature. The relation between the methods used and those presented in the information theory literature is explored.