952 results for upper bound solution


Relevance:

80.00%

Publisher:

Abstract:

The work describes the programme of activities relating to a mechanical study of the Conform extrusion process. The main objective was to provide a basic understanding of the mechanics of the Conform process, with particular emphasis placed on modelling using experimental and theoretical considerations. The experimental equipment used includes a state-of-the-art computer-aided data-logging system and high-temperature load cells (up to 260 °C) manufactured from tungsten carbide. Full details of the experimental equipment are presented in Sections 3 and 4. A theoretical model is given in Section 5. The model presented is based on the upper bound theorem, using a variation of the existing extrusion theories combined with temperature changes in the feed metal across the deformation zone. In addition, the constitutive equations used in the model have been generated from existing experimental data. Theoretical and experimental data are presented in tabular form in Section 6. The discussion of results includes a comprehensive graphical presentation of the experimental and theoretical data. The main findings are: (i) the establishment of stress/strain relationships and an energy balance in order to study the factors affecting redundant work, and hence a model suitable for design purposes; (ii) optimisation of the process, by determination of the extrusion pressure for the range of reductions and changes in the extrusion chamber geometry at lower wheel speeds; and (iii) an understanding of the control of the peak temperature reached during extrusion.
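As a rough illustration of the kind of estimate upper-bound-style extrusion models produce (this is the textbook ideal-work method with an assumed efficiency factor, not the thesis's model from Section 5), the extrusion pressure can be approximated as p = Y ln(R)/η, where Y is the flow stress, R the extrusion ratio, and η < 1 crudely folds in redundant work and friction:

```python
import math

def extrusion_pressure(flow_stress_mpa: float, extrusion_ratio: float,
                       efficiency: float = 1.0) -> float:
    """Ideal-work estimate of extrusion pressure, in MPa.

    p = Y * ln(R) / eta; eta < 1 is an illustrative efficiency factor
    standing in for redundant work and friction losses.
    """
    return flow_stress_mpa * math.log(extrusion_ratio) / efficiency

# Hypothetical numbers: flow stress 50 MPa, extrusion ratio 10, eta = 0.6
p = extrusion_pressure(50.0, 10.0, efficiency=0.6)
```

Lowering η raises the predicted pressure, mirroring how redundant work inflates the load above the ideal-deformation minimum.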

Relevance:

80.00%

Publisher:

Abstract:

This thesis is concerned with the measurement of the characteristics of nonlinear systems by crosscorrelation, using pseudorandom input signals based on m sequences. The systems are characterised by Volterra series, and analytical expressions relating the rth order Volterra kernel to r-dimensional crosscorrelation measurements are derived. It is shown that the two-dimensional crosscorrelation measurements are related to the corresponding second order kernel values by a set of equations which may be structured into a number of independent subsets. The m sequence properties determine how the maximum order of the subsets for off-diagonal values is related to the upper bound of the arguments for nonzero kernel values. The upper bound of the arguments is used as a performance index, and the performance of antisymmetric pseudorandom binary, ternary and quinary signals is investigated. The performance indices obtained above are small in relation to the periods of the corresponding signals. To achieve higher performance with ternary signals, a method is proposed for combining the estimates of the second order kernel values so that the effects of some of the undesirable nonzero values in the fourth order autocorrelation function of the input signal are removed. The identification of the dynamics of two-input, single-output systems with multiplicative nonlinearity is investigated. It is shown that the characteristics of such a system may be determined by crosscorrelation experiments using phase-shifted versions of a common signal as inputs. The effects of nonlinearities on the estimates of system weighting functions obtained by crosscorrelation are also investigated. Results obtained by correlation testing of an industrial process are presented, and the differences between theoretical and experimental results are discussed for this case.
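For the first-order (linear) kernel, the crosscorrelation idea can be sketched numerically: with a zero-mean pseudorandom ±1 input, the input-output crosscorrelation recovers the system weighting function directly. The sketch below uses an i.i.d. binary signal as a stand-in for a true m sequence, and an assumed 4-tap FIR system, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([1.0, 0.5, 0.25, 0.125])   # assumed FIR weighting function

# Pseudorandom binary (+/-1) input; an i.i.d. stand-in for an m sequence
x = rng.choice([-1.0, 1.0], size=200_000)
y = np.convolve(x, h_true)[:x.size]          # noise-free system output

# First-order kernel estimate: h(tau) ~ E[y(t) * x(t - tau)] / E[x^2]
h_est = np.array([np.mean(y[tau:] * x[:x.size - tau]) for tau in range(4)])
```

Because E[x²] = 1 for a ±1 signal, no normalisation is needed; the estimate converges to `h_true` as the record length grows.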

Relevance:

80.00%

Publisher:

Abstract:

A review of published literature was made to establish the fundamental aspects of rolling and allow an experimental programme to be planned. Simulated hot rolling tests, using pure lead as a model material, were performed on a laboratory mill to obtain data on load and torque when rolling square section stock. Billet metallurgy and consolidation of representative defects were studied when modelling the rolling of continuously cast square stock, with a view to determining optimal reduction schedules that would result in a product having properties at the high level found in fully wrought billets manufactured from large ingots. It is difficult to characterize sufficiently the complexity of the porous central region in a continuously cast billet for accurate modelling. However, holes drilled into a lead billet prior to rolling were found to be a good means of assessing central void consolidation in the laboratory. A rolling schedule of 30% (1.429:1) per pass to a total of 60% (2.5:1) will give a homogeneous, fully recrystallized product. To achieve central consolidation, a total reduction of approximately 70% (3.333:1) is necessary. At the reduction necessary to achieve consolidation, full recrystallization is assured. A theoretical analysis using a simplified variational principle with experimentally derived spread data has been developed for a homogeneous material. An upper bound analysis of a single, centrally situated void has been shown to give good predictions of void closure with reduction, and of the reduction required for void closure, for initial void area fractions of 0.45%. A limited number of tests in the works has indicated compliance with the results for void closure obtained in the laboratory.
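The reduction figures quoted above follow from the usual conversion between percentage reduction and reduction ratio, r = 1/(1 − reduction), which can be checked directly:

```python
def reduction_ratio(percent_reduction: float) -> float:
    """Convert a percentage reduction in section to a reduction ratio."""
    return 1.0 / (1.0 - percent_reduction / 100.0)

# The values quoted in the abstract
r30 = round(reduction_ratio(30), 3)   # 1.429 -> 30% per pass
r60 = round(reduction_ratio(60), 3)   # 2.5   -> 60% total
r70 = round(reduction_ratio(70), 3)   # 3.333 -> ~70% for consolidation
```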

Relevance:

80.00%

Publisher:

Abstract:

The IEEE 802.11 standard has achieved huge success in the past decade and is still under development to provide higher physical data rates and better quality of service (QoS). An important problem for the development and optimization of IEEE 802.11 networks is the modeling of the MAC layer channel access protocol. Although there are already many theoretical analyses of the 802.11 MAC protocol in the literature, most of the models focus on saturated traffic and assume an infinite buffer at the MAC layer. In this paper we develop a unified analytical model for the IEEE 802.11 MAC protocol in ad hoc networks. The impacts of channel access parameters, traffic rate and buffer size at the MAC layer are modeled with the assistance of a generalized Markov chain and an M/G/1/K queue model. Expressions for throughput, packet delivery delay and dropping probability are obtained. Extensive simulations show the analytical model is highly accurate. From the analytical model it is shown that for practical buffer configurations (e.g. buffer size larger than one), we can maximize the total throughput and reduce the packet blocking probability (due to limited buffer size) and the average queuing delay to zero by effectively controlling the offered load. The average MAC layer service delay, as well as its standard deviation, is also much lower than in saturated conditions and has an upper bound. It is also observed that the optimal load is very close to the maximum achievable throughput regardless of the number of stations or buffer size. Moreover, the model is scalable for performance analysis of 802.11e in unsaturated conditions and of 802.11 ad hoc networks with heterogeneous traffic flows. © 2012 KSI.
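The buffer-size effect can be illustrated with the simpler M/M/1/K queue (the paper's model is M/G/1/K; the exponential-service special case below is only a sketch), whose blocking probability has the closed form P_K = (1 − ρ)ρ^K / (1 − ρ^{K+1}) for offered load ρ ≠ 1:

```python
def mm1k_blocking(rho: float, K: int) -> float:
    """Blocking (packet-drop) probability of an M/M/1/K queue.

    rho is the offered load; K is the system capacity (buffer + server).
    The special case rho == 1 gives 1 / (K + 1).
    """
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1))

# Blocking falls quickly with buffer size when the queue is not saturated
probs = [mm1k_blocking(0.8, K) for K in (1, 5, 20)]
```

This mirrors the paper's observation that, below saturation, growing the buffer drives the drop probability toward zero.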

Relevance:

80.00%

Publisher:

Abstract:

Since Shannon derived the seminal formula for the capacity of the additive linear white Gaussian noise channel, it has commonly been interpreted as the ultimate limit of the error-free information transmission rate. However, capacity above the corresponding linear channel limit can be achieved when noise is suppressed using nonlinear elements; that is, a regenerative function not available in linear systems. Regeneration is a fundamental concept that extends from biology to optical communications, and all-optical regeneration of coherent signals has attracted particular attention. Surprisingly, the quantitative impact of regeneration on the Shannon capacity has remained unstudied. Here we propose a new method of designing regenerative transmission systems with capacity higher than that of the corresponding linear channel, and illustrate it by proposing the application of the Fourier transform to efficient regeneration of multilevel multidimensional signals. The regenerative Shannon limit (the upper bound of regeneration efficiency) is derived. © 2014 Macmillan Publishers Limited. All rights reserved.
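The linear-channel baseline referred to here is Shannon's AWGN capacity, C = B log2(1 + SNR); a quick numerical check, with illustrative values:

```python
import math

def awgn_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity of the linear AWGN channel, in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Example: 1 Hz of bandwidth at a linear SNR of 15 (about 11.8 dB)
c = awgn_capacity(1.0, 15.0)   # log2(16) = 4 bits/s
```

The paper's claim is that regenerative (nonlinear) channels can be designed whose capacity exceeds this linear baseline.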

Relevance:

80.00%

Publisher:

Abstract:

Consideration of the influence of test technique and data analysis method is important for data comparison and design purposes. The paper highlights the effects of replication interval, crack growth rate averaging and curve-fitting procedures on crack growth rate results for a Ni-base alloy. It is shown that an upper bound crack growth rate line is not appropriate for use in fatigue design, and that the derivative of a quadratic fit to the a vs N data looks promising. However, this type of averaging, or curve fitting, is not useful in developing an understanding of microstructure/crack tip interactions. For this purpose, simple replica-to-replica growth rate calculations are preferable. © 1988.
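The quadratic-fit averaging mentioned above can be sketched as follows: fit the crack length a vs cycle count N with a second-order polynomial and take its derivative as the smoothed growth rate da/dN. The data below are synthetic, for illustration only:

```python
import numpy as np

# Synthetic crack length a (mm) vs cycle count N (kilocycles), roughly quadratic
N = np.linspace(0.0, 100.0, 21)
a = 1.0 + 0.02 * N + 1.0e-4 * N**2

# Quadratic fit a(N) = c2*N^2 + c1*N + c0, then da/dN = 2*c2*N + c1
c2, c1, c0 = np.polyfit(N, a, 2)
dadn = 2.0 * c2 * N + c1   # smoothed growth rate, mm per kilocycle
```

As the abstract notes, this smoothing is useful for design-oriented rates but averages away the replica-to-replica variations needed to study microstructure/crack tip interactions.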

Relevance:

80.00%

Publisher:

Abstract:

The exponentially increasing demand on operational data rate has been met with technological advances in telecommunication systems such as advanced multilevel and multidimensional modulation formats, fast signal processing, and research into new media for signal transmission. Since current communication channels are essentially nonlinear, estimation of the Shannon capacity for modern nonlinear communication channels is required. This PhD research project has targeted the study of the capacity limits of different nonlinear communication channels, with a view to enabling a significant enhancement in the data rate of currently deployed fiber networks. In the current study, a theoretical framework for calculating the Shannon capacity of nonlinear regenerative channels has been developed and illustrated using the example of the regenerative Fourier transform (RFT) proposed here. Moreover, the maximum gain in Shannon capacity due to regeneration (that is, the Shannon capacity of a system with ideal regenerators, the upper bound on capacity for all regenerative schemes) is calculated analytically. Thus, we derived a regenerative limit against which the capacity of any regenerative system can be compared, as an analogue of the seminal linear Shannon limit. A general optimization scheme (regenerative mapping) has been introduced and demonstrated on systems with different regenerative elements: phase-sensitive amplifiers and the multilevel regenerative schemes proposed here, the regenerative Fourier transform and the coupled nonlinear loop mirror.

Relevance:

80.00%

Publisher:

Abstract:

In 1965 Levenshtein introduced deletion correcting codes and found an asymptotically optimal family of 1-deletion correcting codes. Over the years there has been little or no research on t-deletion correcting codes for larger values of t. In this paper, we consider the problem of finding the maximal cardinality L2(n; t) of a binary t-deletion correcting code of length n. We construct an infinite family of binary t-deletion correcting codes. By computer search, we construct t-deletion codes for t = 2, 3, 4, 5 with lengths n ≤ 30. Some of these codes improve on earlier results by Hirschberg-Ferreira and Swart-Ferreira. Finally, we prove a recursive upper bound on L2(n; t) which is asymptotically worse than the best known bounds, but gives better estimates for small values of n.
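Levenshtein's asymptotically optimal 1-deletion codes are the Varshamov-Tenengolts codes, VT_a(n) = {x in {0,1}^n : sum of i*x_i ≡ a (mod n+1)}. A brute-force sketch for small n, verifying the single-deletion correction property (deletion balls of distinct codewords are disjoint):

```python
from itertools import product

def vt_code(n: int, a: int = 0):
    """Varshamov-Tenengolts code VT_a(n): binary words x of length n with
    sum(i * x_i) congruent to a mod (n + 1); corrects any single deletion."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(x, start=1)) % (n + 1) == a]

def deletions(x):
    """All words obtainable from x by deleting exactly one symbol."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

code = vt_code(5)   # the 6 codewords of VT_0(5)
# Single-deletion correction <=> deletion balls pairwise disjoint
ok = all(deletions(c1).isdisjoint(deletions(c2))
         for i, c1 in enumerate(code) for c2 in code[i + 1:])
```

The codebook size is at least 2^n / (n + 1), which is what makes the family asymptotically optimal for t = 1; the open problem the paper addresses is the analogous question for t ≥ 2.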

Relevance:

80.00%

Publisher:

Abstract:

The paper was presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June 2006.

Relevance:

80.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: Primary 34C07, secondary 34C08.

Relevance:

80.00%

Publisher:

Abstract:

We present the design of nonlinear regenerative communication channels that have capacity above the classical Shannon capacity of the linear additive white Gaussian noise channel. The upper bound for regeneration efficiency is found and the asymptotic behavior of the capacity in the saturation regime is derived. © 2013 IEEE.

Relevance:

80.00%

Publisher:

Abstract:

Brewin and Andrews (2016) propose that just 15% of people, or even fewer, are susceptible to false childhood memories. If this figure were true, then false memories would still be a serious problem. But the figure is higher than 15%. False memories occur even after a few short and low-pressure interviews, and with each successive interview they become richer, more compelling, and more likely to occur. It is therefore dangerously misleading to claim that the scientific data provide an “upper bound” on susceptibility to memory errors. We also raise concerns about the peer review process.

Relevance:

80.00%

Publisher:

Abstract:

Protecting confidential information from improper disclosure is a fundamental security goal. While encryption and access control are important tools for ensuring confidentiality, they cannot prevent an authorized system from leaking confidential information to its publicly observable outputs, whether inadvertently or maliciously. Hence, secure information flow aims to provide end-to-end control of information flow. Unfortunately, the traditionally-adopted policy of noninterference, which forbids all improper leakage, is often too restrictive. Theories of quantitative information flow address this issue by quantifying the amount of confidential information leaked by a system, with the goal of showing that it is intuitively "small" enough to be tolerated. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system.

This dissertation is concerned with program analysis for calculating the maximum leakage, or capacity, of confidential information in the context of deterministic systems and under three proposed entropy measures of information leakage: Shannon entropy leakage, min-entropy leakage, and g-leakage. In this context, it turns out that calculating the maximum leakage of a program reduces to counting the number of possible outputs that it can produce.

The new approach introduced in this dissertation is to determine two-bit patterns, the relationships among pairs of bits in the output; for instance, we might determine that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of possible outputs. Hence, the maximum leakage can be bounded. We first describe a straightforward computation of the two-bit patterns using an automated prover. We then show a more efficient implementation that uses an implication graph to represent the two-bit patterns. It efficiently constructs the graph through the use of an automated prover, random executions, STP counterexamples, and deductive closure. The effectiveness of our techniques, both in terms of efficiency and accuracy, is shown through a number of case studies found in recent literature.
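For a deterministic program, the leakage capacity reduces to log2 of the number of feasible outputs, so any upper bound on the output count bounds the leakage. A toy illustration of the counting step (the program below is hypothetical, not drawn from the dissertation):

```python
import math

def program(secret: int) -> int:
    """Toy deterministic program leaking the low 3 bits of an 8-bit secret."""
    return secret & 0b111

# Enumerate the feasible outputs over the whole secret space
outputs = {program(s) for s in range(256)}
max_leakage_bits = math.log2(len(outputs))   # 8 outputs -> 3.0 bits
```

Exhaustive enumeration is only feasible for tiny secret spaces; the dissertation's two-bit-pattern analysis exists precisely to bound the output count without enumerating it.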

Relevance:

80.00%

Publisher:

Abstract:

Compressional- and shear-wave velocity logs (Vp and Vs, respectively) that were run to a sub-basement depth of 1013 m (1287.5 m sub-bottom) in Hole 504B suggest the presence of Layer 2A and document the presence of Layers 2B and 2C on the Costa Rica Rift. Layer 2A extends from the mudline to 225 m sub-basement and is characterized by compressional-wave velocities of 4.0 km/s or less. Layer 2B extends from 225 to 900 m and may be divided into two intervals: an upper level from 225 to 600 m in which Vp decreases slowly from 5.0 to 4.8 km/s, and a lower level from 600 to about 900 m in which Vp increases slowly to 6.0 km/s. In Layer 2C, which was logged for about 100 m to a depth of 1 km, Vp and Vs appear to be constant at 6.0 and 3.2 km/s, respectively. This velocity structure is consistent with, but more detailed than, the structure determined by the oblique seismic experiment in the same hole. Since laboratory measurements of the compressional- and shear-wave velocities of samples from Hole 504B at Pconfining = Pdifferential average 6.0 and 3.2 km/s, respectively, and show only slight increases with depth, we conclude that the velocity structure of Layer 2 is controlled almost entirely by variations in porosity and that the crack porosity of Layer 2C approaches zero. A comparison between the compressional-wave velocities determined by logging and the formation porosities calculated from the results of the large-scale resistivity experiment using Archie's Law suggests that the velocity-porosity relation derived by Hyndman et al. (1984) for laboratory samples serves as an upper bound for Vp, and the noninteractive relation derived by Toksöz et al. (1976) for cracks with an aspect ratio a = 1/32 serves as a lower bound.
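Archie's Law, used above to convert resistivity to porosity, relates the formation factor F = R_t / R_w to porosity as F = a·φ^(−m). A minimal sketch with assumed parameters (the exponent m and coefficient a below are illustrative defaults, not the values used in the Hole 504B analysis):

```python
def archie_porosity(rt_ohm_m: float, rw_ohm_m: float, m: float = 2.0,
                    a: float = 1.0) -> float:
    """Porosity from Archie's Law: R_t / R_w = a * phi**(-m).

    m (cementation exponent) and a are assumed values for illustration.
    """
    formation_factor = rt_ohm_m / rw_ohm_m
    return (a / formation_factor) ** (1.0 / m)

# Hypothetical readings: formation 25 ohm-m, pore water 0.25 ohm-m
phi = archie_porosity(25.0, 0.25)   # formation factor 100 -> phi = 0.1
```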

Relevance:

80.00%

Publisher:

Abstract:

We study a multiuser multicarrier downlink communication system in which the base station (BS) employs a large number of antennas. Assuming frequency-division duplex operation, we provide a beam domain channel model as the number of BS antennas grows asymptotically large. With this model, we first derive a closed-form upper bound on the achievable ergodic sum-rate, then develop necessary conditions to asymptotically maximize the upper bound with only statistical channel state information at the BS. Inspired by these conditions, we propose a beam division multiple access (BDMA) transmission scheme, in which the BS communicates with users via different beams. For BDMA transmission, we design user scheduling to select users within non-overlapping beams, work out an optimal pilot design under a minimum mean square error criterion, and provide optimal pilot sequences by utilizing Zadoff-Chu sequences. The proposed BDMA scheme significantly reduces the pilot overhead as well as the processing complexity at the transceivers. Simulations demonstrate the high spectral efficiency of BDMA transmission and the advantages in bit error rate performance of the proposed pilot sequences.
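The pilot construction mentioned above builds on Zadoff-Chu sequences, x_u[n] = exp(−jπ u n(n+1)/N) for odd length N and root u coprime to N. These have unit modulus and an ideal periodic autocorrelation (zero at every nonzero cyclic lag), which is what makes them attractive as pilots; the sketch below checks both properties for one assumed (u, N) pair:

```python
import numpy as np

def zadoff_chu(u: int, N: int) -> np.ndarray:
    """Zadoff-Chu sequence of odd length N with root u, gcd(u, N) = 1."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

x = zadoff_chu(u=5, N=63)   # illustrative root and length

# Constant amplitude, and zero periodic autocorrelation at nonzero lags
amps = np.abs(x)
autocorr = np.array([np.vdot(x, np.roll(x, k)) for k in range(1, 63)])
```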