951 results for Upper bound estimate


Relevance:

80.00%

Publisher:

Abstract:

A review of the published literature was made to establish the fundamental aspects of rolling and to allow an experimental programme to be planned. Simulated hot rolling tests, using pure lead as a model material, were performed on a laboratory mill to obtain data on load and torque when rolling square section stock. Billet metallurgy and the consolidation of representative defects were studied when modelling the rolling of continuously cast square stock, with a view to determining optimal reduction schedules that would give a product with properties matching the high level found in fully wrought billets manufactured from large ingots. The porous central region of a continuously cast billet is too complex to characterize sufficiently for accurate modelling. However, holes drilled into a lead billet prior to rolling were found to be a good means of assessing central void consolidation in the laboratory. A rolling schedule of 30% (1.429:1) per pass to a total of 60% (2.5:1) gives a homogeneous, fully recrystallized product. To achieve central consolidation, a total reduction of approximately 70% (3.333:1) is necessary; at this reduction, full recrystallization is assured. A theoretical analysis using a simplified variational principle with experimentally derived spread data was developed for a homogeneous material. An upper bound analysis of a single, centrally situated void was shown to give good predictions of void closure with reduction, and of the reduction required for void closure, for initial void area fractions of 0.45%. A limited number of tests in the works indicated agreement with the void closure results obtained in the laboratory.
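For reference, the pass reductions and ratios quoted above are linked by the standard definition of reduction of area; a quick check (not part of the original abstract):

```latex
% Reduction of area r versus the quoted area ratio R = A_0 / A_1:
\[
R = \frac{A_0}{A_1} = \frac{1}{1 - r}:\qquad
r = 0.30 \Rightarrow R \approx 1.429{:}1,\quad
r = 0.60 \Rightarrow R = 2.5{:}1,\quad
r = 0.70 \Rightarrow R \approx 3.333{:}1.
\]
```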

Relevance:

80.00%

Publisher:

Abstract:

Economic factors such as the rising cost of raw materials, labour and power are compelling manufacturers of cold-drawn polygonal sections to seek new production routes that will allow a wider variety of metals to be used, including difficult-to-draw materials. One such method generating considerable industrial interest is the drawing of polygonal sections from round stock at elevated temperature. The technique of drawing mild steel, medium carbon steel and boron steel wire from round into octagonal, hexagonal and square sections at up to 850 °C and 50% reduction of area in one pass has been established. The main objective was to provide a basic understanding of the process, with particular emphasis on modelling using both experimental and theoretical considerations. Elevated temperature stress-strain data were obtained using a modified torsion testing machine. These data were used in an upper bound solution, derived and solved numerically, to predict the drawing stress and the strain, strain-rate, temperature and flow stress distributions in the deforming zone for a range of variables. The success of this warm working process will, of course, depend on the use of a satisfactory elevated temperature lubricant, an efficient cooling system, a suitable tool material having good wear and thermal shock resistance, and an efficient die profile design which incorporates the principle of least work. The merits and demerits of die materials such as tungsten carbide, chromium carbide, Syalon and Stellite are discussed, principally from the standpoint of minimising drawing force and die wear. Generally, the experimental and theoretical results were in good agreement: the drawing stress could be predicted within close limits and the process proved to be technically feasible. Finite element analysis was carried out on the various die geometries and die materials to gain a greater understanding of the behaviour of these dies during elevated temperature drawing, and to establish the temperature distribution and thermal distortion in the deforming zone, thus establishing the optimum die design and die material for the process. It is now possible to predict, for the materials already tested, (i) the optimum drawing temperature range, (ii) the maximum possible reduction of area per pass, (iii) the optimum drawing die profiles and die materials, and (iv) the most efficient lubricant in terms of reducing drawing force and die wear.

Relevance:

80.00%

Publisher:

Abstract:

The IEEE 802.11 standard has achieved huge success in the past decade and is still under development to provide higher physical data rates and better quality of service (QoS). An important problem for the development and optimization of IEEE 802.11 networks is the modeling of the MAC layer channel access protocol. Although there are already many theoretical analyses of the 802.11 MAC protocol in the literature, most of the models focus on saturated traffic and assume an infinite buffer at the MAC layer. In this paper we develop a unified analytical model for the IEEE 802.11 MAC protocol in ad hoc networks. The impacts of channel access parameters, traffic rate and buffer size at the MAC layer are modeled with the assistance of a generalized Markov chain and an M/G/1/K queue model. Throughput, packet delivery delay and packet dropping probability can then be obtained. Extensive simulations show the analytical model is highly accurate. The model shows that for practical buffer configurations (e.g. buffer size larger than one), the total throughput can be maximized, and the packet blocking probability (due to limited buffer size) and the average queuing delay reduced to zero, by effectively controlling the offered load. The average MAC layer service delay, as well as its standard deviation, is also much lower than in saturated conditions and has an upper bound. It is also observed that the optimal load is very close to the maximum achievable throughput regardless of the number of stations or the buffer size. Moreover, the model is scalable for performance analysis of 802.11e in unsaturated conditions and of 802.11 ad hoc networks with heterogeneous traffic flows. © 2012 KSI.
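The buffer-loss behaviour described here can be illustrated with the simpler M/M/1/K queue; the paper itself uses an M/G/1/K model, so this closed form is a stand-in for intuition, not the authors' derivation:

```python
def mm1k_blocking_probability(lam: float, mu: float, k: int) -> float:
    """Blocking probability of an M/M/1/K queue (arrival rate lam,
    service rate mu, K = system capacity including the packet in
    service). A packet arriving when K packets are present is dropped."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (k + 1)
    return (1.0 - rho) * rho**k / (1.0 - rho**(k + 1))

# Dropping probability falls quickly once the offered load is kept
# below the service rate, matching the unsaturated regime discussed
# in the abstract.
for load in (0.5, 0.9, 1.1):
    print(load, mm1k_blocking_probability(load, 1.0, k=10))
```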

Relevance:

80.00%

Publisher:

Abstract:

Since Shannon derived the seminal formula for the capacity of the additive linear white Gaussian noise channel, it has commonly been interpreted as the ultimate limit of error-free information transmission rate. However, capacity above the corresponding linear channel limit can be achieved when noise is suppressed using nonlinear elements, that is, by regeneration, a function not available in linear systems. Regeneration is a fundamental concept that extends from biology to optical communications. All-optical regeneration of coherent signals has attracted particular attention. Surprisingly, the quantitative impact of regeneration on the Shannon capacity has remained unstudied. Here we propose a new method of designing regenerative transmission systems with a capacity higher than that of the corresponding linear channel, and illustrate it by applying the Fourier transform to the efficient regeneration of multilevel multidimensional signals. The regenerative Shannon limit, the upper bound of regeneration efficiency, is derived. © 2014 Macmillan Publishers Limited. All rights reserved.
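The linear-channel baseline referred to above is Shannon's AWGN capacity formula, reproduced here for context:

```latex
% Capacity of the additive white Gaussian noise channel with
% bandwidth B, signal power S and noise power N:
\[
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second.}
\]
```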

Relevance:

80.00%

Publisher:

Abstract:

Consideration of the influence of test technique and data analysis method is important for data comparison and design purposes. The paper highlights the effects of replication interval, crack growth rate averaging and curve-fitting procedures on crack growth rate results for a Ni-base alloy. It is shown that an upper bound crack growth rate line is not appropriate for use in fatigue design, and that the derivative of a quadratic fit to the a vs N data looks promising. However, this type of averaging, or curve fitting, is not useful in developing an understanding of microstructure/crack tip interactions. For this purpose, simple replica-to-replica growth rate calculations are preferable. © 1988.
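A minimal sketch of the quadratic-fit smoothing mentioned above, with invented data (the paper's procedure may differ in detail):

```python
import numpy as np

# Hypothetical crack length a (mm) measured from replicas at cycle counts N.
N = np.array([0, 1e4, 2e4, 3e4, 4e4, 5e4])
a = np.array([0.50, 0.62, 0.78, 0.99, 1.26, 1.60])

# Fit a quadratic a(N) = c2*N^2 + c1*N + c0, then differentiate:
# da/dN = 2*c2*N + c1 gives a smoothed growth rate at each N.
c2, c1, c0 = np.polyfit(N, a, deg=2)
dadN = 2.0 * c2 * N + c1
print(dadN)

# Simple replica-to-replica rates (finite differences), preferable for
# studying microstructure/crack tip interactions:
dadN_raw = np.diff(a) / np.diff(N)
print(dadN_raw)
```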

Relevance:

80.00%

Publisher:

Abstract:

The exponentially increasing demand for operational data rate has been met with technological advances in telecommunication systems, such as advanced multilevel and multidimensional modulation formats, fast signal processing, and research into new media for signal transmission. Since current communication channels are essentially nonlinear, estimation of the Shannon capacity for modern nonlinear communication channels is required. This PhD research project has targeted the study of the capacity limits of different nonlinear communication channels, with a view to enabling a significant enhancement in the data rate of currently deployed fiber networks. In this study, a theoretical framework for calculating the Shannon capacity of nonlinear regenerative channels has been developed and illustrated using the regenerative Fourier transform (RFT) proposed here. Moreover, the maximum gain in Shannon capacity due to regeneration (that is, the Shannon capacity of a system with ideal regenerators, the upper bound on capacity for all regenerative schemes) is calculated analytically. We thus derive a regenerative limit against which the capacity of any regenerative system can be compared, as an analogue of the seminal linear Shannon limit. A general optimization scheme (regenerative mapping) has been introduced and demonstrated on systems with different regenerative elements: phase sensitive amplifiers and the multilevel regenerative schemes proposed here, namely the regenerative Fourier transform and the coupled nonlinear loop mirror.

Relevance:

80.00%

Publisher:

Abstract:

We study the statistics of optical data transmission in a noisy nonlinear fiber channel with weak dispersion management and zero average dispersion. Applying analytical expressions for the output probability density functions, both for a nonlinear channel and for a linear channel with additive and multiplicative noise, we calculate in closed form a lower bound estimate on the Shannon capacity for an arbitrary signal-to-noise ratio.

Relevance:

80.00%

Publisher:

Abstract:

In 1965 Levenshtein introduced deletion correcting codes and found an asymptotically optimal family of 1-deletion correcting codes. In the years since, there has been little or no research on t-deletion correcting codes for larger values of t. In this paper, we consider the problem of finding the maximal cardinality L2(n, t) of a binary t-deletion correcting code of length n. We construct an infinite family of binary t-deletion correcting codes. By computer search, we construct t-deletion correcting codes for t = 2, 3, 4, 5 with lengths n ≤ 30. Some of these codes improve on earlier results by Hirschberg-Ferreira and Swart-Ferreira. Finally, we prove a recursive upper bound on L2(n, t) which is asymptotically worse than the best known bounds but gives better estimates for small values of n.
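Levenshtein's asymptotically optimal 1-deletion correcting family is the Varshamov-Tenengolts (VT) construction; a small enumeration sketch (background illustration only, not this paper's t-deletion construction):

```python
from itertools import product

def vt_code(n: int, a: int = 0) -> list:
    """Varshamov-Tenengolts code VT_a(n): binary words x of length n
    with sum(i * x_i, i = 1..n) = a (mod n+1). Levenshtein showed
    these codes correct any single deletion."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1) == a]

# VT_0(n) has cardinality at least 2^n / (n+1), which is
# asymptotically optimal for single-deletion correction.
for n in range(4, 9):
    print(n, len(vt_code(n)), 2**n // (n + 1))
```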

Relevance:

80.00%

Publisher:

Abstract:

This paper was presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June 2006.

Relevance:

80.00%

Publisher:

Abstract:

We present the design of nonlinear regenerative communication channels that have capacity above the classical Shannon capacity of the linear additive white Gaussian noise channel. The upper bound for regeneration efficiency is found and the asymptotic behavior of the capacity in the saturation regime is derived. © 2013 IEEE.

Relevance:

80.00%

Publisher:

Abstract:

Brewin and Andrews (2016) propose that just 15% of people, or even fewer, are susceptible to false childhood memories. If this figure were true, then false memories would still be a serious problem. But the figure is higher than 15%. False memories occur even after a few short and low-pressure interviews, and with each successive interview they become richer, more compelling, and more likely to occur. It is therefore dangerously misleading to claim that the scientific data provide an “upper bound” on susceptibility to memory errors. We also raise concerns about the peer review process.

Relevance:

80.00%

Publisher:

Abstract:

Protecting confidential information from improper disclosure is a fundamental security goal. While encryption and access control are important tools for ensuring confidentiality, they cannot prevent an authorized system from leaking confidential information to its publicly observable outputs, whether inadvertently or maliciously. Hence, secure information flow aims to provide end-to-end control of information flow. Unfortunately, the traditionally-adopted policy of noninterference, which forbids all improper leakage, is often too restrictive. Theories of quantitative information flow address this issue by quantifying the amount of confidential information leaked by a system, with the goal of showing that it is intuitively "small" enough to be tolerated. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system. This dissertation is concerned with program analysis for calculating the maximum leakage, or capacity, of confidential information in the context of deterministic systems and under three proposed entropy measures of information leakage: Shannon entropy leakage, min-entropy leakage, and g-leakage. In this context, it turns out that calculating the maximum leakage of a program reduces to counting the number of possible outputs that it can produce. The new approach introduced in this dissertation is to determine two-bit patterns, the relationships among pairs of bits in the output; for instance we might determine that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of possible outputs. Hence, the maximum leakage can be bounded. We first describe a straightforward computation of the two-bit patterns using an automated prover. We then show a more efficient implementation that uses an implication graph to represent the two-bit patterns. It efficiently constructs the graph through the use of an automated prover, random executions, STP counterexamples, and deductive closure. The effectiveness of our techniques, both in terms of efficiency and accuracy, is shown through a number of case studies found in recent literature.
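For a deterministic program, the leakage capacity equals log2 of the number of feasible outputs, which is what makes output counting sufficient; a brute-force toy illustration (the dissertation bounds the count symbolically rather than by enumeration):

```python
import math

def max_leakage_bits(program, secret_inputs) -> float:
    """Upper bound on leaked bits for a deterministic program:
    log2(number of distinct observable outputs). Enumeration is only
    feasible for tiny input spaces; the dissertation instead bounds
    the output count via two-bit output patterns."""
    outputs = {program(s) for s in secret_inputs}
    return math.log2(len(outputs))

# Toy example: a program revealing only whether an 8-bit secret is
# zero has two possible outputs, so it leaks at most log2(2) = 1 bit.
print(max_leakage_bits(lambda s: s == 0, range(256)))
```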

Relevance:

80.00%

Publisher:

Abstract:

Compressional- and shear-wave velocity logs (Vp and Vs, respectively) that were run to a sub-basement depth of 1013 m (1287.5 m sub-bottom) in Hole 504B suggest the presence of Layer 2A and document the presence of Layers 2B and 2C on the Costa Rica Rift. Layer 2A extends from the mudline to 225 m sub-basement and is characterized by compressional-wave velocities of 4.0 km/s or less. Layer 2B extends from 225 to 900 m and may be divided into two intervals: an upper level from 225 to 600 m in which Vp decreases slowly from 5.0 to 4.8 km/s, and a lower level from 600 to about 900 m in which Vp increases slowly to 6.0 km/s. In Layer 2C, which was logged for about 100 m to a depth of 1 km, Vp and Vs appear to be constant at 6.0 and 3.2 km/s, respectively. This velocity structure is consistent with, but more detailed than, the structure determined by the oblique seismic experiment in the same hole. Since laboratory measurements of the compressional- and shear-wave velocities of samples from Hole 504B at P_confining = P_differential average 6.0 and 3.2 km/s, respectively, and show only slight increases with depth, we conclude that the velocity structure of Layer 2 is controlled almost entirely by variations in porosity, and that the crack porosity of Layer 2C approaches zero. A comparison between the compressional-wave velocities determined by logging and the formation porosities calculated from the results of the large-scale resistivity experiment using Archie's Law suggests that the velocity-porosity relation derived by Hyndman et al. (1984) for laboratory samples serves as an upper bound for Vp, and the noninteractive relation derived by Toksöz et al. (1976) for cracks with an aspect ratio a = 1/32 serves as a lower bound.
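Archie's Law, used above to convert formation resistivity to porosity, has the standard form below; the tortuosity factor a and cementation exponent m are empirical, and the values used in the study are not given in this abstract:

```latex
% Archie's Law: the formation factor F relates the resistivity of the
% water-saturated rock (R_o) to that of the pore water (R_w):
\[
F = \frac{R_o}{R_w} = a\,\phi^{-m},
\qquad
\phi = \left(\frac{a\,R_w}{R_o}\right)^{1/m},
\]
% where phi is porosity, a is the tortuosity factor (often near 1)
% and m the cementation exponent (commonly near 2 for consolidated rocks).
```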

Relevance:

80.00%

Publisher:

Abstract:

We study a multiuser multicarrier downlink communication system in which the base station (BS) employs a large number of antennas. Assuming frequency-division duplex operation, we provide a beam domain channel model as the number of BS antennas grows asymptotically large. With this model, we first derive a closed-form upper bound on the achievable ergodic sum-rate, and then develop necessary conditions to asymptotically maximize the upper bound with only statistical channel state information at the BS. Inspired by these conditions, we propose a beam division multiple access (BDMA) transmission scheme in which the BS communicates with users via different beams. For BDMA transmission, we design user scheduling to select users within non-overlapping beams, work out an optimal pilot design under a minimum mean square error criterion, and provide optimal pilot sequences by utilizing Zadoff-Chu sequences. The proposed BDMA scheme significantly reduces the pilot overhead as well as the processing complexity at the transceivers. Simulations demonstrate the high spectral efficiency of BDMA transmission and the bit error rate advantages of the proposed pilot sequences.
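A brief sketch of the Zadoff-Chu construction mentioned above (the standard odd-length form; the paper's specific root and length choices are not stated in this abstract):

```python
import numpy as np

def zadoff_chu(u: int, n_zc: int) -> np.ndarray:
    """Root-u Zadoff-Chu sequence of odd length n_zc, with
    gcd(u, n_zc) = 1. ZC sequences have constant amplitude and zero
    periodic autocorrelation at all nonzero lags, which is why they
    are popular as pilot sequences."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

# Check the ideal periodic autocorrelation for a root-5, length-63
# sequence: the lag-0 value is 63 and all other lags are ~0.
x = zadoff_chu(5, 63)
corr = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(x)))
print(np.round(np.abs(corr), 6)[:4])  # [63, 0, 0, 0]
```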

Relevance:

80.00%

Publisher:

Abstract:

INTRODUCTION: Differentiation between normal solid (non-cystic) pineal glands and pineal pathologies on brain MRI is difficult. The aim of this study was to assess the size of the solid pineal gland in children (0-5 years) and compare the findings with published pineoblastoma cases. METHODS: We retrospectively analyzed the size (width, height, planimetric area) of solid pineal glands on MRI in 184 non-retinoblastoma patients (73 female, 111 male) aged 0-5 years. The effect of age and gender on gland size was evaluated, and linear regression analysis was performed to analyze the relation between size and age. Ninety-nine percent prediction intervals around the mean were used to construct a normal size range per age, with the upper bound of the prediction interval taken as the cutoff for normalcy. RESULTS: There was no significant interaction of gender and age for any of the three pineal gland parameters (width, height, and area). Linear regression analysis gave upper 99 % prediction bounds of 7.9 mm, 4.8 mm, and 25.4 mm² for width, height, and area, respectively. The corresponding slopes (size increase per month) were 0.046 mm, 0.023 mm, and 0.202 mm². Ninety-three percent (95 % CI 66-100 %) of asymptomatic solid pineoblastomas were larger than the 99 % upper bound. CONCLUSION: This study establishes norms for solid pineal gland size in non-retinoblastoma children aged 0-5 years. Knowledge of the normal pineal gland size is helpful for the detection of pineal gland abnormalities, particularly pineoblastoma.
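A minimal sketch of the normal-range construction described in METHODS, assuming simple linear regression with a two-sided 99 % prediction interval; the data and variable names are invented:

```python
import numpy as np
from scipy import stats

def upper_prediction_bound(x, y, x_new, level=0.99):
    """Upper bound of the two-sided `level` prediction interval for a
    new observation at x_new under simple linear regression
    y = b0 + b1*x + noise."""
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)
    s = np.sqrt(np.sum(resid**2) / (n - 2))        # residual std. error
    se = s * np.sqrt(1 + 1/n + (x_new - x.mean())**2
                     / np.sum((x - x.mean())**2))  # prediction std. error
    t = stats.t.ppf((1 + level) / 2, df=n - 2)     # e.g. 0.995 quantile
    return b0 + b1 * x_new + t * se

# Hypothetical ages (months) and gland widths (mm):
age = np.linspace(0, 60, 50)
width = 4.0 + 0.05 * age + np.random.default_rng(0).normal(0, 0.6, 50)
print(upper_prediction_bound(age, width, x_new=36))  # cutoff at 36 months
```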