935 results for Upper bound method
Abstract:
The exponentially increasing demand for operational data rate has been met with technological advances in telecommunication systems such as advanced multilevel and multidimensional modulation formats, fast signal processing, and research into new media for signal transmission. Since current communication channels are essentially nonlinear, estimation of the Shannon capacity of modern nonlinear communication channels is required. This PhD research project targeted the study of the capacity limits of different nonlinear communication channels with a view to enabling a significant enhancement in the data rate of currently deployed fiber networks. In this study, a theoretical framework for calculating the Shannon capacity of nonlinear regenerative channels has been developed and illustrated with the example of the regenerative Fourier transform (RFT) proposed here. Moreover, the maximum gain in Shannon capacity due to regeneration (that is, the Shannon capacity of a system with ideal regenerators, which is the upper bound on capacity for all regenerative schemes) is calculated analytically. Thus, we derived a regenerative limit against which the capacity of any regenerative system can be compared, as an analogue of the seminal linear Shannon limit. A general optimization scheme (regenerative mapping) has been introduced and demonstrated on systems with different regenerative elements: phase-sensitive amplifiers and the multilevel regenerative schemes proposed here, namely the regenerative Fourier transform and the coupled nonlinear loop mirror.
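The linear Shannon limit that serves as the baseline for the regenerative gain above admits a one-line formula; a minimal Python sketch of that baseline (the RFT and regenerative mapping themselves are beyond a few lines):

```python
from math import log2

def awgn_capacity(snr):
    # classical Shannon capacity of the linear additive white Gaussian
    # noise channel, in bits per symbol: C = log2(1 + SNR); regenerative
    # channel capacities are compared against this baseline
    return log2(1.0 + snr)

# capacity grows only logarithmically with signal-to-noise ratio
assert awgn_capacity(1.0) == 1.0   # SNR = 1 gives exactly 1 bit/symbol
```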
Abstract:
In 1965 Levenshtein introduced deletion-correcting codes and found an asymptotically optimal family of 1-deletion-correcting codes. Over the years there has been little or no research on t-deletion-correcting codes for larger values of t. In this paper, we consider the problem of finding the maximal cardinality L2(n, t) of a binary t-deletion-correcting code of length n. We construct an infinite family of binary t-deletion-correcting codes. By computer search, we construct t-deletion codes for t = 2, 3, 4, 5 with lengths n ≤ 30. Some of these codes improve on earlier results by Hirschberg-Ferreira and Swart-Ferreira. Finally, we prove a recursive upper bound on L2(n, t) which is asymptotically worse than the best known bounds, but gives better estimates for small values of n.
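Levenshtein's asymptotically optimal 1-deletion codes mentioned above are the Varshamov-Tenengolts codes; a small Python sketch that builds VT_0(n) and verifies the single-deletion correction property by checking that the deletion balls of distinct codewords are pairwise disjoint:

```python
from itertools import product

def vt_code(n, a=0):
    # Varshamov-Tenengolts code VT_a(n): binary words of length n whose
    # weighted sum  sum(i * x_i, i = 1..n)  is congruent to a mod (n + 1)
    return [w for w in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(w, start=1)) % (n + 1) == a]

def deletion_ball(word):
    # all strings obtainable from word by deleting exactly one symbol
    return {word[:i] + word[i + 1:] for i in range(len(word))}

code = vt_code(6)
# single-deletion correction <=> deletion balls are pairwise disjoint
assert all(deletion_ball(code[i]).isdisjoint(deletion_ball(code[j]))
           for i in range(len(code)) for j in range(i + 1, len(code)))
```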
Abstract:
The paper has been presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June, 2006.
Abstract:
2000 Mathematics Subject Classification: Primary 34C07, secondary 34C08.
Abstract:
We present the design of nonlinear regenerative communication channels that have capacity above the classical Shannon capacity of the linear additive white Gaussian noise channel. The upper bound for regeneration efficiency is found and the asymptotic behavior of the capacity in the saturation regime is derived. © 2013 IEEE.
Abstract:
Brewin and Andrews (2016) propose that just 15% of people, or even fewer, are susceptible to false childhood memories. If this figure were true, then false memories would still be a serious problem. But the figure is higher than 15%. False memories occur even after a few short and low-pressure interviews, and with each successive interview they become richer, more compelling, and more likely to occur. It is therefore dangerously misleading to claim that the scientific data provide an “upper bound” on susceptibility to memory errors. We also raise concerns about the peer review process.
Abstract:
The present research concentrates on the fabrication of bulk aluminum matrix nanocomposite structures with carbon nanotube reinforcement. The objective of the work was to fabricate and characterize a multi-walled carbon nanotube (MWCNT) reinforced hypereutectic Al-Si (23 wt% Si, 2 wt% Ni, 1 wt% Cu, rest Al) nanocomposite bulk structure with a nanocrystalline matrix through thermal spray forming techniques, viz. plasma spray forming (PSF) and high-velocity oxy-fuel (HVOF) spray forming. This is the first research study to show that thermal spray forming can be successfully used to synthesize carbon nanotube reinforced nanocomposites. Microstructural characterization based on quantitative microscopy, scanning and transmission electron microscopy (SEM and TEM), energy dispersive spectroscopy (EDS), X-ray diffraction (XRD), Raman spectroscopy, and X-ray photoelectron spectroscopy (XPS) confirms (i) retention and macro/sub-macro level homogeneous distribution of multiwalled carbon nanotubes in the Al-Si matrix and (ii) evolution of nanostructured grains in the matrix. Formation of an ultrathin β-SiC layer on the MWCNT surface, due to the chemical reaction of Si atoms diffusing from the Al-Si alloy with C atoms from the outer walls of the MWCNTs, has been confirmed theoretically and experimentally. The presence of the SiC layer at the interface improves the wettability and the interfacial adhesion between the MWCNT reinforcement and the Al-Si matrix. Sintering of the as-sprayed nanocomposites was carried out in an inert environment for further densification. The as-sprayed PSF nanocomposite showed lower microhardness than the HVOF one, due to its higher porosity content and lower residual stress. The hardness of the nanocomposites increased with sintering time due to effective pore removal. A uniaxial tensile test on the bulk CNT nanocomposite was carried out, the first study of its kind. The tensile test results showed inconsistency in the data, attributed to the inhomogeneous microstructure and limitations of the test sample geometry. The elastic moduli of the nanocomposites were computed using different micromechanics models and compared with experimentally measured values. The elastic moduli measured by nanoindentation increased gradually with sintering, attributed to porosity removal. The experimentally measured values agreed better with the theoretical predictions, particularly in the case of the Hashin-Shtrikman bound method.
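The Hashin-Shtrikman comparison mentioned above uses closed-form bounds on the effective moduli of a two-phase composite; a sketch for the bulk modulus (the moduli and volume fractions below are illustrative placeholders, not the study's measured Al-Si/MWCNT values):

```python
def hs_bulk_bounds(k1, g1, f1, k2, g2):
    # Hashin-Shtrikman bounds on the effective bulk modulus of an
    # isotropic two-phase composite; evaluating the formula with each
    # phase in turn as the reference yields the two bounds
    f2 = 1.0 - f1
    def hs(ka, ga, fa, kb, fb):
        return ka + fb / (1.0 / (kb - ka) + 3.0 * fa / (3.0 * ka + 4.0 * ga))
    b1 = hs(k1, g1, f1, k2, f2)
    b2 = hs(k2, g2, f2, k1, f1)
    return min(b1, b2), max(b1, b2)

# hypothetical phases: soft matrix (K=10, G=5) and stiff filler (K=50, G=30)
lo, hi = hs_bulk_bounds(10.0, 5.0, 0.5, 50.0, 30.0)
```

The bounds always lie inside the simpler Voigt (arithmetic) and Reuss (harmonic) averages, which is a quick sanity check on the implementation.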
Abstract:
Protecting confidential information from improper disclosure is a fundamental security goal. While encryption and access control are important tools for ensuring confidentiality, they cannot prevent an authorized system from leaking confidential information to its publicly observable outputs, whether inadvertently or maliciously. Hence, secure information flow aims to provide end-to-end control of information flow. Unfortunately, the traditionally adopted policy of noninterference, which forbids all improper leakage, is often too restrictive. Theories of quantitative information flow address this issue by quantifying the amount of confidential information leaked by a system, with the goal of showing that it is intuitively "small" enough to be tolerated. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system.
This dissertation is concerned with program analysis for calculating the maximum leakage, or capacity, of confidential information in the context of deterministic systems and under three proposed entropy measures of information leakage: Shannon entropy leakage, min-entropy leakage, and g-leakage. In this context, it turns out that calculating the maximum leakage of a program reduces to counting the number of possible outputs that it can produce.
The new approach introduced in this dissertation is to determine two-bit patterns, the relationships among pairs of bits in the output; for instance, we might determine that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of possible outputs; hence, the maximum leakage can be bounded. We first describe a straightforward computation of the two-bit patterns using an automated prover. We then show a more efficient implementation that uses an implication graph to represent the two-bit patterns. It efficiently constructs the graph through the use of an automated prover, random executions, STP counterexamples, and deductive closure. The effectiveness of our techniques, both in terms of efficiency and accuracy, is shown through a number of case studies found in recent literature.
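The counting step can be made concrete with a toy example; the two patterns below are hypothetical, and the dissertation derives such patterns with an automated prover rather than by hand:

```python
from itertools import product
from math import log2

# hypothetical 3-bit output with two discovered two-bit patterns:
# bit 0 must differ from bit 1, and bit 1 must equal bit 2
patterns = [lambda b: b[0] != b[1],
            lambda b: b[1] == b[2]]

# outputs consistent with all patterns over-approximate the true outputs
feasible = [b for b in product((0, 1), repeat=3)
            if all(p(b) for p in patterns)]

# for a deterministic program, max min-entropy leakage = log2(#outputs),
# so the solution count gives an upper bound on the leakage in bits
max_leakage_bits = log2(len(feasible))
```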
Abstract:
Compressional- and shear-wave velocity logs (Vp and Vs, respectively) that were run to a sub-basement depth of 1013 m (1287.5 m sub-bottom) in Hole 504B suggest the presence of Layer 2A and document the presence of Layers 2B and 2C on the Costa Rica Rift. Layer 2A extends from the mudline to 225 m sub-basement and is characterized by compressional-wave velocities of 4.0 km/s or less. Layer 2B extends from 225 to 900 m and may be divided into two intervals: an upper level from 225 to 600 m in which Vp decreases slowly from 5.0 to 4.8 km/s, and a lower level from 600 to about 900 m in which Vp increases slowly to 6.0 km/s. In Layer 2C, which was logged for about 100 m to a depth of 1 km, Vp and Vs appear to be constant at 6.0 and 3.2 km/s, respectively. This velocity structure is consistent with, but more detailed than, the structure determined by the oblique seismic experiment in the same hole. Since laboratory measurements of the compressional- and shear-wave velocities of samples from Hole 504B at P_confining = P_differential average 6.0 and 3.2 km/s, respectively, and show only slight increases with depth, we conclude that the velocity structure of Layer 2 is controlled almost entirely by variations in porosity and that the crack porosity of Layer 2C approaches zero. A comparison between the compressional-wave velocities determined by logging and the formation porosities calculated from the results of the large-scale resistivity experiment using Archie's Law suggests that the velocity-porosity relation derived by Hyndman et al. (1984) for laboratory samples serves as an upper bound for Vp, and the noninteractive relation derived by Toksöz et al. (1976) for cracks with an aspect ratio a = 1/32 serves as a lower bound.
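Archie's Law, used above to turn the resistivity experiment into formation porosities, is a one-line relation; a sketch with the common default constants a and m (not necessarily those used for Hole 504B):

```python
def archie_porosity(rt, rw, a=1.0, m=2.0):
    # Archie's Law for a fully water-saturated formation:
    #   Rt = a * Rw / phi**m   =>   phi = (a * Rw / Rt) ** (1 / m)
    # rt: measured formation resistivity, rw: pore-water resistivity
    return (a * rw / rt) ** (1.0 / m)

# e.g. a formation 100x more resistive than its pore water
# with m = 2 implies 10% porosity
phi = archie_porosity(rt=100.0, rw=1.0)
```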
Abstract:
We study a multiuser multicarrier downlink communication system in which the base station (BS) employs a large number of antennas. Assuming frequency-division duplex operation, we provide a beam domain channel model as the number of BS antennas grows asymptotically large. With this model, we first derive a closed-form upper bound on the achievable ergodic sum-rate before developing necessary conditions to asymptotically maximize the upper bound, with only statistical channel state information at the BS. Inspired by these conditions, we propose a beam division multiple access (BDMA) transmission scheme, where the BS communicates with users via different beams. For BDMA transmission, we design user scheduling to select users within non-overlapping beams, work out an optimal pilot design under a minimum mean square error criterion, and provide optimal pilot sequences by utilizing Zadoff-Chu sequences. The proposed BDMA scheme significantly reduces the pilot overhead as well as the processing complexity at the transceivers. Simulations demonstrate the high spectral efficiency of BDMA transmission and the advantages in bit error rate performance of the proposed pilot sequences.
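The Zadoff-Chu sequences underlying the pilot design have the constant-amplitude zero-autocorrelation (CAZAC) property that makes them attractive as pilots; a minimal sketch for an odd length (the root and length below are illustrative, not the paper's parameters):

```python
import cmath

def zadoff_chu(u, n_zc):
    # root-u Zadoff-Chu sequence of odd length n_zc, gcd(u, n_zc) = 1:
    #   x[n] = exp(-j * pi * u * n * (n + 1) / n_zc)
    return [cmath.exp(-1j * cmath.pi * u * n * (n + 1) / n_zc)
            for n in range(n_zc)]

def periodic_autocorr(seq, shift):
    # periodic (cyclic) autocorrelation at a given shift
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n].conjugate() for i in range(n))

zc = zadoff_chu(u=1, n_zc=7)
# constant amplitude ...
assert all(abs(abs(x) - 1.0) < 1e-9 for x in zc)
# ... and zero autocorrelation at every nonzero cyclic shift
assert all(abs(periodic_autocorr(zc, s)) < 1e-9 for s in range(1, 7))
```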
Abstract:
INTRODUCTION: Differentiation between normal solid (non-cystic) pineal glands and pineal pathologies on brain MRI is difficult. The aim of this study was to assess the size of the solid pineal gland in children (0-5 years) and compare the findings with published pineoblastoma cases. METHODS: We retrospectively analyzed the size (width, height, planimetric area) of solid pineal glands in 184 non-retinoblastoma patients (73 female, 111 male) aged 0-5 years on MRI. The effect of age and gender on gland size was evaluated. Linear regression analysis was performed to analyze the relation between size and age. Ninety-nine percent prediction intervals around the mean were constructed to define a normal size range per age, with the upper bound of the prediction interval serving as the cutoff for normalcy. RESULTS: There was no significant interaction of gender and age for any of the three pineal gland parameters (width, height, and area). Linear regression analysis gave 99% upper prediction bounds of 7.9 mm, 4.8 mm, and 25.4 mm², respectively, for width, height, and area. The slopes (size increase per month) of the three parameters were 0.046, 0.023, and 0.202, respectively. Ninety-three percent (95% CI 66-100%) of asymptomatic solid pineoblastomas were larger than the 99% upper bound. CONCLUSION: This study establishes norms for solid pineal gland size in non-retinoblastoma children aged 0-5 years. Knowledge of the size of the normal pineal gland is helpful for detection of pineal gland abnormalities, particularly pineoblastoma.
Abstract:
Digital rock physics combines modern imaging with advanced numerical simulations to analyze the physical properties of rocks. In this paper we suggest a special segmentation procedure, which is applied to a carbonate rock from Switzerland. The starting point is a CT scan of a specimen of Hauptmuschelkalk. The first step applied to the raw image data is a non-local means filter. We then apply different thresholds to identify pore and solid phases. Because we are aware of a non-negligible amount of unresolved microporosity, we also define intermediate phases. Based on this segmentation, we determine porosity-dependent values for the p-wave velocity and for the permeability. The porosity measured in the laboratory is then used to compare our numerical data with experimental data, and we observe good agreement. Future work includes an analytic validation of the numerical upper bound on the p-wave velocity, employing different filters for the image segmentation, and using data with higher resolution.
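The three-phase thresholding step can be sketched in a few lines; the gray-value thresholds below are placeholders, not the values calibrated for the Hauptmuschelkalk scan:

```python
def segment_voxel(gray, pore_max=60, solid_min=180):
    # two thresholds on an 8-bit gray value split voxels into pore,
    # solid, and an intermediate phase for unresolved microporosity
    if gray <= pore_max:
        return "pore"
    if gray >= solid_min:
        return "solid"
    return "intermediate"

# a tiny 1-D stand-in for filtered CT data
image = [10, 70, 200, 120, 55, 190]
labels = [segment_voxel(v) for v in image]
porosity = labels.count("pore") / len(labels)  # resolved porosity only
```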
Abstract:
This thesis deals with quantifying the resilience of a network of pavements. Calculations were carried out by modeling network performance under a set of possible damage-meteorological scenarios with known probabilities of occurrence. Resilience evaluation was performed a priori while accounting for optimal preparedness decisions and additional response actions that can be taken under each of the scenarios. Unlike the common assumption that the pre-event condition of all system components is uniform, fixed, and pristine, component condition evolution was incorporated herein. For this purpose, the health of each system component immediately prior to hazard event impact, under all considered scenarios, was associated with a serviceability rating. This rating was projected to reflect both natural deterioration and any intermittent improvements due to maintenance. The scheme was demonstrated for a hypothetical case study involving LaGuardia Airport. Results show that resilience can be impacted by the condition of the infrastructure elements, their natural deterioration processes, and prevailing maintenance plans. The findings imply that, in general, upper-bound values are reported in ordinary resilience work, and that including evolving component conditions is of value.
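At its core, the a priori evaluation over scenarios with known probabilities reduces to a probability-weighted sum; a toy sketch (the scenario probabilities and resilience values below are invented for illustration):

```python
def expected_resilience(scenarios):
    # scenarios: (probability, resilience-given-scenario) pairs whose
    # probabilities sum to one; resilience is taken on a 0..1 scale
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9
    return sum(p * r for p, r in scenarios)

# e.g. mild / moderate / severe damage-meteorological scenarios
scenarios = [(0.70, 0.95), (0.25, 0.80), (0.05, 0.40)]
network_resilience = expected_resilience(scenarios)
```

Preparedness and response decisions would enter by raising the per-scenario resilience values, while evolving component condition lowers them relative to the pristine-condition (upper-bound) assumption.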
Abstract:
In this work, we study a version of the general question of how well a Haar-distributed orthogonal matrix can be approximated by a random Gaussian matrix. Here, we consider a Gaussian random matrix (Formula presented.) of order n and apply to it the Gram–Schmidt orthonormalization procedure by columns to obtain a Haar-distributed orthogonal matrix (Formula presented.). If (Formula presented.) denotes the vector formed by the first m-coordinates of the ith row of (Formula presented.) and (Formula presented.), our main result shows that the Euclidean norm of (Formula presented.) converges exponentially fast to (Formula presented.), up to negligible terms. To show the extent of this result, we use it to study the convergence of the supremum norm (Formula presented.) and we find a coupling that improves by a factor (Formula presented.) the recently proved best known upper bound on (Formula presented.). Our main result also has applications in Quantum Information Theory.
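The column-wise Gram-Schmidt construction of a Haar-distributed orthogonal matrix from a Gaussian one, which the result above analyzes, can be sketched directly for small n (plain Python, columns stored as lists; the "(Formula presented.)" placeholders in the abstract hide the precise norms studied):

```python
import math
import random

def gram_schmidt_columns(g):
    # classical Gram-Schmidt by columns: maps an n x n matrix of i.i.d.
    # Gaussians (given as a list of columns) to an orthogonal matrix
    # whose distribution is Haar on the orthogonal group
    q = []
    for col in g:
        v = col[:]
        for u in q:
            proj = sum(a * b for a, b in zip(u, v))
            v = [a - proj * b for a, b in zip(v, u)]
        norm = math.sqrt(sum(a * a for a in v))
        q.append([a / norm for a in v])
    return q

n = 4
g = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
q = gram_schmidt_columns(g)
# the resulting columns are orthonormal
for i in range(n):
    for j in range(n):
        dot = sum(q[i][k] * q[j][k] for k in range(n))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-9
```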
Abstract:
In a paper by Biro et al. [7], a novel twist on guarding in art galleries is introduced. A beacon is a fixed point with an attraction pull that can move points within the polygon. Points move greedily to monotonically decrease their Euclidean distance to the beacon by moving straight towards the beacon or sliding on the edges of the polygon. The beacon attracts a point if the point eventually reaches the beacon. Unlike most variations of the art gallery problem, the beacon attraction has the intriguing property of being asymmetric, leading to separate definitions of attraction region and inverse attraction region. The attraction region of a beacon is the set of points that it attracts. For a given point in the polygon, the inverse attraction region is the set of beacon locations that can attract the point. We first study the characteristics of beacon attraction. We consider the quality of a "successful" beacon attraction and provide an upper bound of $\sqrt{2}$ on the ratio between the length of the beacon trajectory and the length of the geodesic distance in a simple polygon. In addition, we provide an example of a polygon with holes in which this ratio is unbounded. Next we consider the problem of computing the shortest beacon watchtower in a polygonal terrain and present an $O(n \log n)$ time algorithm to solve this problem. In doing this, we introduce $O(n \log n)$ time algorithms to compute the beacon kernel and the inverse beacon kernel in a monotone polygon. We also prove that $\Omega(n \log n)$ time is a lower bound for computing the beacon kernel of a monotone polygon. Finally, we study the inverse attraction region of a point in a simple polygon. We present algorithms to efficiently compute the inverse attraction region of a point for simple, monotone, and terrain polygons with respective time complexities $O(n^2)$, $O(n \log n)$ and $O(n)$. 
We show that the inverse attraction region of a point in a simple polygon has linear complexity and the problem of computing the inverse attraction region has a lower bound of $\Omega(n \log n)$ in monotone polygons and consequently in simple polygons.