967 results for quantum bound on the LW heavy particle mass
Abstract:
The Tokai to Kamioka (T2K) long-baseline neutrino experiment consists of a muon neutrino beam produced at the J-PARC accelerator, a near detector complex, and a large far detector located 295 km away. The present work uses the T2K event timing measurements at the near and far detectors to study the neutrino time of flight as a function of the derived neutrino energy. Under the assumption of a relativistic relation between energy and time of flight, constraints on the neutrino rest mass can be derived. The sub-GeV neutrino beam, in conjunction with a timing precision of order tens of ns, provides sensitivity to neutrino masses in the few MeV/c^2 range. We study the distribution of relative arrival times of muon and electron neutrino candidate events at the T2K far detector as a function of neutrino energy. The 90% C.L. upper limit on the mixture of neutrino mass eigenstates represented in the data sample is found to be m^2 < 5.6 MeV^2/c^4.
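As an illustrative aside (not part of the T2K analysis itself), the relativistic relation mentioned above gives, to leading order, a delay of a massive neutrino relative to light of dt ≈ (L / 2c) · (m c^2 / E)^2 over a baseline L. A minimal Python sketch with the 295 km baseline; the mass and energy values are illustrative placeholders, not T2K results:

```python
# Hedged sketch: leading-order arrival-time delay of a massive neutrino
# relative to light over a baseline L, dt ~ (L / 2c) * (m c^2 / E)^2.
C = 2.998e8    # speed of light, m/s
L = 295.0e3    # T2K far-detector baseline, m

def tof_delay_ns(mass_mev, energy_mev):
    """Delay in ns for rest mass `mass_mev` [MeV/c^2] at total energy `energy_mev` [MeV]."""
    return (L / (2.0 * C)) * (mass_mev / energy_mev) ** 2 * 1e9

# Illustrative values only: a ~2.4 MeV/c^2 mass at 600 MeV gives a delay of
# roughly 8 ns, i.e. within reach of tens-of-ns timing precision.
print(tof_delay_ns(2.4, 600.0))
```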
Abstract:
In this work, we use large eddy simulations (LES) and Lagrangian tracking to study the influence of gravity on particle statistics in fully developed turbulent upward/downward flow in a vertical channel and a vertical pipe at matched Kármán number. Only drag and gravity are considered in the equation of motion for the solid particles, which are assumed to have no influence on the flow field. Particle interactions with the wall are fully elastic. Our findings from the particle statistics confirm that: (i) gravity seems to modify both the quantitative and qualitative behavior of the particle distribution and of the statistics of the particle velocity in the wall-normal direction; (ii) in contrast, only the quantitative behavior of the particle velocity in the streamwise direction and of the root mean square of the velocity components is modified; (iii) the statistics of the fluid and the particles agree very well near the wall in channel and pipe flow at equal Kármán number; (iv) pipe curvature seems to have a quantitative and qualitative influence on the particle velocity and on the particle concentration in the wall-normal direction.
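As a hedged illustration of the point-particle model described above (drag and gravity only, one-way coupling, elastic wall collisions omitted), a minimal explicit-Euler update of dv/dt = (u - v)/tau_p + g could look as follows; the fluid velocity field, response time tau_p, and time step are illustrative placeholders rather than values from the study:

```python
import numpy as np

# Hedged sketch: one-way-coupled point particle with linear drag and gravity,
#   dv/dt = (u - v) / tau_p + g,   dx/dt = v.
# The fluid velocity field, tau_p and dt are illustrative placeholders.

def advance_particle(x, v, fluid_velocity, tau_p, g, dt, n_steps):
    """Explicit-Euler integration; `fluid_velocity(x)` returns the resolved velocity at x."""
    for _ in range(n_steps):
        u = fluid_velocity(x)
        v = v + dt * ((u - v) / tau_p + g)
        x = x + dt * v
    return x, v

# Toy usage: uniform upward flow with gravity along the negative streamwise axis.
fluid_velocity = lambda x: np.array([1.0, 0.0, 0.0])
g = np.array([-9.81, 0.0, 0.0])
x_final, v_final = advance_particle(np.zeros(3), np.zeros(3), fluid_velocity,
                                    tau_p=5e-3, g=g, dt=1e-4, n_steps=2000)
print(x_final, v_final)
```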
Abstract:
The problem of distributed compression for correlated quantum sources is considered. The classical version of this problem was solved by Slepian and Wolf, who showed that distributed compression could take full advantage of redundancy in the local sources created by the presence of correlations. Here it is shown that, in general, this is not the case for quantum sources, by proving a lower bound on the rate sum for irreducible sources of product states which is stronger than the one given by a naive application of Slepian-Wolf. Nonetheless, strategies taking advantage of correlation do exist for some special classes of quantum sources. For example, Devetak and Winter demonstrated the existence of such a strategy when one of the sources is classical. Optimal nontrivial strategies for a different extreme, sources of Bell states, are presented here. In addition, it is explained how distributed compression is connected to other problems in quantum information theory, including information-disturbance questions, entanglement distillation and quantum error correction.
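For reference, the classical Slepian-Wolf region mentioned above requires R1 ≥ H(X1|X2), R2 ≥ H(X2|X1) and R1 + R2 ≥ H(X1, X2). A small sketch evaluating these corner quantities for a toy joint distribution (the distribution is an illustrative placeholder, unrelated to the quantum sources considered in the paper):

```python
import numpy as np

# Hedged sketch: classical Slepian-Wolf rate region for a toy joint pmf p(x1, x2):
#   R1 >= H(X1|X2), R2 >= H(X2|X1), R1 + R2 >= H(X1, X2).

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def slepian_wolf_corners(p_joint):
    h_joint = entropy(p_joint.ravel())
    h1 = entropy(p_joint.sum(axis=1))   # H(X1)
    h2 = entropy(p_joint.sum(axis=0))   # H(X2)
    return h_joint - h2, h_joint - h1, h_joint   # H(X1|X2), H(X2|X1), H(X1,X2)

# Toy correlated pair: doubly symmetric binary source with crossover 0.1.
p = np.array([[0.45, 0.05],
              [0.05, 0.45]])
print(slepian_wolf_corners(p))   # ~ (0.47, 0.47, 1.47) bits
```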
Abstract:
Beginning with the ‘frog-leg experiment’ by Galvani (1786), followed by the demonstrations of the voltaic pile by Volta (1792) and the lead-acid accumulator by Planté (1859), several battery chemistries have been developed and commercialized. The development of the lithium-ion rechargeable battery in the early 1990s was a breakthrough in the science and technology of batteries. Owing to its high energy density and high operating voltage, the Li-ion battery has become the battery of choice for portable applications such as notebook computers, cellular telephones and camcorders. Major efforts are underway to develop large-size batteries for electric vehicle applications. The origin of the lithium-ion battery lies in the discovery that Li+ ions can be reversibly intercalated into, and de-intercalated from, the van der Waals gap between the graphene sheets of carbon materials at a potential close to that of the Li/Li+ electrode. By employing carbon as the negative electrode material in rechargeable lithium-ion batteries, the problems associated with metallic lithium in rechargeable lithium batteries have been mitigated. Complementary investigations of intercalation compounds based on transition metals established LiCoO2 as a promising cathode material. By employing carbon and LiCoO2, respectively, as the negative and positive electrodes in a non-aqueous lithium-salt electrolyte, a Li-ion cell with a voltage of about 3.5 V was obtained. Subsequent to the commercialization of Li-ion batteries, research activities concerning the various battery components began in laboratories across the globe. Regarding positive electrode materials, research priorities have been to develop active materials addressing safety, high capacity, low cost, high stability with long cycle life, environmental compatibility, and the understanding of relationships between crystallographic and electrochemical properties. The present review discusses the published literature on different positive electrode materials for Li-ion batteries, with a focus on the effect of particle size on electrochemical performance.
Abstract:
Motivated by the viscosity bound in gauge/gravity duality, we consider the ratio of shear viscosity (eta) to entropy density (s) in black hole accretion flows. We use both an ideal gas equation of state and the QCD equation of state obtained from the lattice for the fluid accreting onto a Kerr black hole. The QCD equation of state is considered because the temperature of the accreting matter is expected to approach 10^12 K in certain hot flows. We find that in both cases eta/s is small only for primordial black holes, and is several orders of magnitude larger than for any known fluid for stellar and supermassive black holes. We show that a lower bound on the mass of primordial black holes leads to a lower bound on eta/s and vice versa. Finally, we speculate that the Shakura-Sunyaev viscosity parameter should decrease with increasing density and/or temperature.
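For orientation, the conjectured Kovtun-Son-Starinets (KSS) bound from gauge/gravity duality that motivates this comparison is eta/s ≥ hbar/(4 pi k_B); a one-line hedged calculation of its numerical value:

```python
# Hedged sketch: numerical value of the conjectured KSS bound eta/s >= hbar / (4 pi k_B).
import math

HBAR = 1.054571817e-34   # J s
K_B  = 1.380649e-23      # J / K

kss_bound = HBAR / (4.0 * math.pi * K_B)
print(f"eta/s lower bound ~ {kss_bound:.2e} K s")   # ~ 6.1e-13 K s
```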
Abstract:
Nine hydrographic cruises were performed on the Gulf of Lion continental margin between June 1993 and July 1996. These observations are analysed to quantify the fluxes of particulate matter and organic carbon transported along the slope by the Northern Current and to characterise their seasonal variability. Concentrations of particulate matter and organic carbon are derived from light-transmission data and water sample analyses. The circulation is estimated from the geostrophic current field. The uncertainty on the transport estimate, related to the error on the prediction of particle concentrations from light-transmission data and the error on the velocities, is assessed. The particulate matter inflow entering the Gulf of Lion off Marseille is comparable to the Rhône River input and varies seasonally, with a maximum transport between autumn and spring. These variations result from changes in the water flux rather than changes in the particulate matter concentration. Residual transports of particulate matter and organic carbon across the entire Gulf of Lion are calculated for two cruises enclosing the domain, performed in February 1995 and July 1996. The particulate matter budgets indicate a larger export from the shelf to the deep ocean in February 1995 (110 ± 20 kg/s) than in July 1996 (25 ± 18 kg/s). Likewise, the mean particulate organic carbon export is 12.8 ± 0.5 kg/s in February 1995 and 0.8 ± 0.2 kg/s in July 1996. This winter increase is due to larger allochthonous and autochthonous inputs and to enhanced shelf-slope exchange processes, in particular the cascading of cold water from the shelf. The export of particulate matter by the horizontal currents is, moreover, two orders of magnitude larger than the vertical particulate fluxes measured at the same time with sediment traps on the continental slope.
Abstract:
The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation in the five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and the polar oceans remains relatively low. Data from 60 studies that investigated the response of a mix of organisms or of natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance, with considerably more data archived on calcification and primary production than on other processes, has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables, and to define best practices for archiving ocean acidification data.
Abstract:
In this paper, we are concerned with low-complexity detection in large multiple-input multiple-output (MIMO) systems with tens of transmit/receive antennas. Our new contributions in this paper are two-fold. First, we propose a low-complexity algorithm for large-MIMO detection based on a layered low-complexity local neighborhood search. Second, we obtain a lower bound on the maximum-likelihood (ML) bit error performance using the local neighborhood search. The advantages of the proposed ML lower bound are: i) it is easily obtained for MIMO systems with a large number of antennas because of the inherent low complexity of the search algorithm, ii) it is tight at moderate-to-high SNRs, and iii) it can be tightened at low SNRs by increasing the number of symbols in the neighborhood definition. Interestingly, the proposed detection algorithm based on the layered local search achieves a bit error performance quite close to this lower bound for large numbers of antennas and higher-order QAM. For example, in a 32 x 32 V-BLAST MIMO system, the proposed detection algorithm performs to within 1.7 dB of the proposed ML lower bound at 10^-3 BER for 16-QAM (128 bps/Hz), and to within 4.5 dB of the bound for 64-QAM (192 bps/Hz).
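A hedged sketch of the flavor of local neighborhood search referred to above (illustrative only, not the authors' exact layered algorithm): starting from a quantized zero-forcing estimate, the single-symbol change that most reduces ||y - Hx||^2 is accepted repeatedly until a local minimum is reached.

```python
import numpy as np

# Hedged sketch of a symbol-wise local neighborhood search for MIMO detection
# (illustrative, not the paper's exact layered algorithm).  At each step the
# single-symbol change that most reduces ||y - Hx||^2 is accepted; the search
# stops at a local minimum of the cost.

def local_search_detect(y, H, alphabet, x0):
    x = x0.copy()
    cost = np.linalg.norm(y - H @ x) ** 2
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            for s in alphabet:
                if s == x[i]:
                    continue
                x_try = x.copy()
                x_try[i] = s
                c = np.linalg.norm(y - H @ x_try) ** 2
                if c < cost:
                    x, cost, improved = x_try, c, True
    return x

# Toy usage: 4 x 4 real MIMO with a 4-PAM alphabet and a quantized ZF initial guess.
rng = np.random.default_rng(0)
alphabet = np.array([-3.0, -1.0, 1.0, 3.0])
H = rng.standard_normal((4, 4))
x_true = rng.choice(alphabet, size=4)
y = H @ x_true + 0.1 * rng.standard_normal(4)
x_zf = np.linalg.pinv(H) @ y
x0 = alphabet[np.abs(x_zf[:, None] - alphabet[None, :]).argmin(axis=1)]
print(local_search_detect(y, H, alphabet, x0), x_true)
```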
Abstract:
Diversity embedded space-time codes are high-rate codes designed such that they have a high-diversity code embedded within them. A recent work by Diggavi and Tse characterizes the performance limits that can be achieved by diversity embedded space-time codes in terms of the achievable diversity-multiplexing tradeoff (DMT). In particular, they have shown that the tradeoff is successively refinable for Rayleigh fading channels with one degree of freedom using superposition coding and successive interference cancellation (SIC). However, for multiple-input multiple-output (MIMO) channels, the question of successive refinability remained open. We consider MIMO channels under superposition coding and SIC. We derive an upper bound on the successive refinement characteristics of the DMT. We then construct explicit space-time codes that achieve the derived upper bound. These codes, constructed from cyclic division algebras, have minimal delay. Our results establish that when the channel has more than one degree of freedom, the DMT is not successively refinable using superposition coding and SIC. The channels considered in this work can have arbitrary fading statistics.
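For context, the optimal DMT of an M x N MIMO channel with i.i.d. Rayleigh fading (Zheng-Tse, for sufficiently long block length) is the piecewise-linear curve through the points (k, (M - k)(N - k)), k = 0, ..., min(M, N). A small hedged sketch evaluating it with placeholder antenna numbers:

```python
import numpy as np

# Hedged sketch: optimal DMT d*(r) of an M x N i.i.d. Rayleigh MIMO channel,
# the piecewise-linear curve through the points (k, (M - k)(N - k)).

def dmt(r, M, N):
    k = np.arange(min(M, N) + 1)
    return np.interp(r, k, (M - k) * (N - k))

print(dmt(0.5, 2, 2))   # 2.5: halfway between (0, 4) and (1, 1)
print(dmt(1.5, 2, 2))   # 0.5: halfway between (1, 1) and (2, 0)
```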
Abstract:
This work derives inner and outer bounds on the generalized degrees of freedom (GDOF) of the K-user symmetric MIMO Gaussian interference channel. For the inner bound, an achievable GDOF is derived by employing a combination of treating interference as noise, zero-forcing at the receivers, interference alignment (IA), and extending the Han-Kobayashi (HK) scheme to K users, depending on the number of antennas and the INR/SNR level. An outer bound on the GDOF is derived using a combination of the notion of cooperation and providing side information to the receivers. Several interesting conclusions are drawn from the bounds. For example, in terms of the achievable GDOF in the weak interference regime, when the number of transmit antennas (M) is equal to the number of receive antennas (N), treating interference as noise performs the same as the HK scheme and is GDOF optimal. For K > N/M + 1, a combination of the HK and IA schemes performs the best among the schemes considered. However, for N/M < K ≤ N/M + 1, the HK scheme is found to be GDOF optimal.
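As a reference point for the symmetric GDOF characterizations discussed above, the classical per-user GDOF of the 2-user scalar Gaussian interference channel (the Etkin-Tse-Wang "W" curve) can be written down directly; a hedged sketch, with alpha = log INR / log SNR:

```python
# Hedged sketch: per-user symmetric GDOF of the 2-user scalar Gaussian
# interference channel (the Etkin-Tse-Wang "W" curve).

def gdof_2user_siso(alpha):
    if alpha <= 0.5:
        return 1.0 - alpha          # weak interference: treat interference as noise
    if alpha <= 2.0 / 3.0:
        return alpha
    if alpha <= 1.0:
        return 1.0 - alpha / 2.0
    if alpha <= 2.0:
        return alpha / 2.0          # strong interference
    return 1.0                      # very strong interference: no GDOF loss

for a in (0.25, 0.5, 2.0 / 3.0, 1.0, 2.0, 3.0):
    print(a, gdof_2user_siso(a))
```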
Abstract:
We consider the MIMO X channel (XC), a system consisting of two transmit-receive pairs in which each transmitter communicates with both receivers. Both the transmitters and the receivers are equipped with multiple antennas. First, we derive an upper bound on the sum-rate capacity of the MIMO XC under an individual power constraint at each transmitter. The sum-rate capacity of the two-user multiple access channel (MAC) that results when receiver cooperation is assumed forms an upper bound on the sum-rate capacity of the MIMO XC. We tighten this bound by considering noise correlation between the receivers and deriving the worst noise covariance matrix. It is shown that the worst noise covariance matrix is a saddle point of a zero-sum, two-player convex-concave game, which is solved through a primal-dual interior point method that solves the maximization and the minimization parts of the problem simultaneously. Next, we propose an achievable scheme which employs dirty paper coding at the transmitters and successive decoding at the receivers. We show that the derived upper bound is close to the achievable region of the proposed scheme at low to medium SNRs.
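A hedged sketch of the cooperative-receiver bounding step described above: once the two receivers are pooled into a single MAC receiver, the sum rate for fixed Gaussian input covariances Q1, Q2 and receive noise covariance N is log2 det(I + N^-1 (H1 Q1 H1^H + H2 Q2 H2^H)). The matrices below are illustrative placeholders, and the worst-case noise-correlation optimization from the paper is not modeled:

```python
import numpy as np

# Hedged sketch: MAC sum rate with fully cooperating receivers, for fixed
# Gaussian input covariances Q1, Q2 and receive noise covariance Nc:
#   R_sum = log2 det(I + Nc^-1 (H1 Q1 H1^H + H2 Q2 H2^H)).
# All matrices are illustrative placeholders.

def mac_sum_rate(H1, H2, Q1, Q2, Nc):
    S = H1 @ Q1 @ H1.conj().T + H2 @ Q2 @ H2.conj().T
    M = np.eye(S.shape[0]) + np.linalg.solve(Nc, S)
    return np.real(np.log2(np.linalg.det(M)))

rng = np.random.default_rng(1)
H1 = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
H2 = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
Q1 = Q2 = np.eye(2) / 2      # per-transmitter power constraint of 1 (placeholder)
Nc = np.eye(4)               # white noise across the stacked receive antennas
print(mac_sum_rate(H1, H2, Q1, Q2, Nc))
```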
Abstract:
This paper derives outer bounds for the 2-user symmetric linear deterministic interference channel (SLDIC) with limited-rate transmitter cooperation and perfect secrecy constraints at the receivers. Five outer bounds are derived, under different assumptions of providing side information to receivers and partitioning the encoded message/output depending on the relative strength of the signal and the interference. The usefulness of these outer bounds is shown by comparing the bounds with the inner bound on the achievable secrecy rate derived by the authors in a previous work. Also, the outer bounds help to establish that sharing random bits through the cooperative link can achieve the optimal rate in the very high interference regime.
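A hedged sketch of the underlying linear deterministic model (in the Avestimehr-Diggavi-Tse sense): each received vector is the modulo-2 sum of down-shifted transmit vectors, e.g. y1 = S^(q - n_d) x1 XOR S^(q - n_c) x2, with S the q x q down-shift matrix. The link levels below are illustrative placeholders; the cooperation and secrecy constraints studied in the paper are not modeled:

```python
import numpy as np

# Hedged sketch: 2-user symmetric linear deterministic IC over GF(2).
# y1 = S^(q - n_d) x1  XOR  S^(q - n_c) x2, with S the q x q down-shift matrix;
# n_d (direct) and n_c (cross) link levels are illustrative placeholders.

def shift_matrix(q):
    S = np.zeros((q, q), dtype=int)
    for i in range(1, q):
        S[i, i - 1] = 1              # shifts a vector down by one signal level
    return S

def ldic_output(x1, x2, n_d, n_c, q):
    S = shift_matrix(q)
    Sd = np.linalg.matrix_power(S, q - n_d)
    Sc = np.linalg.matrix_power(S, q - n_c)
    return (Sd @ x1 + Sc @ x2) % 2

q, n_d, n_c = 5, 5, 3
x1 = np.array([1, 0, 1, 1, 0])
x2 = np.array([0, 1, 1, 0, 1])
print(ldic_output(x1, x2, n_d, n_c, q))
```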
Abstract:
This paper derives outer bounds on the sum rate of the K-user MIMO Gaussian interference channel (GIC). Three outer bounds are derived, under different assumptions of cooperation and providing side information to the receivers. The novelty in the derivation lies in the careful selection of side information, which results in the cancellation of the negative differential entropy terms containing signal components, leading to a tractable outer bound. The overall outer bound is obtained by taking the minimum of the three outer bounds. The derived bounds are simplified for the MIMO Gaussian symmetric IC to obtain outer bounds on the generalized degrees of freedom (GDOF). The relative performance of the bounds yields insight into the performance limits of multiuser MIMO GICs and the relative merits of different schemes for interference management. These insights are confirmed by establishing the optimality of the bounds in specific cases, using an inner bound on the GDOF derived by the authors in a previous work. It is also shown that many of the existing results on the GDOF of the GIC can be obtained as special cases of the bounds, e.g., by setting K = 2 or the number of antennas at each user to 1.
Abstract:
The optimal power-delay tradeoff is studied for a time-slotted point-to-point link with independent and identically distributed fading, with perfect channel state information at both the transmitter and the receiver, and with random packet arrivals to the transmitter queue. It is assumed that the transmitter can control the number of packets served by controlling the transmit power in each slot. The optimal tradeoff between average power and average delay is analyzed for stationary and monotone transmitter policies. For such policies, an asymptotic lower bound on the minimum average delay of the packets is obtained as the average transmitter power approaches the minimum average power required for transmitter queue stability. The asymptotic lower bound on the minimum average delay is obtained from geometric upper bounds on the stationary distribution of the queue length. This approach, which uses geometric upper bounds, also leads to an intuitive explanation of the asymptotic behavior of the average delay. The asymptotic lower bounds, together with previously known asymptotic upper bounds, are used to identify three new cases in which the order of the asymptotic behavior differs from that obtained for a previously considered approximate model, in which the transmit power is a strictly convex function of the real-valued service batch size for every fade state.
Abstract:
We propose an algorithm for solving optimization problems defined on a subset of the cone of symmetric positive semidefinite matrices. This algorithm relies on the factorization X = YY^T, where the number of columns of Y fixes an upper bound on the rank of the positive semidefinite matrix X. It is thus very effective for solving problems that have a low-rank solution. The factorization X = YY^T leads to a reformulation of the original problem as an optimization on a particular quotient manifold. The present paper discusses the geometry of that manifold and derives a second-order optimization method with guaranteed quadratic convergence. It furthermore provides conditions on the rank of the factorization that ensure equivalence with the original problem. In contrast to existing methods, the proposed algorithm converges monotonically to the sought solution. Its numerical efficiency is evaluated on two applications: the maximal cut of a graph and the problem of sparse principal component analysis.
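A hedged sketch of the low-rank factorization idea applied to the max-cut SDP relaxation, max <L, X>/4 subject to diag(X) = 1 and X positive semidefinite: writing X = YY^T with unit-norm rows of Y keeps the diagonal constraint, and the resulting problem is handled here with plain projected gradient ascent rather than the second-order Riemannian method of the paper:

```python
import numpy as np

# Hedged sketch: low-rank factorization X = Y Y^T for the max-cut SDP
#   max <L, X> / 4   s.t.  diag(X) = 1,  X positive semidefinite.
# Unit-norm rows of Y keep diag(Y Y^T) = 1; simple projected gradient ascent
# stands in for the paper's second-order Riemannian trust-region method.

def maxcut_lowrank(L, p, steps=500, lr=0.01, seed=0):
    n = L.shape[0]
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n, p))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    for _ in range(steps):
        G = L @ Y                                        # ascent direction (prop. to gradient of <L, Y Y^T>)
        Y = Y + lr * G
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)    # re-project each row onto the unit sphere
    return Y, np.trace(L @ (Y @ Y.T)) / 4.0

# Toy usage: Laplacian of a 4-cycle; its maximum cut has value 4.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
Y, val = maxcut_lowrank(L, p=2)
print(val)   # typically close to 4 for this toy graph
```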