981 results for coding theory
Abstract:
We consider multicast flow problems where either all of the nodes or only a subset of the nodes may be in session. Traffic from each node in the session has to be sent to every other node in the session. If the session does not consist of all the nodes, the remaining nodes act as relays. The nodes are connected by undirected edges whose capacities are independent and identically distributed random variables. We study the asymptotics of the capacity region (with network coding) in the limit of a large number of nodes, and show that the normalized sum rate converges to a constant almost surely. We then provide a decentralized push-pull algorithm that asymptotically achieves this normalized sum rate.
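As a toy numerical illustration of the concentration phenomenon described in this abstract (not the paper's actual capacity computation or push-pull algorithm), the sketch below draws i.i.d. Uniform(0,1) capacities on the edges of a complete graph and tracks the smallest normalized singleton-cut value; all names and parameter choices are illustrative assumptions:

```python
import random

def min_normalized_incident_capacity(n, seed=0):
    """Draw i.i.d. Uniform(0,1) capacities on the edges of a complete
    graph on n nodes and return the minimum, over nodes, of the total
    incident capacity divided by (n - 1).  Each node's singleton cut
    bounds the traffic that node can source or sink, and this minimum
    concentrates around the edge-capacity mean (0.5) as n grows."""
    random.seed(seed)
    cap = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            cap[i][j] = cap[j][i] = random.random()
    return min(sum(row) / (n - 1) for row in cap)

for n in (20, 100, 400):
    print(n, round(min_normalized_incident_capacity(n), 3))
```

The printed values creep toward 0.5 as n grows, mirroring the flavor of the almost-sure convergence of normalized rates; the achievability side requires the paper's decentralized push-pull algorithm, which is not sketched here.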
Abstract:
Perfect space-time block codes (STBCs) are based on four design criteria: full rateness, nonvanishing determinant, cubic shaping, and uniform average transmitted energy per antenna per time slot. Cubic shaping and transmission at uniform average energy per antenna per time slot are important from the perspective of energy efficiency of STBCs. The shaping criterion demands that the generator matrix of the lattice from which each layer of the perfect STBC is carved be unitary. In this paper, it is shown that unitariness is not a necessary requirement for energy efficiency in the context of space-time coding with finite input constellations, and an alternative criterion is provided that enables one to obtain full-rate (a rate of n complex symbols per channel use for an n-transmit-antenna system) STBCs with larger normalized minimum determinants than the perfect STBCs. Further, two such STBCs, one each for 4 and 6 transmit antennas, are presented and shown to have larger normalized minimum determinants than the comparable perfect STBCs, which hitherto had the best-known normalized minimum determinants.
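The minimum-determinant criterion mentioned above can be made concrete on the simplest full-rate code, the 2x2 Alamouti STBC with a QPSK input constellation (a hedged sketch; the paper's 4- and 6-antenna codes are far more involved):

```python
import itertools

# 4-QAM (QPSK) alphabet; any finite constellation works here.
QPSK = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]

def alamouti(s1, s2):
    """2x2 Alamouti space-time codeword."""
    return [[s1, -s2.conjugate()], [s2, s1.conjugate()]]

def abs_det(m):
    """|det| of a 2x2 matrix."""
    return abs(m[0][0] * m[1][1] - m[0][1] * m[1][0])

# Because the code is linear in the symbols, the minimum determinant
# over distinct codeword pairs equals the minimum over nonzero
# symbol-difference pairs.
pairs = list(itertools.product(QPSK, repeat=2))
min_det = min(abs_det(alamouti(a1 - b1, a2 - b2))
              for (a1, a2) in pairs for (b1, b2) in pairs
              if (a1, a2) != (b1, b2))
print(min_det)  # → 4.0
```

A nonvanishing (here, bounded-below) minimum determinant is what guarantees full diversity and controls the coding gain as the constellation grows.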
Abstract:
We provide experimental evidence supporting the vectorial theory for determining the electric field at and near the geometrical focus of a cylindrical lens. This theory provides the precise distribution of the field and its polarization effects. Experimental results show a close match (approximately 95%, using a χ²-test) with the simulation results obtained using the vectorial theory. Light-sheets generated with both low- and high-NA cylindrical lenses show the importance of the vectorial theory for the further development of light-sheet techniques. Potential applications are in planar imaging systems (such as SPIM, IML-SPIM, and imaging cytometry) and spectroscopy. Microsc. Res. Tech. 77:105-109, 2014. © 2014 Wiley Periodicals, Inc.
Abstract:
A balance between excitatory and inhibitory synaptic currents is thought to be important for several aspects of information processing in cortical neurons in vivo, including gain control, bandwidth, and receptive field structure. These factors will affect the firing rate of cortical neurons and their reliability, with consequences for their information coding and energy consumption. Yet how balanced synaptic currents contribute to the coding efficiency and energy efficiency of cortical neurons remains unclear. We used single-compartment computational models with stochastic voltage-gated ion channels to determine whether synaptic regimes that produce balanced excitatory and inhibitory currents have specific advantages over other input regimes. Specifically, we compared models with only excitatory synaptic inputs to those with equal excitatory and inhibitory conductances, and to those with stronger inhibitory than excitatory conductances (i.e., approximately balanced synaptic currents). Using these models, we show that balanced synaptic currents evoke fewer spikes per second than excitatory inputs alone or equal excitatory and inhibitory conductances. However, spikes evoked by balanced synaptic inputs are more informative (bits/spike), so that spike trains evoked by all three regimes have similar information rates (bits/s). Consequently, because spikes dominate the energy consumption of our computational models, approximately balanced synaptic currents are also more energy efficient than other synaptic regimes. Thus, by producing fewer, more informative spikes, approximately balanced synaptic currents in cortical neurons can promote both coding efficiency and energy efficiency.
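A minimal sketch of the firing-rate comparison described above, using a leaky integrate-and-fire neuron with static excitatory/inhibitory conductances plus small voltage noise (a crude stand-in for the paper's stochastic ion-channel models; all parameter values are assumptions chosen for illustration):

```python
import random

def simulate(g_exc, g_inh, t_ms=1000.0, dt=0.1, seed=1):
    """Leaky integrate-and-fire neuron with static excitatory and
    inhibitory conductances and small Gaussian voltage noise; returns
    the number of spikes in t_ms milliseconds."""
    random.seed(seed)
    C, g_leak = 0.25, 0.010                    # nF, uS
    E_leak, E_exc, E_inh = -70.0, 0.0, -80.0   # mV
    v_thresh, v_reset = -50.0, -65.0           # mV
    v, spikes = E_leak, 0
    for _ in range(int(t_ms / dt)):
        i_ion = (g_leak * (E_leak - v) + g_exc * (E_exc - v)
                 + g_inh * (E_inh - v))        # nA
        v += dt * i_ion / C + random.gauss(0.0, 0.3)
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

n_exc = simulate(g_exc=0.005, g_inh=0.000)  # excitation alone
n_bal = simulate(g_exc=0.005, g_inh=0.010)  # inhibition-dominated regime
print(n_exc, n_bal)
```

With excitation alone the steady-state potential sits above threshold, so the model fires steadily; with the stronger inhibitory conductance it sits roughly 10 mV below threshold and spikes only on noise excursions, echoing the "fewer spikes" observation. The information-rate and energy calculations of the paper are beyond this sketch.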
Abstract:
In this work, we consider two-dimensional (2-D) binary channels in which the 2-D error patterns are constrained so that errors cannot occur in adjacent horizontal or vertical positions. We consider probabilistic and combinatorial models for such channels. A probabilistic model is obtained from a 2-D random field defined by Roth, Siegel and Wolf (2001). Based on the conjectured ergodicity of this random field, we obtain an expression for the capacity of the 2-D non-adjacent-errors channel. We also derive an upper bound for the asymptotic coding rate in the combinatorial model.
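The combinatorial model above amounts to counting binary arrays in which no two 1s are adjacent horizontally or vertically (the hard-square constraint on error patterns). A transfer-matrix count for small arrays, whose per-symbol growth rate underlies bounds of this kind, can be sketched as follows (an illustration, not the paper's bound):

```python
from math import log2

def count_hard_square(rows, cols):
    """Count binary rows x cols arrays with no two 1s adjacent
    horizontally or vertically, via a row-by-row transfer-matrix DP."""
    # Row states with no two horizontally adjacent 1s.
    states = [s for s in range(1 << cols) if (s & (s << 1)) == 0]
    counts = {s: 1 for s in states}            # one-row arrays
    for _ in range(rows - 1):
        # Consecutive rows may not share a 1 in any column.
        counts = {s: sum(c for t, c in counts.items() if s & t == 0)
                  for s in states}
    return sum(counts.values())

n = 10
total = count_hard_square(n, n)
print(log2(total) / (n * n))  # per-symbol rate; tends to ~0.5878 as n grows
```

The limiting exponent log2(total)/n² is the hard-square entropy constant (approximately 0.5878), the natural combinatorial benchmark for constrained 2-D patterns of this type.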
Abstract:
We present a nonequilibrium strong-coupling approach to inhomogeneous systems of ultracold atoms in optical lattices. We demonstrate its application to the Mott-insulating phase of a two-dimensional Fermi-Hubbard model in the presence of a trap potential. Since the theory is formulated self-consistently, the numerical implementation relies on a massively parallel evaluation of the self-energy and the Green's function at each lattice site, employing thousands of CPUs. While the computation of the self-energy is straightforward to parallelize, the evaluation of the Green's function requires the inversion of a large sparse 10^d x 10^d matrix, with d > 6. As a crucial ingredient, our solution heavily relies on the smallness of the hopping as compared to the interaction strength and yields a widely scalable realization of a rapidly converging iterative algorithm which evaluates all elements of the Green's function. Results are validated by comparing with the homogeneous case via the local-density approximation. These calculations also show that the local-density approximation is valid in nonequilibrium setups without mass transport.
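The small-hopping iterative strategy described above can be illustrated, in a heavily simplified scalar setting, by a Jacobi iteration for a diagonally dominant tridiagonal system; the real algorithm operates on enormous sparse matrices across thousands of CPUs, so this is only a sketch of the convergence mechanism:

```python
def jacobi_green_column(diag, hop, b, iters=80):
    """Solve (D - T) x = b by Jacobi iteration, where D is a diagonal
    (interaction-dominated) part and T couples nearest neighbours with
    strength `hop`.  Convergence is geometric when |hop| is small
    relative to the diagonal -- the regime exploited in the abstract."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] + hop * ((x[i - 1] if i > 0 else 0.0)
                            + (x[i + 1] if i < n - 1 else 0.0))) / diag[i]
             for i in range(n)]
    return x

diag, hop = [4.0] * 8, 0.2        # |hop| << diag: fast convergence
b = [1.0] + [0.0] * 7             # unit vector -> one column of (D - T)^-1
x = jacobi_green_column(diag, hop, b)
residual = max(abs(diag[i] * x[i]
                   - hop * ((x[i - 1] if i > 0 else 0.0)
                            + (x[i + 1] if i < 7 else 0.0)) - b[i])
               for i in range(8))
print(residual)                   # essentially zero after 80 sweeps
```

Solving against each unit vector b yields one column of the inverse at a time, which is what makes an "all elements of the Green's function" evaluation embarrassingly parallel across columns.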
Abstract:
We develop a communication-theoretic framework for modeling 2-D magnetic recording channels. Using the model, we define the signal-to-noise ratio (SNR) for the channel considering several physical parameters, such as the channel bit density, code rate, bit aspect ratio, and noise parameters. We analyze the problem of optimizing the bit aspect ratio to maximize SNR. The read-channel architecture comprises a novel 2-D joint self-iterating equalizer and detection system with noise prediction capability. We evaluate the system performance based on our channel model through simulations. The coded performance with the 2-D equalizer-detector indicates approximately 5.5 dB of SNR gain over uncoded data.
Abstract:
Using van-der-Waals-corrected density functional theory calculations, we explore the possibility of engineering the local structure and morphology of high-surface-area graphene-derived materials to improve the uptake of methane and carbon dioxide for gas storage and sensing. We test the sensitivity of the gas adsorption energy to the introduction of native point defects, curvature, and the application of strain. The binding energy at topological point defect sites is inversely correlated with the number of missing carbon atoms, causing Stone-Wales defects to show the largest enhancement with respect to pristine graphene (approximately 20%). Improvements of similar magnitude are observed at concavely curved surfaces in buckled graphene sheets under compressive strain, whereas tensile strain tends to weaken gas binding. Trends for CO2 and CH4 are similar, although CO2 binding is generally stronger by approximately 4 to 5 kJ mol^-1. However, the differential between the adsorption of CO2 and CH4 is much higher on folded graphene sheets and at concave curvatures; this could possibly be leveraged for CH4/CO2 flow separation and gas-selective sensors.
Abstract:
Several time-dependent fluorescence Stokes shift (TDFSS) experiments have reported a slow power-law decay in the hydration dynamics of a DNA molecule. Such a power law has been observed neither in computer simulations nor in some other TDFSS experiments. Here we observe that a slow decay may originate from the collective ion contribution (since in experiments the DNA is immersed in a buffer solution), from groove-bound water, and from the DNA dynamics itself. In this work we first express the solvation time correlation function in terms of the dynamic structure factors of the solution. We use mode coupling theory to calculate analytically the time dependence of the collective ionic contribution. A power-law decay is seen to originate from an interplay between the long-range probe-ion direct correlation function and the ion-ion dynamic structure factor. Although the power-law decay is reminiscent of the Debye-Falkenhagen effect, solvation dynamics is dominated by ion-atmosphere relaxation times at longer length scales (small wave number) than in electrolyte friction. We further discuss why this power law may not originate from the water motions that have been computed by molecular dynamics simulations. Finally, we propose several experiments to check the predictions of the present theoretical work.
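For reference, the solvation time correlation function analyzed above is conventionally constructed from the time-dependent fluorescence Stokes shift as (a standard definition, not specific to this paper):

```latex
S(t) \;=\; \frac{\nu(t) - \nu(\infty)}{\nu(0) - \nu(\infty)},
```

where \nu(t) is the peak fluorescence frequency of the probe at time t after excitation; a power-law regime corresponds to S(t) \sim t^{-\alpha} over an intermediate time window, in contrast to the (multi)exponential decay expected from simple Debye relaxation.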
Abstract:
In this paper, we present a spectral finite element model (SFEM) using an efficient and accurate layerwise (zigzag) theory, which is applicable for wave propagation analysis of highly inhomogeneous laminated composite and sandwich beams. The theory assumes a layerwise linear variation superimposed with a global third-order variation across the thickness for the axial displacement. The conditions of zero transverse shear stress at the top and bottom and its continuity at the layer interfaces are subsequently enforced to make the number of primary unknowns independent of the number of layers, thereby making the theory as efficient as the first-order shear deformation theory (FSDT). The spectral element developed is validated by comparing the present results with those available in the literature. A comparison of the natural frequencies of simply supported composite and sandwich beams obtained by the present spectral element with the exact two-dimensional elasticity and FSDT solutions reveals that the FSDT yields highly inaccurate results for inhomogeneous sandwich beams and thick composite beams, whereas the present element based on the zigzag theory agrees very well with the exact elasticity solution for both thick and thin composite and sandwich beams. A significant deviation in the dispersion relations obtained using the accurate zigzag theory and the FSDT is also observed for composite beams at high frequencies. It is shown that the pure shear rotation mode always remains evanescent, contrary to what has been reported earlier. The SFEM is subsequently used to study wavenumber dispersion, free vibration and wave propagation time history in soft-core sandwich beams with composite faces for the first time in the literature. © 2014 Elsevier Ltd. All rights reserved.
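Schematically, the kinematic assumption described above (a layerwise linear field superimposed on a global third-order field for the axial displacement) can be written, within layer k, as (an illustrative generic form, not the paper's exact expressions):

```latex
u(x,z) \;=\; \underbrace{u_k(x) + z\,\psi_k(x)}_{\text{layerwise linear}}
\;+\; \underbrace{z^2\,\xi(x) + z^3\,\eta(x)}_{\text{global third-order}},
\qquad z \in \text{layer } k,
```

with the zero shear-traction conditions at the top and bottom faces and shear-stress continuity at the layer interfaces used to eliminate the layerwise unknowns u_k and \psi_k, so that the final count of primary variables is independent of the number of layers.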
Abstract:
Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na+ and K+ channels, with generator potential and graded potential models lacking voltage-gated Na+ channels. We identify three causes of information loss in the generator potential that are by-products of action potential generation: (1) the voltage-gated Na+ channels necessary for action potential generation increase intrinsic noise and (2) introduce non-linearities, and (3) the finite duration of the action potential creates a 'footprint' in the generator potential that obscures incoming signals. These three processes reduce information rates by approximately 50% in generator potentials, to approximately three times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of their lower information rates, generator potentials are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital: information loss and cost inflation.
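The relative figures quoted above can be combined into a back-of-envelope efficiency comparison (numbers are the abstract's rounded values; the graded-potential information rate is inferred from the stated ~50% loss in the generator potential):

```python
# Information rates and energy consumption relative to a spike train,
# taken (and, for the graded case, inferred) from the rounded figures
# in the abstract above.  Illustrative arithmetic only, not the data.
rel_info = {"spike_train": 1.0,
            "generator_potential": 3.0,   # ~3x the spike-train rate
            "graded_potential": 6.0}      # generator loses ~50% vs graded
rel_energy = {"spike_train": 1.0,
              "generator_potential": 0.1, # ~an order of magnitude cheaper
              "graded_potential": 0.1}

# Energy efficiency ~ information per unit energy.
efficiency = {k: rel_info[k] / rel_energy[k] for k in rel_info}
print(efficiency)
```

The resulting ordering (graded > generator >> spike train) matches the abstract's qualitative conclusion: both analogue signals are roughly an order of magnitude more efficient than spikes, with the generator potential in between.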
Abstract:
H.264/Advanced Video Coding (AVC) surveillance video encoders use the Skip mode specified by the standard to reduce bandwidth. They also use multiple frames as reference for motion-compensated prediction. In this paper, we propose two techniques to reduce the bandwidth and computational cost of static-camera surveillance video encoders without affecting detection and recognition performance. A spatial sampler is proposed to sample pixels that are segmented using a Gaussian mixture model. Modified weight updates are derived for the parameters of the mixture model to reduce floating-point computations. The storage pattern of the parameters in memory is also modified to improve cache performance. Skip selection is performed using the segmentation results of the sampled pixels. The second contribution is a low-computational-cost algorithm to choose the reference frames. The proposed reference frame selection algorithm reduces the cost of coding uncovered background regions. We also study the number of reference frames required to achieve good coding efficiency. Distortion over foreground pixels is measured to quantify the performance of the proposed techniques. Experimental results show bit rate savings of up to 94.5% over methods proposed in the literature on video surveillance data sets. The proposed techniques also provide up to 74.5% reduction in compression complexity without increasing the distortion over the foreground regions in the video sequence.
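The background-model maintenance underlying the Skip-selection step can be sketched with the standard Stauffer-Grimson per-pixel Gaussian-mixture update; the paper derives modified, cheaper weight updates and a cache-friendly memory layout, neither of which is reproduced here, and the learning-rate simplification for `rho` below is a common shortcut, not the paper's rule:

```python
def update_gmm_pixel(pixel, components, alpha=0.01, match_thresh=2.5):
    """Single-pixel update of a Gaussian-mixture background model.
    components: list of dicts with keys 'weight', 'mean', 'var'.
    Returns True if the pixel matched a component (background-like)."""
    matched = None
    for comp in components:
        if abs(pixel - comp["mean"]) <= match_thresh * comp["var"] ** 0.5:
            matched = comp
            break
    # Weight update: matched component reinforced, others decayed.
    for comp in components:
        m = 1.0 if comp is matched else 0.0
        comp["weight"] = (1 - alpha) * comp["weight"] + alpha * m
    # Mean/variance update for the matched component only.
    if matched is not None:
        rho = alpha / max(matched["weight"], 1e-6)  # common simplification
        matched["mean"] += rho * (pixel - matched["mean"])
        matched["var"] += rho * ((pixel - matched["mean"]) ** 2
                                 - matched["var"])
    # Renormalize the weights.
    total = sum(c["weight"] for c in components)
    for comp in components:
        comp["weight"] /= total
    return matched is not None

comps = [{"weight": 0.5, "mean": 100.0, "var": 20.0},
         {"weight": 0.5, "mean": 200.0, "var": 20.0}]
is_bg = update_gmm_pixel(101.0, comps)
print(is_bg, round(comps[0]["weight"], 3))  # → True 0.505
```

In a Skip-selection pipeline, a macroblock whose sampled pixels all match high-weight components can be coded in Skip mode, since it is statistically indistinguishable from static background.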
Abstract:
Three copper-azido complexes, [Cu4(N3)8(L1)2(MeOH)2]n (1), [Cu4(N3)8(L1)2] (2), and [Cu5(N3)10(L1)2]n (3) [L1 is the imine resulting from the condensation of pyridine-2-carboxaldehyde with 2-(2-pyridyl)ethylamine], have been synthesized using lower molar equivalents of the Schiff base ligand with Cu(NO3)2·3H2O and an excess of NaN3. Single-crystal X-ray structures show that the basic unit of complexes 1 and 2 contains Cu4(II) building blocks; however, they have distinct basic and overall structures due to a small change in the bridging mode of the peripheral pair of copper atoms in the linear tetranuclear structures. Interestingly, these changes are the result of changing the solvent system (MeOH/H2O to EtOH/H2O) used for the synthesis, without changing the proportions of the components (metal-to-ligand ratio 2:1). Using an even lower proportion of the ligand, another unique complex was isolated with Cu5(II) building units, forming a two-dimensional complex (3). Magnetic susceptibility measurements over a wide range of temperature exhibit the presence of both antiferromagnetic (very weak) and ferromagnetic exchanges within the tetranuclear unit structures. Density functional theory calculations (using the B3LYP functional and two different basis sets) have been performed on complexes 1 and 2 to provide a qualitative theoretical interpretation of their overall magnetic behavior.
Abstract:
Regenerating codes and codes with locality are two recently proposed coding schemes which, in addition to ensuring data collection and reliability, also enable efficient node repair. When repairing a failed node, regenerating codes seek to minimize the amount of data downloaded for node repair, while codes with locality attempt to minimize the number of helper nodes accessed. This paper presents results in two directions. In one, it extends the notion of codes with locality so as to permit local recovery of an erased code symbol even in the presence of multiple erasures, by employing local codes having minimum distance > 2. An upper bound on the minimum distance of such codes is presented, and codes that are optimal with respect to this bound are constructed. The second direction seeks to build codes that combine the advantages of both codes with locality and regenerating codes. These codes, termed here codes with local regeneration, are codes with locality over a vector alphabet in which the local codes themselves are regenerating codes. We derive an upper bound on the minimum distance of vector-alphabet codes with locality for the case when their constituent local codes have a certain uniform rank accumulation property. This property is possessed by both minimum storage regeneration (MSR) and minimum bandwidth regeneration (MBR) codes. We provide several constructions of codes with local regeneration which achieve this bound, where the local codes are either MSR or MBR codes. Also included in this paper are an upper bound on the minimum distance of a general vector code with locality and a performance comparison of various code constructions of fixed block length and minimum distance.
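For context, the classical upper bound for a scalar [n, k] code in which every symbol has locality r (local codes of minimum distance 2, i.e., single-erasure local repair) is the Singleton-like bound of Gopalan et al.:

```latex
d_{\min} \;\le\; n - k - \left\lceil \frac{k}{r} \right\rceil + 2 .
```

The bounds in this paper generalize this expression to local codes of minimum distance greater than 2 (handling multiple erasures) and to vector alphabets with regenerating local codes; the exact generalized forms are given in the paper itself.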