973 results for Random Access


Relevance: 30.00%

Abstract:

This paper considers channel coding for the memoryless multiple-access channel with a given (possibly suboptimal) decoding rule. Non-asymptotic bounds on the error probability are given, and a cost-constrained random-coding ensemble is used to obtain an achievable error exponent. The achievable rate region recovered by the error exponent coincides with that of Lapidoth in the discrete memoryless case, and remains valid for more general alphabets. © 2013 IEEE.
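For background, the error exponent obtained from a random-coding ensemble takes, in the classical single-user setting, the Gallager form below; this is standard context only, not the paper's cost-constrained multiple-access bound, with Q the input distribution, W the channel, and ρ the tilting parameter:

```latex
% Classical single-user random-coding exponent (Gallager form); the paper's
% cost-constrained MAC bound generalizes this expression.
E_0(\rho, Q) = -\ln \sum_{y} \left( \sum_{x} Q(x)\, W(y \mid x)^{\frac{1}{1+\rho}} \right)^{1+\rho},
\qquad
E_r(R) = \max_{\rho \in [0,1]} \big[ E_0(\rho, Q) - \rho R \big],
```

and the average error probability of the ensemble is bounded as $P_e \le e^{-n E_r(R)}$ at block length $n$ and rate $R$.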

Relevance: 30.00%

Abstract:

This paper presents an achievable second-order rate region for the discrete memoryless multiple-access channel. The result is obtained using a random-coding ensemble in which each user's codebook contains codewords of a fixed composition. It is shown that this ensemble performs at least as well as i.i.d. random coding in terms of second-order asymptotics, and an example is given where a strict improvement is observed. © 2013 IEEE.
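For context, "second-order asymptotics" refers to the dispersion refinement of capacity; in the single-user case the best achievable rate at block length n and error probability ε behaves as below (standard background, not the paper's multiple-access region):

```latex
% Normal approximation: C is the capacity, V the channel dispersion,
% Q^{-1} the inverse Gaussian tail function.
R^*(n, \epsilon) = C - \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon) + O\!\left(\frac{\log n}{n}\right).
```

Fixed-composition ensembles can improve the achievable second-order (dispersion) term relative to i.i.d. random coding, which is the comparison the abstract describes.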

Relevance: 30.00%

Abstract:

World-Wide Web (WWW) services have grown to levels where significant delays are common. Techniques such as pre-fetching can tailor content delivery to users' needs and reduce their waiting times. However, pre-fetching is only effective if the right documents are identified and the user's next move is correctly predicted; otherwise it merely wastes bandwidth. It is therefore productive to determine whether a revisit will occur before pre-fetching starts. In this paper we develop two user models that help determine the user's next move: one uses a Random Walk approximation and the other is based on Digital Signal Processing techniques. We also give hints on how to use these models with a simple pre-fetching technique that we are developing.
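The idea of predicting a user's next move from past navigation can be sketched with a plain first-order Markov predictor trained on page-visit sessions. This is an illustrative baseline only, not the paper's Random Walk or Digital Signal Processing models; the session data and page names are made up:

```python
from collections import Counter, defaultdict

def train_transitions(sessions):
    """Count first-order transitions (current page -> next page) across sessions."""
    counts = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, page):
    """Return the most frequent successor of `page`, or None if `page` is unseen."""
    if page not in counts:
        return None
    return counts[page].most_common(1)[0][0]

# hypothetical navigation logs
sessions = [
    ["home", "news", "sports"],
    ["home", "news", "weather"],
    ["home", "news", "sports"],
]
model = train_transitions(sessions)
print(predict_next(model, "news"))  # -> sports (2 of 3 transitions)
```

A pre-fetcher would fetch the predicted page only when the predicted transition's empirical probability clears some threshold, so that low-confidence predictions do not waste bandwidth.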

Relevance: 30.00%

Abstract:

To serve asynchronous requests using multicast, two categories of techniques, stream merging and periodic broadcasting, have been proposed. For sequential streaming access, where requests run uninterrupted from the beginning to the end of an object, these techniques are highly scalable: the required server bandwidth for stream merging grows logarithmically with the request arrival rate, and the required server bandwidth for periodic broadcasting grows logarithmically with the inverse of the start-up delay. However, sequential access is inadequate for modeling the partial requests and client interactivity observed in various streaming access workloads. This paper analytically and experimentally studies the scalability of multicast delivery under a non-sequential access model where requests start at random points in the object. We show that the required server bandwidth for any protocol providing immediate service grows at least as fast as the square root of the request arrival rate, and that the required server bandwidth for any protocol providing delayed service grows linearly with the inverse of the start-up delay. We also investigate the impact of limited client receiving bandwidth on scalability, and we optimize practical protocols that provide immediate service to non-sequential requests. These protocols utilize limited client receiving bandwidth and are near-optimal in that the required server bandwidth is very close to its lower bound.
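The scalability gap the abstract describes, logarithmic growth for sequential access versus at least square-root growth for non-sequential access, can be made concrete numerically. The constants below are illustrative placeholders, not the paper's bounds:

```python
import math

def merging_bandwidth(rate, c=1.0):
    """Sequential access (stream merging): required server bandwidth grows
    logarithmically with the request arrival rate (c is illustrative)."""
    return c * math.log(1.0 + rate)

def nonsequential_lower_bound(rate, c=1.0):
    """Non-sequential access, immediate service: required server bandwidth
    grows at least as the square root of the request arrival rate."""
    return c * math.sqrt(rate)

# the gap widens quickly as the arrival rate grows
for rate in (10, 100, 1000, 10000):
    print(rate, round(merging_bandwidth(rate), 1),
          round(nonsequential_lower_bound(rate), 1))
```

At 10,000 requests per unit time the illustrative logarithmic cost is about 9.2 units while the square-root lower bound is 100, which is why random-access workloads are so much harder to serve scalably.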

Relevance: 30.00%

Abstract:

The power-law size distributions obtained experimentally for neuronal avalanches are important evidence of criticality in the brain. This evidence is supported by the fact that a critical branching process exhibits the same exponent, τ ≈ 3/2. Models at criticality have been employed to mimic avalanche propagation and explain the statistics observed experimentally. However, a crucial aspect of neuronal recordings has been almost completely neglected in the models: undersampling. While a typical multielectrode array records hundreds of neurons, tens of thousands of neurons can be found in the same area of neuronal tissue. Here we investigate the consequences of undersampling in models with three different topologies (two-dimensional, small-world and random network) and three different dynamical regimes (subcritical, critical and supercritical). We find that undersampling modifies avalanche size distributions, extinguishing the power laws observed in critical systems. Distributions from subcritical systems are also modified, but their undersampled shape is more similar to that of the fully sampled system. Undersampled supercritical systems can recover the general characteristics of the fully sampled version, provided that enough neurons are measured. Undersampling in two-dimensional and small-world networks leads to similar effects, while the random network is insensitive to sampling density owing to the lack of a well-defined neighborhood. We conjecture that neuronal avalanches recorded from local field potentials avoid undersampling effects due to the nature of this signal, but the same does not hold for spike avalanches. We conclude that undersampled branching-process-like models in these topologies fail to reproduce the statistics of spike avalanches.
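The critical branching process behind the τ ≈ 3/2 exponent is easy to simulate. The sketch below is a fully sampled, topology-free Galton-Watson process, so it illustrates only the criticality baseline, not the paper's networked models or the undersampling effect; parameters (10 contacts per unit, size cap) are illustrative:

```python
import random

def avalanche_size(sigma, rng, n_contacts=10, max_size=10**4):
    """One avalanche of a branching process: each active unit contacts
    n_contacts others, activating each with probability sigma / n_contacts,
    so sigma is the branching ratio (mean offspring per active unit)."""
    p = sigma / n_contacts
    active = size = 1
    while active and size < max_size:
        offspring = sum(1 for _ in range(active * n_contacts) if rng.random() < p)
        size += offspring
        active = offspring
    return size

rng = random.Random(42)
critical = [avalanche_size(1.0, rng) for _ in range(1000)]     # sigma = 1: power law
subcritical = [avalanche_size(0.5, rng) for _ in range(1000)]  # exponential cutoff
print(max(critical), max(subcritical))  # critical avalanches reach far larger sizes
```

At sigma = 1 the size distribution follows the ~s^(-3/2) power law (up to the cap), while at sigma = 0.5 sizes are exponentially cut off, which is the contrast the avalanche statistics are used to detect.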

Relevance: 30.00%

Abstract:

At present, predicting and characterizing the emission output of a diffusive random laser remains a challenge, despite the variety of materials investigated and the theoretical interpretations proposed to date. Here, a new mode selection method based on spatial filtering and ultrafast detection is presented, which makes it possible to separate individual lasing modes and follow their temporal evolution. In particular, the work explores the random laser behavior of a ground powder of an organic-inorganic hybrid compound based on Rhodamine B incorporated into a di-ureasil host. The experimental approach gives direct access to the mode structure and dynamics, shows clear modal relaxation oscillations, and illustrates the stochastic behavior of the lasing modes in this diffusive scattering system. The effect of the excitation energy on the modal density is also investigated. Finally, imaging measurements reveal the dominant role of diffusion over amplification processes in this kind of unconventional laser. (C) 2015 Optical Society of America
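The relaxation oscillations mentioned above are a generic feature of laser rate equations: the photon number overshoots and rings before settling to steady state. The sketch below integrates textbook dimensionless two-level rate equations with forward Euler; all parameter values are illustrative and have no connection to the paper's material or cavity:

```python
# Dimensionless two-level laser rate equations, forward-Euler integration.
# n: population inversion (in units of its threshold value), s: photon number,
# p: pump rate (p > 1 is above threshold), c: gain-to-cavity lifetime ratio.
def simulate(p=2.0, c=100.0, dt=1e-4, steps=200_000):
    n, s = 0.0, 1e-6          # tiny photon seed mimics spontaneous emission
    s_history = []
    for _ in range(steps):
        dn = (p - n - n * s) * dt
        ds = c * s * (n - 1.0) * dt
        n, s = n + dn, s + ds
        s_history.append(s)
    return s_history

history = simulate()
# large gain-switching spike, damped ringing, then steady state s = p - 1
print(max(history), history[-1])
```

The steady-state photon number for these equations is s = p - 1 and the ringing frequency scales as sqrt(c(p - 1)), which is why fast (ultrafast-detection) diagnostics are needed to resolve the oscillations experimentally.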

Relevance: 30.00%

Abstract:

It is common to hear the short, curious claim that «random is better than...». Why is randomness a good solution to a given engineering problem? There are many possible answers, all of them tied to the topic at hand. In this thesis I discuss two crucial topics that benefit from randomizing some of the waveforms involved in signal manipulation. In particular, the advantages are obtained by shaping the second-order statistics of antipodal sequences used in an intermediate signal processing stage. The first topic is in the area of analog-to-digital conversion and is named Compressive Sensing (CS). CS is a novel paradigm in signal processing that merges signal acquisition and compression, allowing a signal to be acquired directly in compressed form. After an ample description of the CS methodology and its related architectures, I present a new approach that achieves high compression by designing the second-order statistics of a set of additional waveforms involved in the signal acquisition/compression stage. The second topic is in the area of communication systems; in particular, I focus on ultra-wideband (UWB) systems. One option for producing and decoding UWB signals is direct-sequence spreading with multiple access based on code division (DS-CDMA). Within this methodology, I address the coexistence of a DS-CDMA system with a narrowband interferer by minimizing the joint effect of multiple access interference (MAI) and narrowband interference (NBI) on a simple matched filter receiver. I show that, when the statistical properties of the spreading sequences are suitably designed, performance improvements are possible with respect to a system exploiting chaos-based sequences that minimize MAI only.
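The DS-CDMA mechanism described above, antipodal spreading followed by a matched filter, can be sketched in a few lines. Note the hedge: this uses a purely random ±1 code, whereas the thesis's contribution is precisely the shaping of the sequences' second-order statistics, which this sketch does not attempt; the code length, bit count, and interferer are all illustrative:

```python
import math
import random

def spread(bits, code):
    """Direct-sequence spreading: each antipodal data bit multiplies the
    whole antipodal spreading code (one code period per bit)."""
    return [b * c for b in bits for c in code]

def matched_filter(chips, code):
    """Despreading: correlate each code-length block with the code and
    decide the bit from the sign of the correlation."""
    n = len(code)
    return [1 if sum(x * c for x, c in zip(chips[i:i + n], code)) >= 0 else -1
            for i in range(0, len(chips), n)]

rng = random.Random(0)
code = [rng.choice((-1, 1)) for _ in range(31)]   # purely random antipodal code
bits = [rng.choice((-1, 1)) for _ in range(64)]
chips = spread(bits, code)
# narrowband interferer: a slow sinusoid as strong as the chips themselves
rx = [x + math.cos(0.05 * k) for k, x in enumerate(chips)]
decoded = matched_filter(rx, code)
print(decoded == bits)  # processing gain suppresses the tone
```

The matched filter's correlation concentrates the signal (gain of 31, the code length) while the narrowband tone correlates weakly with the pseudo-random code, which is the processing-gain effect the thesis then optimizes by sequence design.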

Relevance: 30.00%

Abstract:

The purpose of this culminating experience was to investigate the relationships between healthcare utilization, insurance coverage, and socioeconomic characteristics of children with asthma along the Texas-Mexico Border. A secondary data analysis was conducted on cross-sectional data from the Texas Child Asthma Call-back Survey, a follow-up to the random-digit-dialed Behavioral Risk Factor Surveillance System (BRFSS) conducted between 2006 and 2009 (n = 556 adults living in households with a child with asthma). The proportion of Hispanic children with asthma in Border areas of Texas was more than twice that in non-Border areas (84.8% vs. 28.8%). Parents in Border areas were less likely to have their own health insurance (OR = 0.251, 95% C.I. = 0.117-0.540) and less likely to complete the survey in English than in Spanish (OR = 0.251, 95% C.I. = 0.117-0.540) than parents in non-Border areas. No significant socioeconomic or healthcare utilization differences were noted between Hispanic children living in Border areas and Hispanic children living in non-Border areas. Children with asthma along the Texas-Mexico Border, regardless of ethnicity and language, have insurance coverage rates, reported cost barriers to care, symptom management, and medication usage patterns similar to those in non-Border areas. Compared with English speakers, Spanish-speaking parents in Texas as a whole are far less likely to be taught what to do during an asthma attack (50.2% vs. 78.6%). Language preference, rather than ethnicity or geographic residence, played the larger role in childhood asthma-related health disparities for children in Texas. Spanish-speaking parents are less likely to receive adequate asthma self-management education. Investigating the effects of Hispanic acculturation rates and incongruent parent-child health insurance coverage may provide better insight into the health disparities of children along the Texas-Mexico Border.
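The odds ratios reported above come from standard 2x2-table analysis. The sketch below computes an odds ratio with a Wald confidence interval; the counts are hypothetical and for illustration only, NOT the survey's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald confidence interval for a 2x2 table:
                 outcome   no outcome
    exposed         a          b
    unexposed       c          d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(20, 80, 50, 50)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 0.25 0.13 0.47
```

An odds ratio below 1 with a confidence interval that excludes 1, as in the survey's OR = 0.251 (0.117-0.540), indicates the exposed group has significantly lower odds of the outcome.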

Relevance: 30.00%

Abstract:

Protein folding occurs on a time scale ranging from milliseconds to minutes for a majority of proteins. Computer simulation of protein folding, from a random configuration to the native structure, is nontrivial owing to the large disparity between the simulation and folding time scales. As an effort to overcome this limitation, simple models with idealized protein subdomains, e.g., the diffusion–collision model of Karplus and Weaver, have gained some popularity. We present here new results for the folding of a four-helix bundle within the framework of the diffusion–collision model. Even with such simplifying assumptions, a direct application of standard Brownian dynamics methods would consume 10,000 processor-years on current supercomputers. We circumvent this difficulty by invoking a special Brownian dynamics simulation. The method features the calculation of the mean passage time of an event from the flux overpopulation method and the sampling of events that lead to productive collisions even if their probability is extremely small (because of large free-energy barriers that separate them from the higher probability events). Using these developments, we demonstrate that a coarse-grained model of the four-helix bundle can be simulated in several days on current supercomputers. Furthermore, such simulations yield folding times that are in the range of time scales observed in experiments.
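The building block of such simulations, Brownian dynamics with a first-passage-time readout, can be sketched in one dimension. This is a plain Euler-Maruyama sketch with illustrative parameters (drift v, diffusion D, absorbing barrier), not the flux-overpopulation or rare-event sampling machinery the paper develops:

```python
import math
import random

def first_passage_time(rng, v=1.0, D=0.1, barrier=1.0, dt=1e-3, t_max=50.0):
    """Euler-Maruyama integration of overdamped 1D Brownian motion with
    constant drift v toward an absorbing barrier; returns the first time
    the trajectory crosses the barrier (theory: mean = barrier / v)."""
    x, t = 0.0, 0.0
    noise = math.sqrt(2.0 * D * dt)   # noise amplitude per step
    while x < barrier and t < t_max:
        x += v * dt + noise * rng.gauss(0.0, 1.0)
        t += dt
    return t

rng = random.Random(1)
times = [first_passage_time(rng) for _ in range(500)]
print(sum(times) / len(times))  # close to barrier / v = 1.0
```

When the barrier-crossing events of interest are rare (large free-energy barriers), this direct approach needs astronomically many trajectories, which is exactly the limitation the paper's flux-overpopulation and biased-sampling techniques are designed to overcome.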