794 results for Thematic coding of visuals
Abstract:
This paper addresses the effectiveness of physical-layer network coding (PNC) for improving the throughput of multi-hop multicast in random wireless ad hoc networks (WAHNs). We prove that the per-session throughput order with PNC is tightly bounded as Θ((n√m R(n))^-1) if m = O(R^-2(n)), where n is the total number of nodes, R(n) is the communication range, and m is the number of destinations per multicast session. We also show that the per-session throughput order with PNC is tightly bounded as Θ(n^-1) when m = Ω(R^-2(n)). These results imply that PNC cannot improve the throughput order of multicast in random WAHNs, contrary to the intuition that PNC may improve the throughput order because it allows simultaneous signal access and combination.
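The two throughput regimes stated above can be sketched numerically. This is a hypothetical illustration with all constants omitted; the function name and the example parameter choices are mine, not the paper's:

```python
import math

def pnc_multicast_throughput(n, m, R):
    """Order-of-growth per-session multicast throughput with PNC,
    constants omitted (illustration only, not the paper's code)."""
    if m <= 1.0 / R**2:                   # regime m = O(R^-2(n))
        return 1.0 / (n * math.sqrt(m) * R)
    return 1.0 / n                        # regime m = Omega(R^-2(n))

# Example: n = 10,000 nodes, communication range near the connectivity
# threshold sqrt(log n / n), and m = 4 destinations per session.
n = 10_000
R = math.sqrt(math.log(n) / n)
print(pnc_multicast_throughput(n, 4, R))
```

With few destinations the bound scales with both n and R(n); once m grows past R^-2(n), the bound saturates at 1/n regardless of m.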
Abstract:
This paper addresses the effectiveness of physical-layer network coding (PNC) for improving the capacity of multi-hop multicast in random wireless ad hoc networks (WAHNs). While it can be shown that PNC yields a capacity gain, we prove that the per-session throughput capacity with PNC is Θ((nR(n))^-1), where n is the total number of nodes, R(n) is the communication range, and each multicast session consists of a constant number of sinks. The result implies that PNC cannot improve the capacity order of multicast in random WAHNs, contrary to the intuition that PNC may improve the capacity order because it allows simultaneous signal reception and combination. Copyright © 2010 ACM.
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
We present an information-theoretic analysis of the tradeoff between bit-error-rate improvement and data-rate loss when using skewed channel coding to suppress pattern-dependent errors in digital communications. Without loss of generality, we apply the developed general theory to the particular example of a high-speed fiber communication system with a strong patterning effect. © 2007 IEEE.
Abstract:
We have investigated how optimal coding for neural systems changes with the time available for decoding. Optimization was in terms of maximizing information transmission. We estimated the parameters for Poisson neurons that optimize Shannon transinformation under the assumption of rate coding. We observed a hierarchy of phase transitions from binary coding, for small decoding times, toward discrete (M-ary) coding with two, three, and more quantization levels for larger decoding times. We postulate that the presence of subpopulations with specific neural characteristics could be a signature of an optimal population coding scheme, and we use the mammalian auditory system as an example.
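A minimal sketch of the transinformation computation described above, assuming equiprobable firing-rate levels and Poisson spike counts; the helper names and numerical parameters are hypothetical, not the authors':

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def transinformation(rates, T, kmax=25):
    """Shannon transinformation (bits) between M equiprobable firing-rate
    levels and the Poisson spike count in a decoding window of length T."""
    M = len(rates)
    # Marginal distribution of the spike count k
    p_count = [sum(poisson_pmf(k, r * T) for r in rates) / M for k in range(kmax)]
    info = 0.0
    for r in rates:
        for k in range(kmax):
            p_k_given_r = poisson_pmf(k, r * T)
            if p_k_given_r > 0 and p_count[k] > 0:
                info += p_k_given_r / M * math.log2(p_k_given_r / p_count[k])
    return info

# Compare binary vs. ternary rate levels at a short decoding window
short = 0.01
binary = transinformation([0.0, 50.0], short)
ternary = transinformation([0.0, 25.0, 50.0], short)
```

Sweeping T and the number of levels in this manner is one way to expose the binary-to-M-ary transitions the abstract reports.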
Abstract:
The paper deals with methods for selecting, on the Internet, natural-language text fragments relevant to a given theme. Relevance is estimated on the basis of semantic analysis of sentences. Syntactic and semantic connections between words of the text are recognized by analyzing combinations of inflections and prepositions, without recourse to the categories and rules of traditional grammar. The selection of thematic information from the Internet is organized cyclically, with a new search key formed automatically at every cycle.
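The cyclic selection loop might be sketched as follows. This is a toy interpretation with an invented corpus and a simple word-overlap retrieval rule; the paper's actual semantic analysis of inflections and prepositions is not modeled here:

```python
from collections import Counter

# Toy corpus standing in for Internet documents (invented examples)
corpus = [
    "network coding improves throughput in wireless networks",
    "semantic analysis of sentences estimates relevancy to a theme",
    "wireless ad hoc networks use multi-hop routing",
    "relevancy of textual fragments is judged by word connections",
]

def retrieve(key, docs):
    """Return fragments sharing at least one word with the current key."""
    return [d for d in docs if key & set(d.split())]

def next_key(fragments, old_key, size=3):
    """Form the next search key automatically from the most frequent
    new words in the retrieved fragments."""
    counts = Counter(w for d in fragments for w in d.split() if w not in old_key)
    return {w for w, _ in counts.most_common(size)}

key = {"wireless"}
for _ in range(2):            # two search cycles with automatic re-keying
    fragments = retrieve(key, corpus)
    key = next_key(fragments, key)
```

Each cycle widens the thematic net: fragments found with the old key contribute the vocabulary of the new one.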
Abstract:
In this paper, we demonstrate through computer simulation and experiment a novel subcarrier coding scheme combined with pre-electrical dispersion compensation (pre-EDC) for fiber nonlinearity mitigation in coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. As the frequency spacing in CO-OFDM systems is usually small (tens of MHz), neighbouring subcarriers tend to experience correlated nonlinear distortions after propagation over a fiber link. As a consequence, nonlinearity mitigation can be achieved by encoding and processing neighbouring OFDM subcarriers simultaneously. Herein, we propose to adopt the concept of dual phase-conjugated twin waves for CO-OFDM transmission. Simulation and experimental results show that this simple technique combined with 50% pre-EDC can effectively offer up to 1.5 and 0.8 dB performance gains in CO-OFDM systems with BPSK and QPSK modulation formats, respectively.
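The cancellation idea behind phase-conjugated twin waves can be illustrated with a toy first-order distortion model. This is my simplification: it ignores the OFDM framing, pre-EDC, and actual fiber propagation, and only shows why anti-correlated distortions on a twin pair cancel under coherent superposition:

```python
import numpy as np

rng = np.random.default_rng(0)

# A symbol x and its conjugate twin are assumed to acquire anti-correlated
# first-order nonlinear distortions d and -conj(d) on neighbouring
# subcarriers (toy model, not the paper's system).
x = rng.choice([1.0 + 0j, -1.0 + 0j], size=1000)   # BPSK symbols
d = 0.3j * rng.standard_normal(1000)               # first-order distortion
twin1 = x + d                                      # subcarrier carrying x
twin2 = np.conj(x) - np.conj(d)                    # phase-conjugated twin
x_hat = 0.5 * (twin1 + np.conj(twin2))             # distortion cancels
```

Superposing one twin with the conjugate of the other leaves the symbol intact to first order, which is the mechanism the subcarrier coding scheme exploits.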
Abstract:
This poster presentation from the May 2015 Florida Library Association Conference, along with the Everglades Explorer discovery portal at http://ee.fiu.edu, demonstrates how traditional bibliographic and curatorial principles can be applied to: 1) selection, cross-walking and aggregation of metadata linking end-users to wide-spread digital resources from multiple silos; 2) harvesting of select PDFs, HTML and media for web archiving and access; 3) selection of CMS domains, sub-domains and folders for targeted searching using an API. Choosing content for this discovery portal is comparable to the past scholarly practice of creating and publishing subject bibliographies, except that metadata and data are housed in relational databases. This new and yet traditional capacity coincides with: the growth of bibliographic utilities (MarcEdit); the evolution of open-source discovery systems (eXtensible Catalog); the development of target-capable web crawling and archiving systems (Archive-It); and specialized search APIs (Google). At the same time, historical and technical changes, specifically the increasing fluidity and re-purposing of syndicated metadata, make this possible. It equally stems from the expansion of freely accessible digitized legacy and born-digital resources. Innovation principles helped frame the process by which the thematic Everglades discovery portal was created at Florida International University. The path to more effective searching and co-location of digital scientific, educational and historical material related to the Everglades is contextualized through five concepts found within Dyer and Christensen's "The Innovator's DNA: Mastering the Five Skills of Disruptive Innovators" (2011). The project also aligns with Ranganathan's Laws of Library Science, especially the 4th Law: "save the time of the user."
Abstract:
I thank George Pandarakalam for research assistance; Hans-Jörg Rheinberger for hosting my stay at the Max Planck Institute for History of Science, Berlin; and Sahotra Sarkar and referees of this journal for offering detailed comments. Funded by the Wellcome Trust (WT098764MA).
Abstract:
Erasure control coding has been exploited in communication networks with an aim to improve the end-to-end performance of data delivery across the network. To address the concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies, forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ te/n, and by k(1-p)/(n-te) when p > te/n, where te is the erasure control capability of the code. It also shows that the lower bound on the residual loss rate of such a network is (np-te)/(n-te) for te/n < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1-p, can be achieved, and the residual loss rate is lower bounded by (p+r-1)/r for (1-r) < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of a scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Results on comparisons between the two erasure control schemes exhibit their advantages as well as disadvantages in the role of delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study is provided to demonstrate how erasure control coding could be used to maximize the performance of practical systems. © 2010 IEEE.
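The stated bounds can be packaged as a small calculator. The helper names and the example (255, 223) code with te = 32 are my assumptions for illustration, not from the paper:

```python
def throughput_cap(n, k, te, p):
    """Throughput cap of an (n, k) forward erasure control code with
    erasure-control capability te at packet loss rate p, per the bounds
    stated above (illustrative helper)."""
    r = k / n
    return r if p <= te / n else k * (1 - p) / (n - te)

def residual_loss_lower_bound(n, te, p):
    """Lower bound on the residual loss rate, valid for te/n < p <= 1."""
    assert te / n < p <= 1
    return (n * p - te) / (n - te)

# Example: an assumed (255, 223) code correcting up to te = 32 erasures
print(throughput_cap(255, 223, 32, 0.05))          # p <= te/n: capped at r = k/n
print(throughput_cap(255, 223, 32, 0.20))          # p > te/n: loss-limited cap
print(residual_loss_lower_bound(255, 32, 0.20))
```

Below the code's erasure-control capability the cap is simply the code rate; above it, both throughput and residual loss degrade linearly in p.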
Abstract:
We quantify the error statistics and patterning effects in a 5×40 Gbit/s WDM RZ-DBPSK SMF/DCF fibre link using hybrid Raman/EDFA amplification. We propose an adaptive constrained coding for the suppression of errors due to patterning effects. It is established that this coding technique can greatly reduce the bit error rate (BER) even at large BER values (BER > 10^-1). The proposed approach can be used in combination with forward error correction (FEC) schemes to correct errors even when the raw channel BER is outside the FEC working range.
Abstract:
This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager, and increasing sensitivity in any one dimension can significantly compromise the others.
This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system.
Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire the extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging instead multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise performance, while maintaining or improving temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task with engineering approaches usually requires multiple detectors, advanced computational algorithms, or artificial-intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor can localize multiple speakers in both stationary and dynamic auditory scenes, and can distinguish mixed conversations from independent sources with a high audio-recognition rate.
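As a generic illustration of the compressive-sensing principle underlying both systems (not the dissertation's hardware or algorithms), a sparse signal can be recovered from a few coded random measurements by orthogonal matching pursuit:

```python
import numpy as np

rng = np.random.default_rng(1)

# A k-sparse signal measured through m < n random linear codes
n, m, k = 256, 64, 4
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)       # random sensing code
y = A @ x                                          # multiplexed measurement

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then least-squares re-fit on the
    growing support."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
```

The same measure-then-reconstruct structure, with hardware (coded apertures, metamaterials) playing the role of A, is what lets a 2D detector or a single acoustic sensor capture higher-dimensional scenes.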
Abstract:
A large proportion of the variation in traits between individuals can be attributed to variation in the nucleotide sequence of the genome. The most commonly studied traits in human genetics are related to disease and disease susceptibility. Although scientists have identified genetic causes for over 4,000 monogenic diseases, the underlying mechanisms of many highly prevalent multifactorial inheritance disorders such as diabetes, obesity, and cardiovascular disease remain largely unknown. Identifying genetic mechanisms for complex traits has been challenging because most of the variants are located outside of protein-coding regions, and determining the effects of such non-coding variants remains difficult. In this dissertation, I evaluate the hypothesis that such non-coding variants contribute to human traits and diseases by altering the regulation of genes rather than the sequence of those genes. I specifically focus on studies to determine the functional impacts of genetic variation associated with two related complex traits: gestational hyperglycemia and fetal adiposity. At the genomic locus associated with maternal hyperglycemia, we found that genetic variation in regulatory elements altered the expression of the HKDC1 gene. Furthermore, we demonstrated that HKDC1 phosphorylates glucose in vitro and in vivo, establishing HKDC1 as a fifth human hexokinase gene. At the fetal-adiposity-associated locus, we identified variants that likely alter VEPH1 expression in preadipocytes during differentiation. To make such studies of regulatory variation high-throughput and routine, we developed POP-STARR, a novel high-throughput reporter assay that can empirically measure the effects of regulatory variants directly from patient DNA. By combining targeted genome capture technologies with STARR-seq, we assayed thousands of haplotypes from 760 individuals in a single experiment.
We subsequently used POP-STARR to identify three key features of regulatory variants: that regulatory variants typically have weak effects on gene expression; that the effects of regulatory variants are often coordinated with respect to disease risk, suggesting a general mechanism by which the weak effects can together have phenotypic impact; and that nucleotide transversions have larger impacts on enhancer activity than transitions. Together, the findings presented here demonstrate successful strategies for determining the regulatory mechanisms underlying genetic associations with human traits and diseases, and the value of doing so for driving novel biological discovery.