882 results for Interval coding
Abstract:
Recent studies have shown the importance of beat-by-beat changes in heart rate influenced by the autonomic nervous system (ANS), i.e. heart rate variability (HRV). The purpose of this study was to examine the lasting effects of hypoxic exercise on HRV and its influence on substrate usage; results from this study could lead to an increased understanding of this topic. Eight active healthy males (age: 31±11 years; height: 180±7 cm; weight: 83±8 kg; VO₂max (maximal oxygen consumption): 4.4±0.6 L•min⁻¹) underwent normoxic and hypoxic (FᵢO₂ = 0.15) conditions during high-intensity interval training (HIIT) cycling (high intervals at 70% and rest intervals at 35% of peak power output). Cycling intensity was determined by a peak power output cycling test. Each experimental session consisted of a basal metabolic rate determination, up to 45 minutes of HIIT cycling, and three 30-minute post-exercise metabolic rate measurements (spanning 3 hours and 15 minutes after exercise). During exercise, RPE was higher (p<0.01) and lactate (LAC) increased (p=0.001) at each time point in hypoxia, with no change in normoxia. After hypoxic exercise, the SNS/PNS ratio (overall ANS activity) was significantly higher (p<0.01) and decreased significantly over time in both conditions (p<0.01). In addition, a significant interaction between time and condition (p<0.02) showed a decrease in LAC concentration over time after hypoxic exercise. The findings showed that a single bout of hypoxic exercise alters post-exercise ANS activity and shifts substrate partitioning from glycolytic to lipolytic energy production. The significant decrease in LAC concentration after hypoxic exercise supports the notion that hypoxic HIIT induces greater muscle glycogen depletion, leading to increased fat oxidation to sustain glycogenesis and gluconeogenesis and maintain blood glucose levels during recovery.
Abstract:
The last glacial millennial climatic events (i.e. Dansgaard-Oeschger and Heinrich events) constitute outstanding case studies of coupled atmosphere-ocean-cryosphere interactions. Here, we investigate the evolution of sea-surface and subsurface conditions, in terms of temperature, salinity and sea ice cover, at very high resolution (mean resolution between 55 and 155 years depending on the proxy) during the 35-41 ka cal BP interval, covering three Dansgaard-Oeschger cycles and including Heinrich event 4, in a new, previously unpublished marine record, the MD99-2285 core (62.69°N; -3.57°E). We use a large panel of complementary tools, notably including dinocyst-derived quantifications of sea-ice cover duration. The high temporal resolution and multiproxy approach of this work allow us to identify the sequence of processes and to assess ocean-cryosphere interactions occurring during these periodic ice-sheet collapse events. Our results reveal a paradoxical hydrological scheme where (i) Greenland interstadials are marked by a homogeneous and cold upper water column, with intensive winter sea ice formation and summer sea ice melting, and (ii) Greenland and Heinrich stadials are characterized by a very warm and low-salinity surface layer with iceberg calving and reduced sea ice formation, separated by a strong halocline from a less warm and saltier subsurface layer. Our work also suggests that this stadial surface/subsurface warming started before massive iceberg release, in relation to warm Atlantic water advection. These findings thus support the theory that upper ocean warming might have triggered European ice-sheet destabilization. Moreover, previous paleoceanographic studies conducted along the Atlantic inflow pathways close to the edges of the European ice-sheets suggest that such a feature might have occurred across this whole area. Nonetheless, additional high-resolution paleoreconstructions are required to confirm such a regional scheme.
Abstract:
I thank George Pandarakalam for research assistance; Hans-Jörg Rheinberger for hosting my stay at the Max Planck Institute for History of Science, Berlin; and Sahotra Sarkar and referees of this journal for offering detailed comments. Funded by the Wellcome Trust (WT098764MA).
Abstract:
Acknowledgments We thank A. B. Duthie, S. Losdat, and M. E. Wolak for useful discussions and comments and the European Research Council for funding. We also thank the two reviewers, whose suggestions greatly improved the manuscript.
Abstract:
The standard difference model of two-alternative forced-choice (2AFC) tasks implies that performance should be the same when the target is presented in the first or the second interval. Empirical data often show “interval bias” in that percentage correct differs significantly when the signal is presented in the first or the second interval. We present an extension of the standard difference model that accounts for interval bias by incorporating an indifference zone around the null value of the decision variable. Analytical predictions are derived which reveal how interval bias may occur when data generated by the guessing model are analyzed as prescribed by the standard difference model. Parameter estimation methods and goodness-of-fit testing approaches for the guessing model are also developed and presented. A simulation study is included whose results show that the parameters of the guessing model can be estimated accurately. Finally, the guessing model is tested empirically in a 2AFC detection procedure in which guesses were explicitly recorded. The results support the guessing model and indicate that interval bias is not observed when guesses are separated out.
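As a purely illustrative companion to this abstract, the following Monte Carlo sketch shows how interval bias can emerge when data generated by a guessing process are scored as percentage correct per interval. It assumes equal-variance Gaussian internal responses, an indifference-zone half-width `delta`, and a guessing probability `g` toward the first interval; these names and values are assumptions for illustration, not the paper's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_2afc(n_trials=200_000, d_prime=1.0, delta=0.5, g=0.7):
    """Simulate a 2AFC detection task under a guessing-model variant.

    The decision variable is the difference between the two interval
    observations. When its magnitude falls inside the indifference zone
    (|diff| < delta), the observer guesses interval 1 with probability g.
    All parameter values here are illustrative.
    """
    # Target appears in interval 1 on half of the trials, interval 2 otherwise.
    target_first = rng.random(n_trials) < 0.5
    x1 = rng.normal(d_prime * target_first, 1.0)
    x2 = rng.normal(d_prime * ~target_first, 1.0)
    diff = x1 - x2

    choose_first = diff > 0
    guess = np.abs(diff) < delta                      # indifference zone
    choose_first[guess] = rng.random(guess.sum()) < g  # biased guessing

    correct = choose_first == target_first
    return correct[target_first].mean(), correct[~target_first].mean()

pc_first, pc_second = simulate_2afc()
print(f"P(correct | target in 1st interval): {pc_first:.3f}")
print(f"P(correct | target in 2nd interval): {pc_second:.3f}")
```

With `g` different from 0.5 the two percentages diverge (interval bias); setting `g = 0.5` or separating out the guessed trials removes the asymmetry, consistent with the abstract's conclusion.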
Abstract:
This dissertation presents a study and experimental research on asymmetric coding of stereoscopic video. A review of 3D technologies, video formats and coding is first presented, and then particular emphasis is given to asymmetric coding of 3D content and to subjective performance evaluation methods for asymmetric coding. The research objective was defined as an extension of the current concept of asymmetric coding for stereo video. To achieve this objective, the first step consists in defining regions in the spatial dimension of the auxiliary view with different perceptual relevance within the stereo pair, identified by a binary mask. These regions are then encoded with better quality (lower quantisation) for the most relevant ones and worse quality (higher quantisation) for those with lower perceptual relevance. The estimation of the relevance of a given region is based on a measure of disparity given by the absolute difference between views. To allow encoding of a stereo sequence using this method, a reference H.264/MVC encoder (JM) was modified to accept additional configuration parameters and inputs; the final encoder remains standard compliant. To show the viability of the method, subjective assessment tests were performed over a wide range of objective qualities of the auxiliary view. The results of these tests support three main conclusions. First, the proposed method can be more efficient than traditional asymmetric coding when encoding stereo video at higher qualities/rates. Second, the method can be used to extend the threshold at which uniform asymmetric coding methods start to affect the subjective quality perceived by observers. Finally, the issue of eye dominance is addressed: results from stereo still images displayed over a short period of time showed that it has little or no impact on the proposed method.
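The region-selection step described above can be sketched as follows. This is a minimal numpy illustration, assuming grayscale left/right views of equal size, a plain absolute-difference disparity proxy, macroblock-sized averaging, and a user-chosen threshold and QP offset; none of these choices are taken from the dissertation's actual JM configuration.

```python
import numpy as np

def relevance_mask(left, right, block=16, thresh=None):
    """Binary mask of perceptually relevant macroblocks in the auxiliary view.

    Relevance is approximated by the mean absolute difference between the
    two views inside each block; blocks above the threshold are flagged as
    relevant. Block size, threshold, and the use of a plain absolute
    difference are illustrative choices.
    """
    diff = np.abs(left.astype(np.float32) - right.astype(np.float32))
    h, w = diff.shape
    hb, wb = h // block, w // block
    block_means = diff[:hb * block, :wb * block] \
        .reshape(hb, block, wb, block).mean(axis=(1, 3))
    if thresh is None:
        thresh = block_means.mean()          # crude global threshold
    return block_means > thresh

def qp_map(mask, qp_base=32, qp_offset=6):
    """Assign a lower QP (better quality) to relevant blocks,
    a higher QP (worse quality) to the rest."""
    return np.where(mask, qp_base - qp_offset, qp_base + qp_offset)
```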
Abstract:
This paper looks at the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP). FEC can be used to reduce the number of retransmissions that would usually result from lost packets, so the requirement for TCP to deal with losses is greatly reduced. There is, however, a side-effect to using FEC as a countermeasure to packet loss: an additional bandwidth requirement. For applications such as real-time video conferencing, delay must be kept to a minimum and retransmissions are certainly not desirable; a balance must therefore be struck between additional bandwidth and the delay caused by retransmissions. Our results show that, when packet loss occurs, data throughput can be significantly improved by combining FEC with TCP, compared to relying solely on TCP retransmissions. Furthermore, a case study applies the result to demonstrate the achievable improvements in the quality of streaming video perceived by end users.
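To make the bandwidth/retransmission tradeoff concrete, the sketch below compares the redundancy overhead of a systematic block of k data plus m repair packets with the fraction of blocks that would still need TCP repair, assuming independent packet losses. The block parameters and loss rate are illustrative assumptions, not values from the paper's experiments.

```python
from math import comb

def block_decode_prob(k, m, p):
    """Probability that a block of k data + m repair packets is decodable
    (at most m losses out of n = k + m), assuming i.i.d. loss rate p."""
    n = k + m
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1))

def compare(k=20, m=4, p=0.02):
    overhead = m / k                             # extra bandwidth spent on FEC
    residual = 1 - block_decode_prob(k, m, p)    # blocks still needing TCP repair
    no_fec = 1 - (1 - p)**k                      # blocks needing repair without FEC
    print(f"FEC overhead: {overhead:.1%}, "
          f"blocks needing retransmission: {residual:.3%} (with FEC) "
          f"vs {no_fec:.1%} (TCP only)")

compare()
```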
Abstract:
Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies, forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ te/n and by k(1-p)/(n-te) when p > te/n, where te is the erasure control capability of the code. It also shows that the lower bound on the residual loss rate of such a network is (np-te)/(n-te) for te/n < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1-p, can be achieved and the residual loss rate is lower bounded by (p+r-1)/r for (1-r) < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of the different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of a scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Comparisons between the two erasure control schemes highlight their respective advantages and disadvantages for delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study demonstrates how erasure control coding could be used to maximize the performance of practical systems.
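The throughput and residual-loss bounds quoted in this abstract can be evaluated directly. The helper below implements those formulas; the (n, k, te) code parameters and loss rates are chosen purely for illustration (an MDS code with te = n - k), not taken from the paper.

```python
def throughput_cap(n, k, te, p):
    """Upper bound on throughput of an (n, k) forward erasure control code
    with erasure control capability te, at packet loss rate p."""
    return k / n if p <= te / n else k * (1 - p) / (n - te)

def residual_loss_bound(n, k, te, p):
    """Lower bound on the residual loss rate, nonzero only for p > te/n."""
    return max(0.0, (n * p - te) / (n - te))

# Illustrative parameters: an MDS code (e.g. Reed-Solomon-like), te = n - k.
n, k = 255, 223
te = n - k
r = k / n
for p in (0.05, 0.15, 0.30):
    print(f"p={p:.2f}  throughput <= {throughput_cap(n, k, te, p):.3f}  "
          f"residual >= {residual_loss_bound(n, k, te, p):.3f}  "
          f"MDS bound (p+r-1)/r = {max(0.0, (p + r - 1) / r):.3f}")
```

For the MDS case the two residual-loss expressions coincide, since with te = n - k we have (np - te)/(n - te) = (p + r - 1)/r, and the throughput cap for p > te/n equals 1 - p, the erasure-channel capacity.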
Abstract:
We quantify the error statistics and patterning effects in a 5 × 40 Gbit/s WDM RZ-DBPSK SMF/DCF fibre link using hybrid Raman/EDFA amplification. We propose an adaptive constrained coding scheme for the suppression of errors due to patterning effects. It is established that this coding technique can greatly reduce the bit error rate (BER) even for large BER values (BER > 10⁻¹). The proposed approach can be used in combination with forward error correction (FEC) schemes to correct errors even when the real channel BER is outside the FEC workspace.
Abstract:
The roles of long non-coding RNAs (lncRNAs) in regulating cancer and stem cells are being increasingly appreciated. Their diverse mechanisms provide the regulatory network with a bigger repertoire to increase complexity. Here we report a novel lncRNA, Lnc34a, that is enriched in colon cancer stem cells (CCSCs) and initiates asymmetric division by directly targeting the microRNA miR-34a to cause its spatial imbalance. Lnc34a recruits Dnmt3a via PHB2 and HDAC1 to methylate and deacetylate the miR-34a promoter simultaneously, hence epigenetically silencing miR-34a expression independently of its upstream regulator, p53. Lnc34a levels affect CCSC self-renewal and colorectal cancer (CRC) growth in xenograft models. Lnc34a is upregulated in late-stage CRCs, contributing to epigenetic miR-34a silencing and CRC proliferation. The fact that a lncRNA targets a microRNA highlights the regulatory complexity of non-coding RNAs (ncRNAs), which occupy the bulk of the genome.
Abstract:
This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixel count, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager, and increasing sensitivity in any one dimension can significantly compromise the others.
This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing-process modeling, and reconstruction algorithm of each sensing system.
Optical multidimensional imaging measures three or more dimensions of information in the optical signal. Traditional multidimensional imagers acquire the extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task by engineering means usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localization of multiple speakers in both stationary and dynamic auditory scenes, and distinguishes mixed conversations from independent sources with a high audio recognition rate.
Abstract:
A large proportion of the variation in traits between individuals can be attributed to variation in the nucleotide sequence of the genome. The most commonly studied traits in human genetics are related to disease and disease susceptibility. Although scientists have identified genetic causes for over 4,000 monogenic diseases, the underlying mechanisms of many highly prevalent multifactorial inheritance disorders such as diabetes, obesity, and cardiovascular disease remain largely unknown. Identifying genetic mechanisms for complex traits has been challenging because most of the variants are located outside of protein-coding regions, and determining the effects of such non-coding variants remains difficult. In this dissertation, I evaluate the hypothesis that such non-coding variants contribute to human traits and diseases by altering the regulation of genes rather than the sequence of those genes. I specifically focus on studies to determine the functional impacts of genetic variation associated with two related complex traits: gestational hyperglycemia and fetal adiposity. At the genomic locus associated with maternal hyperglycemia, we found that genetic variation in regulatory elements altered the expression of the HKDC1 gene. Furthermore, we showed that HKDC1 phosphorylates glucose in vitro and in vivo, demonstrating that HKDC1 is a fifth human hexokinase gene. At the fetal-adiposity-associated locus, we identified variants that likely alter VEPH1 expression in preadipocytes during differentiation. To make such studies of regulatory variation high-throughput and routine, we developed POP-STARR, a novel high-throughput reporter assay that can empirically measure the effects of regulatory variants directly from patient DNA. By combining targeted genome capture technologies with STARR-seq, we assayed thousands of haplotypes from 760 individuals in a single experiment. We subsequently used POP-STARR to identify three key features of regulatory variants: regulatory variants typically have weak effects on gene expression; the effects of regulatory variants are often coordinated with respect to disease risk, suggesting a general mechanism by which weak effects can together have phenotypic impact; and nucleotide transversions have larger impacts on enhancer activity than transitions. Together, the findings presented here demonstrate successful strategies for determining the regulatory mechanisms underlying genetic associations with human traits and diseases, and the value of doing so for driving novel biological discovery.
Abstract:
'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.
This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that use temporal coding to improve imaging system performance and enhance our awareness in a variety of dynamic applications.
Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to capture the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the frame rate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which belongs to a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level.
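A minimal sketch of the forward model behind this kind of coded-aperture temporal compression is shown below, assuming a binary mask that is shifted by one pixel per sub-frame and summed onto a single 2D measurement; the mask pattern, shift schedule, and array sizes are illustrative assumptions rather than the CACTI hardware's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def coded_snapshot(video, mask):
    """Compress a (T, H, W) video volume into one (H, W) snapshot.

    Each temporal slice is modulated by a shifted copy of the coded aperture
    and the modulated slices are summed, which is the essence of embedding
    the temporal dimension into a single spatial measurement. Per-frame
    one-pixel shifts stand in for the physical translation of the mask.
    """
    T, H, W = video.shape
    snapshot = np.zeros((H, W), dtype=np.float64)
    for t in range(T):
        shifted = np.roll(mask, shift=t, axis=0)   # emulate mask translation
        snapshot += shifted * video[t]
    return snapshot

# Illustrative sizes: 8 temporal channels folded into one 64x64 measurement.
video = rng.random((8, 64, 64))
mask = (rng.random((64, 64)) > 0.5).astype(np.float64)
y = coded_snapshot(video, mask)
print(y.shape)   # (64, 64): 8 frames encoded in a single coded snapshot
```

Reconstruction would invert this measurement with a sparsity or total-variation prior, which this forward-model sketch deliberately omits.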
Since video cameras nominally integrate over the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,λ) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images motivates exploration of other information within that video, namely focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works show how temporal coding extends to improve sensing performance in these other dimensions.
Geometrical optics-related tradeoffs, such as the classic challenge of combining a wide field of view with high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research and engineering challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and high resolutions. The fourth chapter presents advances in focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke.
Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding, namely physical translation of the image plane from its nominal focal position, and show this technique's capability to generate arbitrary point spread functions.