927 results for Error correction coding
Abstract:
We consider adaptive sequential lossy coding of bounded individual sequences when the performance is measured by the sequentially accumulated mean squared distortion. The encoder and the decoder are connected via a noiseless channel of capacity $R$ and both are assumed to have zero delay. No probabilistic assumptions are made on how the sequence to be encoded is generated. For any bounded sequence of length $n$, the distortion redundancy is defined as the normalized cumulative distortion of the sequential scheme minus the normalized cumulative distortion of the best scalar quantizer of rate $R$ which is matched to this particular sequence. We demonstrate the existence of a zero-delay sequential scheme which uses common randomization in the encoder and the decoder such that the normalized maximum distortion redundancy converges to zero at a rate $n^{-1/5}\log n$ as the length of the encoded sequence $n$ increases without bound.
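The reference benchmark in this abstract, the best rate-$R$ scalar quantizer's normalized cumulative distortion, can be sketched as follows. This is a minimal illustration with a uniform quantizer on $[0, 1]$, not the paper's construction; all names are ours.

```python
def quantize(x, rate):
    """Quantize x in [0, 1] with a uniform scalar quantizer of 2**rate levels."""
    levels = 2 ** rate
    # Reproduction points sit at cell midpoints: (i + 0.5) / levels
    cell = min(int(x * levels), levels - 1)
    return (cell + 0.5) / levels

def normalized_cumulative_distortion(seq, rate):
    """(1/n) * sum of squared errors accumulated over the sequence."""
    return sum((x - quantize(x, rate)) ** 2 for x in seq) / len(seq)
```

The distortion redundancy of a sequential scheme on a given sequence would then be its own normalized cumulative distortion minus this quantity, minimized over quantizers matched to the sequence.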
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
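The flipped-label equivalence mentioned at the end of this abstract can be sketched for a toy hypothesis class. The identity used below — the maximal discrepancy between the two halves equals one minus twice the minimal empirical error on the sample with first-half labels flipped — holds for 0-1 loss over a class closed under label negation; the one-dimensional threshold class and the brute-force ERM oracle are our own illustrative choices, not the paper's.

```python
def stump_min_error(points):
    """ERM oracle for threshold classifiers h(x) = sign * (x > t ? +1 : -1):
    minimal 0-1 error over all thresholds and signs on (x, y) pairs, y in {-1, +1}."""
    n = len(points)
    best = n
    thresholds = [x for x, _ in points] + [min(x for x, _ in points) - 1.0]
    for t in thresholds:
        for sign in (+1, -1):
            err = sum(1 for x, y in points if sign * (1 if x > t else -1) != y)
            best = min(best, err)
    return best / n

def maximal_discrepancy(points):
    """Max over stumps of (error on first half - error on second half),
    computed via ERM on the sample with first-half labels flipped."""
    m = len(points) // 2
    flipped = [(x, -y if i < m else y) for i, (x, y) in enumerate(points)]
    return 1.0 - 2.0 * stump_min_error(flipped)
```

Averaging this quantity over random half-splits gives a Monte Carlo estimate of the expected maximal discrepancy used as a penalty.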
Abstract:
Summary points:
- The bias introduced by random measurement error will be different depending on whether the error is in an exposure variable (risk factor) or an outcome variable (disease)
- Random measurement error in an exposure variable will bias the estimates of regression slope coefficients towards the null
- Random measurement error in an outcome variable will instead increase the standard error of the estimates and widen the corresponding confidence intervals, making results less likely to be statistically significant
- Increasing sample size will help minimise the impact of measurement error in an outcome variable but will only make estimates more precisely wrong when the error is in an exposure variable
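The two error structures contrasted in the summary points can be demonstrated with a small simulation. This is an illustration with made-up parameter values: classical Gaussian error of unit variance is added either to the exposure x or to the outcome y, with a true slope of 2.

```python
import random

def ols_slope(xs, ys):
    """Ordinary least squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def simulate(n=20000, beta=2.0, seed=1):
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]
    y = [beta * xi + rng.gauss(0, 1) for xi in x]
    x_noisy = [xi + rng.gauss(0, 1) for xi in x]   # error in the exposure
    y_noisy = [yi + rng.gauss(0, 1) for yi in y]   # error in the outcome
    return ols_slope(x_noisy, y), ols_slope(x, y_noisy)

slope_x_err, slope_y_err = simulate()
# With unit-variance x and unit-variance error, the exposure-error slope is
# attenuated toward beta * 1/(1 + 1) = 1, while the outcome-error slope
# remains near 2 but with a larger standard error.
```

This is exactly the contrast in the summary points: error in x attenuates the slope towards the null, error in y leaves it unbiased but noisier.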
Abstract:
The spatial, spectral, and temporal resolutions of remote sensing images, acquired over a reasonably sized image extent, result in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is very attractive for monitoring, management, and scientific activities. With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms, at all levels of integration and programming, to achieve higher performance and energy efficiency. Since geometric calibration is one of the most time-consuming processes when using remote sensing images, the aim of this work is to accelerate it by taking advantage of new computing architectures and technologies, especially by exploiting computation on shared-memory multi-threading hardware. A parallel implementation of the most time-consuming process in remote sensing geometric correction has been developed using OpenMP directives. This work compares the performance of the original serial binary against the parallelized implementation on several modern multi-threaded CPU architectures, discussing how to find the optimum hardware for a cost-effective execution.
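OpenMP itself is a C/C++/Fortran directive model, but the shared-memory, statically scheduled tile-parallel pattern the abstract describes can be sketched language-neutrally. Everything below is illustrative: the affine "correction" stands in for the real geometric resampling, and the function names are ours.

```python
from concurrent.futures import ThreadPoolExecutor

def correct_tile(tile_rows):
    """Toy per-tile geometric correction: apply an affine shift to coordinates."""
    return [(x + 0.5, y - 0.5) for x, y in tile_rows]

def correct_image(rows, n_workers=4):
    """Split the image into interleaved row tiles and process them in parallel,
    analogous to an OpenMP parallel-for with static scheduling."""
    tiles = [rows[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(correct_tile, tiles))
    # Reassemble in the original interleaved order: tile w, element j -> w + j*n
    out = [None] * len(rows)
    for w, tile in enumerate(results):
        for j, val in enumerate(tile):
            out[w + j * n_workers] = val
    return out
```

Note that for CPU-bound pure-Python work a process pool would be needed to get an actual speedup; OpenMP threads in compiled C code, as used in the paper, avoid that limitation.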
Abstract:
Detecting local differences between groups of connectomes is a great challenge in neuroimaging, because of the large number of tests that have to be performed and the ensuing multiplicity correction. Any available information should be exploited to increase the power of detecting true between-group effects. We present an adaptive strategy that exploits the data structure and prior information concerning positive dependence between nodes and connections, without relying on strong assumptions. As a first step, we decompose the brain network, i.e., the connectome, into subnetworks and apply a screening at the subnetwork level. The subnetworks are defined either according to prior knowledge or by applying a data-driven algorithm. Given the results of the screening step, a filtering is performed to seek real differences at the node/connection level. The proposed strategy can be used to strongly control either the family-wise error rate or the false discovery rate. We show by means of different simulations the benefit of the proposed strategy, and we present a real application comparing the connectomes of preschool children and adolescents.
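The screen-then-filter idea can be sketched in a few lines. This is a simplified illustration, not the authors' exact procedure or error-rate guarantee: subnetworks are screened with a Bonferroni-adjusted minimum p-value, and individual connections are then tested only inside the survivors, with the correction restricted to them.

```python
def screen_then_filter(subnetworks, alpha=0.05):
    """subnetworks: dict name -> list of per-connection p-values (toy input).
    Returns, per surviving subnetwork, the indices of significant connections."""
    k = len(subnetworks)
    # Screening: Bonferroni on the subnetwork's minimum p-value
    # (over its own size, then over the number of subnetworks).
    selected = {
        name: ps for name, ps in subnetworks.items()
        if min(ps) * len(ps) * k <= alpha
    }
    # Filtering: Bonferroni only over the connections that survived screening,
    # a much smaller family than the whole connectome.
    m = sum(len(ps) for ps in selected.values())
    return {
        name: [i for i, p in enumerate(ps) if p * m <= alpha]
        for name, ps in selected.items()
    }
```

The power gain comes from the second stage correcting over far fewer tests than a single-stage correction over every connection.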
Abstract:
Gene expression changes may underlie much of phenotypic evolution. The development of high-throughput RNA sequencing protocols has opened the door to unprecedented large-scale and cross-species transcriptome comparisons by allowing accurate and sensitive assessments of transcript sequences and expression levels. Here, we review the initial wave of the new generation of comparative transcriptomic studies in mammals and vertebrate outgroup species in the context of earlier work. Together with various large-scale genomic and epigenomic data, these studies have unveiled commonalities and differences in the dynamics of gene expression evolution for various types of coding and non-coding genes across mammalian lineages, organs, developmental stages, chromosomes and sexes. They have also provided intriguing new clues to the regulatory basis and phenotypic implications of evolutionary gene expression changes.
Abstract:
The turbot (Scophthalmus maximus) is a commercially valuable flatfish and one of the most promising aquaculture species in Europe. Two transcriptome 454-pyrosequencing runs were used in order to detect Single Nucleotide Polymorphisms (SNPs) in genes related to immune response and gonad differentiation. A total of 866 true SNPs were detected in 140 different contigs representing 262,093 bp as a whole. Only one true SNP was analyzed in each contig. One hundred and thirteen SNPs out of the 140 analyzed were feasible (genotyped), while Ш were polymorphic in a wild population. The transition/transversion ratio (1.354) was similar to that observed in other fish studies. Unbiased gene diversity (He) estimates ranged from 0.060 to 0.510 (mean = 0.351), minimum allele frequency (MAF) from 0.030 to 0.500 (mean = 0.259), and all loci were in Hardy-Weinberg equilibrium after Bonferroni correction. A large number of SNPs (49) were located in the coding region, 33 representing synonymous and 16 non-synonymous changes. Most SNP-containing genes were related to immune response and gonad differentiation processes, and could be candidates for functional changes leading to phenotypic changes. These markers will be useful for population screening to look for adaptive variation in wild and domestic turbot.
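The two per-locus statistics reported in this abstract, unbiased gene diversity (He) and minor allele frequency (MAF), follow standard formulas for a biallelic SNP; the sketch below uses Nei's unbiased estimator, and the genotype counts in the test are invented for illustration.

```python
def allele_stats(n_AA, n_Aa, n_aa):
    """Unbiased gene diversity (He, Nei) and minor allele frequency (MAF)
    from genotype counts at a biallelic locus."""
    n = n_AA + n_Aa + n_aa               # genotyped individuals
    p = (2 * n_AA + n_Aa) / (2 * n)      # frequency of allele A
    q = 1 - p
    maf = min(p, q)
    # Nei's unbiased gene diversity: (2n / (2n - 1)) * (1 - sum of p_i^2)
    he = (2 * n / (2 * n - 1)) * (1 - p ** 2 - q ** 2)
    return he, maf
```

A locus with counts like (25, 50, 25) sits at the upper end of the MAF range reported above (0.500), since both alleles are equally frequent.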
Abstract:
CONTEXT: A passive knee-extension test has been shown to be a reliable method of assessing hamstring tightness, but this method does not take into account the potential effect of gravity on the tested leg. OBJECTIVE: To compare an original passive knee-extension test with 2 adapted methods including gravity's effect on the lower leg. DESIGN: Repeated measures. SETTING: Laboratory. PARTICIPANTS: 20 young track and field athletes (16.6 ± 1.6 y, 177.6 ± 9.2 cm, 75.9 ± 24.8 kg). INTERVENTION: Each subject was tested in a randomized order with 3 different methods: In the original one (M1), passive knee angle was measured with a standard force of 68.7 N (7 kg) applied proximal to the lateral malleolus. The second (M2) and third (M3) methods took into account the relative lower-leg weight (measured respectively by handheld dynamometer and anthropometrical table) to individualize the force applied to assess passive knee angle. MAIN OUTCOME MEASURES: Passive knee angles measured with video-analysis software. RESULTS: No difference in mean individualized applied force was found between M2 and M3, so the authors assessed passive knee angle only with M2. The mean knee angle was different between M1 and M2 (68.8 ± 12.4 vs 73.1 ± 10.6, P < .001). Knee angles in M1 and M2 were correlated (r = .93, P < .001). CONCLUSIONS: Differences in knee angle were found between the original passive knee-extension test and a method with gravity correction. M2 is an improved version of the original method (M1) since it minimizes the effect of gravity. Therefore, we recommend using it rather than M1.