938 results for k-Error linear complexity


Relevance:

100.00%

Publisher:

Abstract:

State-of-the-art methods for disparity estimation achieve good results for single stereo frames, but temporal coherence in stereo videos is often neglected. In this paper we present a method to compute temporally coherent disparity maps. We define an energy over whole stereo sequences and optimize their Conditional Random Field (CRF) distributions using mean-field approximation. We introduce novel terms for smoothness and consistency between the left and right views, and perform CRF optimization by fast, iterative spatio-temporal filtering with linear complexity in the total number of pixels. Our results rank among the state of the art while exhibiting significantly fewer flickering artifacts in stereo sequences.

Relevance:

100.00%

Publisher:

Abstract:

This correspondence presents an efficient method for reconstructing a band-limited signal in the discrete domain from its crossings with a sine wave. The method makes it possible to design A/D converters that only deliver the crossing timings, which are then used to interpolate the input signal at arbitrary instants. Potentially, it may allow for reductions in power consumption and complexity in these converters. The reconstruction in the discrete domain is based on a recently proposed modification of the Lagrange interpolator, which is readily implementable with linear complexity and high efficiency, since it reuses known schemes for variable fractional-delay (VFD) filters. As a spin-off, the method allows one to perform spectral analysis from sine wave crossings with the complexity of the FFT. Finally, the results in the correspondence are validated in several numerical examples.
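The fractional-delay Lagrange interpolation that the abstract builds on can be sketched in a few lines. This is a generic illustration of the classical interpolator, not the paper's modified version; the function names and the 4-tap (order-3) default are assumptions:

```python
def lagrange_fd_weights(mu, order=3):
    """Lagrange weights for evaluating a uniformly sampled signal at
    fractional position mu (in samples) within an (order+1)-tap window."""
    w = []
    for k in range(order + 1):
        wk = 1.0
        for m in range(order + 1):
            if m != k:
                wk *= (mu - m) / (k - m)
        w.append(wk)
    return w

def interp_at(x, t, order=3):
    """Interpolate the sample list x at a (possibly non-integer) time t
    by applying Lagrange weights to the nearest order+1 samples."""
    n0 = min(max(int(t) - order // 2, 0), len(x) - order - 1)
    w = lagrange_fd_weights(t - n0, order)
    return sum(wk * xk for wk, xk in zip(w, x[n0:n0 + order + 1]))
```

Each output sample costs a fixed number of taps, which is where the linear complexity in the number of reconstructed samples comes from.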

Relevance:

100.00%

Publisher:

Abstract:

Outliers are objects that show abnormal behavior with respect to their context or that have unexpected values in some of their parameters. In decision-making processes, information quality is of the utmost importance. In specific applications, an outlying data element may represent an important deviation in a production process or a damaged sensor. Therefore, the ability to detect these elements could make the difference between making a correct and an incorrect decision. This task is complicated by the large sizes of typical databases. Due to their importance in search processes over large volumes of data, researchers pay special attention to the development of efficient outlier-detection techniques. This article presents a computationally efficient algorithm for the detection of outliers in large volumes of information. The proposal is based on an extension of the mathematical framework upon which the basic theory of outlier detection, founded on Rough Set Theory, has been constructed. From this starting point, current problems are analyzed; a detection method is proposed, along with a computational algorithm that performs outlier-detection tasks with almost-linear complexity. To illustrate its viability, the results of applying the outlier-detection algorithm to the concrete example of a large database are presented.
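The Rough-Set-based method itself is not reproduced here, but the kind of near-linear scan it enables can be illustrated with the simplest possible baseline, a z-score detector that flags values far from the mean in two passes over the data (function name and threshold are assumptions):

```python
def zscore_outliers(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the
    mean. One pass for the statistics, one pass for the scan: O(n)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0       # guard against a zero-variance dataset
    return [v for v in values if abs(v - mean) / std > threshold]
```

Any practical detector for large databases needs this kind of complexity profile; the article's contribution is achieving it within the Rough Set framework rather than with simple global statistics.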

Relevance:

100.00%

Publisher:

Abstract:

Triassic turbidites of the Nanpanjiang basin of south China represent the most expansive and voluminous siliciclastic turbidite accumulation in south China. The Nanpanjiang basin occurs at a critical junction between the southern margin of the south China plate and the Indochina, Simao and Sibumasu plates to the south and southwest. The Triassic Yangtze carbonate shelf and isolated carbonate platforms in the basin have been extensively studied, but siliciclastic turbidites in the basin have received relatively little attention. Deciphering the facies, paleocurrent indicators and provenance of the Triassic turbidites is important for several reasons: it promises to help resolve the timing of plate collisions along suture zones bordering the basin to the south and southwest, it will enable evaluation of which suture zones and Precambrian massifs were source areas, and it will allow an evaluation of the impact of the siliciclastic flux on carbonate platform evolution within the basin. Turbidites in the basin include the Early Triassic Shipao Formation and the Middle-Late Triassic Baifeng, Xinyuan, Lanmu, Bianyang and Laishike formations. Each ranges upward of 700 m, and the thickest is nearly 3 km. The turbidites contain very fine sand in the northern part of the basin, whereas the central and southern parts of the basin also commonly contain fine and rarely medium sand sizes. Coarser sand sizes occur where paleocurrents are from the south, and in this area some turbidites exhibit complete Bouma sequences with graded A divisions. Successions contain numerous alternations between mud-rich and sand-rich intervals, with thickness trends corresponding to proximal/distal fan components. Spectacularly preserved sedimentary structures enable robust evaluation of turbidite systems and paleocurrent analyses. Analysis of paleocurrent measurements indicates two major directions of sediment fill.
The northern part of the basin was sourced primarily by the Jiangnan massif in the northeast, and the central and southern parts of the basin were sourced primarily from suture zones and the Yunkai massif to the south and southeast, respectively. Sandstones of the Lower Triassic Shipao Fm. have a volcaniclastic composition including embayed quartz and glass shards. Middle Triassic sandstones are moderately mature, matrix-rich lithic wackes. The average QFL ratio from all point-count samples is 54.1/18.1/27.8% and the QmFLt ratio is 37.8/18.1/44.1%. Lithic fragments are dominantly claystone and siltstone clasts and metasedimentary clasts such as quartz-mica tectonite. Volcanic lithics are rare. Most samples fall in the recycled orogen field of QmFLt plots, indicating a relatively quartz- and lithic-rich composition consistent with derivation from Precambrian massifs such as the Jiangnan and Yunkai. A few samples from the southwest part of the basin fall into the dissected arc field, indicating a somewhat more lithic- and feldspar-rich composition consistent with derivation from a suture zone. Analysis of detrital zircon populations from 17 samples collected across the basin indicates: (1) several samples contain zircons with concordant ages greater than 3000 Ma; (2) there are widespread peaks across the basin at 1800 and 2500 Ma; (3) there is a widespread 900 Ma population; (4) there is a widespread population of zircons at 440 Ma; and (5) a larger population of younger zircons of about 250 Ma in the southwestern part is replaced to the north and northwest by a somewhat older population around 260-290 Ma. The 900 Ma provenance fits derivation from the Jiangnan massif; the 2500, 1800, and 440 Ma provenance fits the Yunkai massif; and the 250 Ma population is consistent with convergence and arc development in suture zones bordering the basin on the south or southwest.
Early siliciclastic turbidite flux, proximal to source areas, impacted carbonate platform evolution by infilling the basin, reducing accommodation space, stabilizing carbonate platform margins and promoting margin progradation. Late arrival, in areas far from source areas, caused margin aggradation over a starved basin and the development of high-relief aggradational escarpments and unstable scalloped margins.

Relevance:

100.00%

Publisher:

Abstract:

The Self-shrinking p-adic cryptographic generator (SSPCG) is a fast software stream cipher. An improved cryptanalysis of the SSPCG is introduced, which makes the length of the generator's period more precise. The linear complexity and the resistance against the most recently used cryptographic attacks are investigated, and we discuss how such attacks can be avoided. The results show that the sequence generated by an SSPCG has a large period and large linear complexity and is stable against cryptographic attacks. This gives reason to consider the SSPCG suitable for critical cryptographic applications in stream cipher encryption algorithms.
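The linear complexity invoked here (and in the search query itself) is the length of the shortest LFSR that reproduces a given sequence, and it is computable with the classical Berlekamp-Massey algorithm. A minimal GF(2) sketch, independent of the paper's analysis:

```python
def berlekamp_massey(bits):
    """Return the linear complexity (shortest LFSR length) of a binary
    sequence, via the Berlekamp-Massey algorithm over GF(2)."""
    n = len(bits)
    c = [1] + [0] * n   # current connection polynomial C(x)
    b = [1] + [0] * n   # previous polynomial B(x)
    L, m = 0, -1
    for i in range(n):
        # discrepancy: bit predicted by C(x) XOR the actual bit
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            # C(x) <- C(x) + x^(i-m) * B(x)
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

A cryptographically strong keystream should have linear complexity close to half its period; a small value would let an attacker reconstruct the generator from a short output prefix.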

Relevance:

100.00%

Publisher:

Abstract:

This paper describes a fast integer sorting algorithm, herein referred to as Bit-index sort, which does not use comparisons and is intended to sort partial permutations. Experimental results exhibit linear complexity order in execution time. Bit-index sort uses a bit-array to classify input sequences of distinct integers, and exploits built-in bit functions in C compilers, supported by machine hardware, to retrieve the ordered output sequence. Results show that Bit-index sort outperforms quicksort and counting sort in execution-time comparisons. A parallel approach to Bit-index sort using two simultaneous threads is also included, which obtains further speedups of up to 1.6 over the sequential case.
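The bit-array idea can be sketched compactly in Python, using a big integer as the bit-array; this is an illustration of the principle only, since the paper's C implementation relies on hardware-supported compiler builtins (such as bit-scan instructions) rather than Python integer operations:

```python
def bit_index_sort(seq):
    """Sort distinct non-negative integers without comparisons:
    set one bit per value in a bitset (classification pass, O(n)),
    then read the set bits back lowest-first (extraction pass)."""
    bits = 0
    for v in seq:                 # classification: mark each value
        bits |= 1 << v
    out = []
    while bits:                   # extraction: one step per set bit
        low = bits & -bits        # isolate the lowest set bit
        out.append(low.bit_length() - 1)
        bits ^= low               # clear it and continue
    return out
```

Because each input element sets exactly one bit and each set bit is visited once, the work grows linearly with the input size (for values bounded by the bit-array length), matching the linear execution times reported in the paper.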

Relevance:

100.00%

Publisher:

Abstract:

In a paper by Biro et al. [7], a novel twist on guarding in art galleries is introduced. A beacon is a fixed point with an attraction pull that can move points within the polygon. Points move greedily to monotonically decrease their Euclidean distance to the beacon by moving straight towards the beacon or sliding on the edges of the polygon. The beacon attracts a point if the point eventually reaches the beacon. Unlike most variations of the art gallery problem, the beacon attraction has the intriguing property of being asymmetric, leading to separate definitions of attraction region and inverse attraction region. The attraction region of a beacon is the set of points that it attracts. For a given point in the polygon, the inverse attraction region is the set of beacon locations that can attract the point. We first study the characteristics of beacon attraction. We consider the quality of a "successful" beacon attraction and provide an upper bound of $\sqrt{2}$ on the ratio between the length of the beacon trajectory and the length of the geodesic distance in a simple polygon. In addition, we provide an example of a polygon with holes in which this ratio is unbounded. Next we consider the problem of computing the shortest beacon watchtower in a polygonal terrain and present an $O(n \log n)$ time algorithm to solve this problem. In doing this, we introduce $O(n \log n)$ time algorithms to compute the beacon kernel and the inverse beacon kernel in a monotone polygon. We also prove that $\Omega(n \log n)$ time is a lower bound for computing the beacon kernel of a monotone polygon. Finally, we study the inverse attraction region of a point in a simple polygon. We present algorithms to efficiently compute the inverse attraction region of a point for simple, monotone, and terrain polygons with respective time complexities $O(n^2)$, $O(n \log n)$ and $O(n)$. 
We show that the inverse attraction region of a point in a simple polygon has linear complexity and the problem of computing the inverse attraction region has a lower bound of $\Omega(n \log n)$ in monotone polygons and consequently in simple polygons.

Relevance:

60.00%

Publisher:

Abstract:

We present quasi-Monte Carlo analogs of Monte Carlo methods for some linear algebra problems: solving systems of linear equations, computing extreme eigenvalues, and matrix inversion. Reformulating the problems as solving integral equations with special kernels and domains permits us to analyze the quasi-Monte Carlo methods with bounds from numerical integration. Standard Monte Carlo methods for integration provide a convergence rate of O(N^(−1/2)) using N samples. Quasi-Monte Carlo methods use quasirandom sequences, with a resulting convergence rate for numerical integration as good as O((log N)^k N^(−1)). We have shown theoretically and through numerical tests that the use of quasirandom sequences improves both the magnitude of the error and the convergence rate of the considered Monte Carlo methods. We also analyze the complexity of the considered quasi-Monte Carlo algorithms and compare it to the complexity of the analogous Monte Carlo and deterministic algorithms.
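The convergence contrast is easy to demonstrate on plain numerical integration, the setting the bounds above come from. The sketch below uses the van der Corput sequence, the simplest one-dimensional quasirandom sequence; it is an illustration of the general technique, not the paper's linear-algebra estimators:

```python
def van_der_corput(n, base=2):
    """First n terms of the base-b van der Corput quasirandom sequence
    (the radical inverse of 1..n: digits mirrored about the point)."""
    seq = []
    for i in range(1, n + 1):
        q, denom, x = i, 1.0, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        seq.append(x)
    return seq

def qmc_integral(f, n):
    """Quasi-Monte Carlo estimate of the integral of f over [0, 1]:
    an equal-weight average of f at quasirandom points."""
    return sum(f(x) for x in van_der_corput(n)) / n
```

Because the quasirandom points fill [0, 1] with low discrepancy, the error for smooth integrands shrinks roughly like (log N)/N instead of the N^(−1/2) typical of pseudorandom sampling.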

Relevance:

50.00%

Publisher:

Abstract:

Partitional clustering algorithms, which partition the dataset into a pre-defined number of clusters, can be broadly classified into two types: algorithms which explicitly take the number of clusters as input and algorithms that take the expected size of a cluster as input. In this paper, we propose a variant of the k-means algorithm and prove that it is more efficient than standard k-means algorithms. An important contribution of this paper is the establishment of a relation between the number of clusters and the size of the clusters in a dataset through the analysis of our algorithm. We also demonstrate that the integration of this algorithm as a pre-processing step in classification algorithms reduces their running-time complexity.
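The abstract does not specify the proposed variant, so as a point of reference here is a minimal sketch of the standard algorithm it builds on, Lloyd's k-means (function name, initialization by random sampling, and the fixed iteration count are assumptions):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means on tuples: assign each point to its
    nearest centroid, move centroids to cluster means, repeat."""
    centroids = random.Random(seed).sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assignment step
            dists = [sum((a - b) ** 2 for a, b in zip(p, c))
                     for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        centroids = [                         # update step
            tuple(sum(col) / len(cl) for col in zip(*cl)) if cl
            else centroids[j]                 # keep empty clusters fixed
            for j, cl in enumerate(clusters)]
    return centroids, clusters
```

Each iteration costs O(nk) distance evaluations, which is why relating k to the expected cluster size, as the paper does, directly affects running time.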

Relevance:

50.00%

Publisher:

Abstract:

An n-length block code C is said to be r-query locally correctable if, for any codeword x ∈ C, one can probabilistically recover any one of the n coordinates of x by querying at most r coordinates of a possibly corrupted version of x. It is known that linear codes whose duals contain 2-designs are locally correctable. In this article, we consider linear codes whose duals contain t-designs for larger t. It is shown here that for such codes, for a given number of queries r, under linear decoding, one can in general handle a larger number of corrupted bits. We exhibit, to our knowledge for the first time, a finite-length code, whose dual contains 4-designs, which can tolerate a fraction of up to 0.567/r corrupted symbols, as against a maximum of 0.5/r in prior constructions. We also present an upper bound showing that 0.567 is the best possible for this code length and query complexity over this symbol alphabet, thereby establishing the optimality of this code in this respect. A second result in the article is a finite-length bound relating the number of queries r to the fraction of errors that can be tolerated, for a locally correctable code that employs a randomized algorithm in which each instance of the algorithm involves t-error correction.
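Local correctability is easiest to see on the textbook 2-query example, the Hadamard code (not the dual-design codes of the article): since codeword entries are the parities <m, x>, the identity <m, y> XOR <m, x XOR y> = <m, x> lets two random queries recover any coordinate. A minimal sketch:

```python
import random

def hadamard_encode(m):
    """Hadamard codeword of a k-bit message m: one parity bit <m, x>
    for every x in F_2^k, giving code length n = 2^k."""
    k = len(m)
    return [sum(m[i] & ((x >> i) & 1) for i in range(k)) % 2
            for x in range(1 << k)]

def local_correct(word, x, k, rng=random):
    """2-query local correction: to read coordinate x, pick a random y
    and return word[y] XOR word[x ^ y]. Each query lands on a uniformly
    random position, so a delta-fraction of corrupted symbols is hit
    with probability at most 2*delta."""
    y = rng.randrange(1 << k)
    return word[y] ^ word[x ^ y]
```

Taking a majority vote over repeated (or all) choices of y drives the failure probability down, which is the mechanism the fraction-of-errors bounds in the article quantify for larger t and r.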

Relevance:

50.00%

Publisher:

Abstract:

Formulation of a 16-term error model, based on the four-port ABCD-matrix and voltage and current variables, is outlined. Matrices A, B, C, and D are each 2 x 2 submatrices of the complete 4 x 4 error matrix. The corresponding equations are linear in the error parameters, which simplifies the calibration process. The parallelism with network analyzer calibration procedures and the requirement of five two-port calibration measurements are stressed. Principles for a robust choice of equations are presented. While the formulation is suitable for any network analyzer measurement, it is expected to be a useful alternative to the nonlinear y-parameter approach used in intrinsic semiconductor electrical and noise parameter measurements and in the de-embedding of parasitics.