949 results for k-Error linear complexity
Abstract:
This paper presents a software-based study of a hardware-based, non-sorting median calculation method for sets of integers. The method divides the binary representation of each integer in the set into bit slices in order to find the element located at the middle position. The method exhibits linear complexity, and our analysis shows that the best execution time is obtained when 4-bit slices are used for 8-bit and 16-bit integers, for almost any data set size. The results suggest that a software implementation of the bit-slice median method outperforms sorting-based methods, with the improvement growing as the data set gets larger. For data set sizes N > 5, our simulations show an improvement of at least 40%.
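As a rough illustration of the bit-slice idea described in this abstract (not the authors' hardware-oriented implementation), the following C sketch selects the median of unsigned 16-bit integers by histogramming 4-bit slices from the most significant slice downward, keeping the work linear in the data set size for each slice:

```c
#include <stdint.h>
#include <stddef.h>

/* Median of n unsigned 16-bit values (lower median for even n), found by
 * scanning 4-bit slices from the most significant slice downward.
 * Each pass is O(n); four passes cover all 16 bits, with no sorting. */
uint16_t bitslice_median(const uint16_t *a, size_t n)
{
    size_t rank = (n - 1) / 2;      /* index of the median in sorted order */
    uint16_t prefix = 0;            /* high bits fixed so far */
    uint16_t mask = 0;              /* which bits of prefix are already fixed */

    for (int shift = 12; shift >= 0; shift -= 4) {
        size_t count[16] = {0};

        /* Histogram the current 4-bit slice of all still-matching elements. */
        for (size_t i = 0; i < n; i++)
            if ((a[i] & mask) == prefix)
                count[(a[i] >> shift) & 0xF]++;

        /* Find the bucket that contains the element of the wanted rank. */
        int bucket = 0;
        while (rank >= count[bucket]) {
            rank -= count[bucket];
            bucket++;
        }

        prefix |= (uint16_t)(bucket << shift);
        mask   |= (uint16_t)(0xF << shift);
    }
    return prefix;
}
```

Each pass narrows the candidate set to a single 16-way bucket, so after the last pass the fixed prefix is the median value itself.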
Abstract:
This paper describes a fast integer sorting algorithm, herein referred to as Bit-index sort, which is a non-comparison sorting algorithm for partial permutations with linear complexity in execution time. Bit-index sort uses a bit array to classify input sequences of distinct integers, and exploits built-in bit functions in C compilers, supported by machine hardware, to retrieve the ordered output sequence. Results show that Bit-index sort outperforms the quicksort and counting sort algorithms in execution time. A parallel approach to Bit-index sort using two simultaneous threads is included, which obtains speedups of up to 1.6.
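For illustration only, here is a minimal sequential C sketch of the general approach the abstract describes, assuming distinct non-negative integers below a known bound and using the GCC/Clang builtin __builtin_ctzll as the "built-in bit function"; it is not the authors' implementation:

```c
#include <stdint.h>
#include <stdlib.h>

/* Sorts n distinct integers in [0, max_val) by setting one bit per value in a
 * bit array, then scanning the array word by word with a count-trailing-zeros
 * builtin to emit the values in ascending order. */
void bit_index_sort(const uint32_t *in, uint32_t *out, size_t n, uint32_t max_val)
{
    size_t words = (max_val + 63) / 64;
    uint64_t *bits = calloc(words, sizeof(uint64_t));

    for (size_t i = 0; i < n; i++)                 /* classification pass */
        bits[in[i] >> 6] |= 1ULL << (in[i] & 63);

    size_t k = 0;
    for (size_t w = 0; w < words; w++) {           /* retrieval pass */
        uint64_t word = bits[w];
        while (word) {
            unsigned bit = (unsigned)__builtin_ctzll(word); /* lowest set bit */
            out[k++] = (uint32_t)(w * 64 + bit);
            word &= word - 1;                      /* clear that bit */
        }
    }
    free(bits);
}
```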
Abstract:
In this paper we propose a novel method for shape analysis called HTS (Hough Transform Statistics), which uses statistics from the Hough Transform space in order to characterize the shape of objects in digital images. Experimental results showed that the HTS descriptor is robust and presents better accuracy than some traditional shape description methods. Furthermore, the HTS algorithm has linear complexity, which is an important requirement for content-based image retrieval from large databases. © 2013 IEEE.
Abstract:
With the widespread proliferation of computers, many human activities entail the use of automatic image analysis. The basic features used for image analysis include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, since they presented precision-recall results much better than several other well-known shape description methods. When compared to the Beam Angle Statistics (BAS) method, the shape description method that inspired their development, both the HTS and the HTSn methods presented inferior results regarding the precision-recall criterion, but superior results in the processing time and multiscale separability criteria. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks when very large databases are used, which are very common nowadays. (C) 2014 Elsevier Inc. All rights reserved.
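For background, the C sketch below shows only the standard (rho, theta) line Hough accumulation over a binary contour image, i.e. the kind of Hough space such descriptors are computed from; the HTS and HTSn statistics themselves are defined in the papers and are not reproduced here:

```c
#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Standard (rho, theta) line Hough accumulation over a w-by-h binary image.
 * Only nonzero (contour) pixels vote. Returns an accumulator of size
 * n_theta * n_rho, indexed as acc[t * n_rho + r]. */
unsigned *hough_accumulate(const unsigned char *img, int w, int h,
                           int n_theta, int n_rho)
{
    double rho_max = sqrt((double)w * w + (double)h * h);
    unsigned *acc = calloc((size_t)n_theta * n_rho, sizeof(unsigned));

    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            if (!img[y * w + x]) continue;
            for (int t = 0; t < n_theta; t++) {
                double theta = M_PI * t / n_theta;
                double rho = x * cos(theta) + y * sin(theta);
                /* map rho in [-rho_max, rho_max] to a bin in [0, n_rho-1] */
                int r = (int)((rho + rho_max) / (2.0 * rho_max) * (n_rho - 1) + 0.5);
                acc[t * n_rho + r]++;
            }
        }
    return acc;
}
```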
Abstract:
INTRODUCTION: The accurate evaluation of the error of measurement (EM) is extremely important both in growth studies and in clinical research, since the changes involved are usually quantitatively small. In any study it is important to evaluate the EM in order to validate the results and, consequently, the conclusions. Because of its extreme simplicity, the Dahlberg formula is widely used worldwide, mainly in cephalometric studies. OBJECTIVES: (I) To elucidate the formula proposed by Dahlberg in 1940, evaluating it by comparison with linear regression analysis; (II) to propose a simple methodology for analyzing the results, which provides statistical elements to assist researchers in obtaining a consistent evaluation of the EM. METHODS: We applied linear regression analysis, hypothesis tests on its parameters, and a formula involving the standard deviation of the error of measurement and the measured values. RESULTS AND CONCLUSION: We introduced an error coefficient, which is a proportion related to the scale of the observed values. This provides new parameters to facilitate the evaluation of the impact of random errors on the final results of the research.
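For reference, Dahlberg's 1940 formula in its usual form, assuming each of the $n$ cases is measured twice and $d_i$ denotes the difference between the two measurements of case $i$; the error coefficient proposed in the abstract is defined in the paper itself and is not reproduced here:

$$ D = \sqrt{\frac{\sum_{i=1}^{n} d_i^{2}}{2n}} $$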
Abstract:
State-of-the-art methods for disparity estimation achieve good results for single stereo frames, but temporal coherence in stereo videos is often neglected. In this paper we present a method to compute temporally coherent disparity maps. We define an energy over whole stereo sequences and optimize their Conditional Random Field (CRF) distributions using mean-field approximation. We introduce novel terms for smoothness and consistency between the left and right views, and perform CRF optimization by fast, iterative spatio-temporal filtering with linear complexity in the total number of pixels. Our results rank among the state of the art while having significantly fewer flickering artifacts in stereo sequences.
Abstract:
This correspondence presents an efficient method for reconstructing a band-limited signal in the discrete domain from its crossings with a sine wave. The method makes it possible to design A/D converters that only deliver the crossing timings, which are then used to interpolate the input signal at arbitrary instants. Potentially, it may allow for reductions in power consumption and complexity in these converters. The reconstruction in the discrete domain is based on a recently proposed modification of the Lagrange interpolator, which can be implemented efficiently and with linear complexity, given that it re-uses known schemes for variable fractional-delay (VFD) filters. As a spin-off, the method allows one to perform spectral analysis from sine wave crossings with the complexity of the FFT. Finally, the results in the correspondence are validated in several numerical examples.
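As background on the known VFD schemes the method re-uses, the following C sketch implements classical Lagrange fractional-delay interpolation of a uniformly sampled signal; it is the standard building block, not the modified interpolator proposed in the correspondence:

```c
/* Classical Lagrange fractional-delay interpolation of order `order`:
 * fits the interpolating polynomial through samples x[n0], ..., x[n0+order]
 * (treated as lying at offsets 0..order) and evaluates it at offset d,
 * i.e. estimates the signal value at fractional position n0 + d.
 * The caller must ensure x[n0..n0+order] are valid samples. */
double lagrange_fd(const double *x, int n0, int order, double d)
{
    double y = 0.0;
    for (int k = 0; k <= order; k++) {
        double hk = 1.0;                    /* Lagrange basis coefficient */
        for (int m = 0; m <= order; m++)
            if (m != k)
                hk *= (d - m) / (double)(k - m);
        y += hk * x[n0 + k];
    }
    return y;
}
```

In practice d is chosen near order/2 (the centre of the local window), where the Lagrange interpolator is most accurate.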
Abstract:
Outliers are objects that show abnormal behavior with respect to their context or that have unexpected values in some of their parameters. In decision-making processes, information quality is of the utmost importance. In specific applications, an outlying data element may represent an important deviation in a production process or a damaged sensor. Therefore, the ability to detect these elements could make the difference between making a correct and an incorrect decision. This task is complicated by the large sizes of typical databases. Due to their importance in search processes over large volumes of data, researchers pay special attention to the development of efficient outlier detection techniques. This article presents a computationally efficient algorithm for the detection of outliers in large volumes of information. The proposal is based on an extension of the mathematical framework upon which the basic theory of outlier detection, founded on Rough Set Theory, has been constructed. From this starting point, current problems are analyzed, a detection method is proposed, and a computational algorithm is given that allows outlier detection tasks to be performed with almost-linear complexity. To illustrate its viability, the results of applying the outlier-detection algorithm to the concrete example of a large database are presented.
Abstract:
Triassic turbidites of the Nanpanjiang basin of south China represent the most expansive and voluminous siliciclastic turbidite accumulation in south China. The Nanpanjiang basin occurs at a critical junction between the southern margin of the south China plate and the Indochina, Simao and Sibumasu plates to the south and southwest. The Triassic Yangtze carbonate shelf and the isolated carbonate platforms in the basin have been extensively studied, but siliciclastic turbidites in the basin have received relatively little attention. Deciphering the facies, paleocurrent indicators and provenance of the Triassic turbidites is important for several reasons: it promises to help resolve the timing of plate collisions along suture zones bordering the basin to the south and southwest, it will enable evaluation of which suture zones and Precambrian massifs were source areas, and it will allow an evaluation of the impact of the siliciclastic flux on carbonate platform evolution within the basin. Turbidites in the basin include the Early Triassic Shipao Formation and the Middle-Late Triassic Baifeng, Xinyuan, Lanmu, Bianyang and Laishike formations. Each ranges upward of 700 m in thickness, and the thickest is nearly 3 km. The turbidites contain very fine sand in the northern part of the basin, whereas the central and southern parts of the basin also commonly contain fine, and rarely medium, sand sizes. Coarser sand sizes occur where paleocurrents are from the south, and in this area some turbidites exhibit complete Bouma sequences with graded A divisions. Successions contain numerous alternations between mud-rich and sand-rich intervals, with thickness trends corresponding to proximal/distal fan components. Spectacularly preserved sedimentary structures enable robust evaluation of turbidite systems and paleocurrent analyses. Analysis of paleocurrent measurements indicates two major directions of sediment fill. The northern part of the basin was sourced primarily by the Jiangnan massif in the northeast, and the central and southern parts of the basin were sourced primarily from suture zones and the Yunkai massif to the south and southeast, respectively. Sandstones of the Lower Triassic Shipao Fm. have a volcaniclastic composition including embayed quartz and glass shards. Middle Triassic sandstones are moderately mature, matrix-rich lithic wackes. The average QFL ratio from all point-count samples is 54.1/18.1/27.8% and the QmFLt ratio is 37.8/18.1/44.1%. Lithic fragments are dominantly claystone and siltstone clasts and metasedimentary clasts such as quartz-mica tectonite. Volcanic lithics are rare. Most samples fall in the recycled orogen field of QmFLt plots, indicating a relatively quartz- and lithic-rich composition consistent with derivation from Precambrian massifs such as the Jiangnan and Yunkai. A few samples from the southwest part of the basin fall into the dissected arc field, indicating a somewhat more lithic- and feldspar-rich composition consistent with derivation from a suture zone. Analysis of detrital zircon populations from 17 samples collected across the basin indicates: (1) several samples contain zircons with concordant ages greater than 3000 Ma; (2) there are widespread peaks across the basin at 1800 and 2500 Ma; (3) there is a widespread 900 Ma population; (4) there is a widespread population of zircons at 440 Ma; and (5) there is a larger population of younger zircons of about 250 Ma in the southwestern part, which is replaced to the north and northwest by a somewhat older population of around 260-290 Ma. The 900 Ma provenance fits derivation from the Jiangnan Massif; the 2500, 1800, and 440 Ma provenance fits the Yunkai massif; and the 250 Ma population is consistent with convergence and arc development in suture zones bordering the basin on the south or southwest. Early siliciclastic turbidite flux, proximal to source areas, impacted carbonate platform evolution by infilling the basin, reducing accommodation space, stabilizing carbonate platform margins and promoting margin progradation. Late arrival of the flux, in areas far from source areas, caused margin aggradation over a starved basin and the development of high-relief aggradational escarpments and unstable scalloped margins.
Abstract:
The Self-shrinking p-adic cryptographic generator (SSPCG) is a fast software stream cipher. An improved cryptanalysis of the SSPCG is introduced, which makes the length of the generator's period more precise. The linear complexity and the resistance against the most recently used cryptographic attacks are investigated, and we discuss how such attacks can be avoided. The results show that the sequence generated by an SSPCG has a large period and large linear complexity and is stable against the cryptographic attacks considered. This gives reason to consider the SSPCG suitable for critical cryptographic applications in stream cipher encryption algorithms.
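Since the abstract centers on the period and linear complexity of the generated keystream, a standard Berlekamp-Massey computation of linear complexity over GF(2) is sketched below in C for context; it is generic textbook material, not part of the cited cryptanalysis:

```c
#include <stdlib.h>
#include <string.h>

/* Berlekamp-Massey over GF(2): returns the linear complexity of the binary
 * sequence s[0..n-1], i.e. the length of the shortest LFSR generating it. */
int linear_complexity(const unsigned char *s, int n)
{
    unsigned char *C = calloc(n + 1, 1);   /* current connection polynomial */
    unsigned char *B = calloc(n + 1, 1);   /* previous connection polynomial */
    unsigned char *T = malloc(n + 1);
    int L = 0, m = -1;

    C[0] = B[0] = 1;
    for (int i = 0; i < n; i++) {
        int d = s[i];                      /* discrepancy at position i */
        for (int j = 1; j <= L; j++)
            d ^= C[j] & s[i - j];
        if (d) {
            memcpy(T, C, n + 1);
            for (int j = 0; j + (i - m) <= n; j++)
                C[j + (i - m)] ^= B[j];    /* C(x) += x^(i-m) * B(x) */
            if (2 * L <= i) {
                L = i + 1 - L;
                m = i;
                memcpy(B, T, n + 1);
            }
        }
    }
    free(C); free(B); free(T);
    return L;
}
```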
Abstract:
This paper describes a fast integer sorting algorithm, herein referred to as Bit-index sort, which does not use comparisons and is intended to sort partial permutations. Experimental results exhibit linear complexity in execution time. Bit-index sort uses a bit array to classify input sequences of distinct integers, and exploits built-in bit functions in C compilers, supported by machine hardware, to retrieve the ordered output sequence. Results show that Bit-index sort outperforms the quicksort and counting sort algorithms in execution time. A parallel approach to Bit-index sort using two simultaneous threads is also included, which obtains speedups of up to 1.6 over the sequential version.
Abstract:
In a paper by Biro et al. [7], a novel twist on guarding in art galleries is introduced. A beacon is a fixed point with an attraction pull that can move points within the polygon. Points move greedily to monotonically decrease their Euclidean distance to the beacon by moving straight towards the beacon or sliding on the edges of the polygon. The beacon attracts a point if the point eventually reaches the beacon. Unlike most variations of the art gallery problem, beacon attraction has the intriguing property of being asymmetric, leading to separate definitions of the attraction region and the inverse attraction region. The attraction region of a beacon is the set of points that it attracts. For a given point in the polygon, the inverse attraction region is the set of beacon locations that can attract the point. We first study the characteristics of beacon attraction. We consider the quality of a "successful" beacon attraction and provide an upper bound of $\sqrt{2}$ on the ratio between the length of the beacon trajectory and the geodesic distance in a simple polygon. In addition, we provide an example of a polygon with holes in which this ratio is unbounded. Next we consider the problem of computing the shortest beacon watchtower in a polygonal terrain and present an $O(n \log n)$ time algorithm to solve this problem. In doing so, we introduce $O(n \log n)$ time algorithms to compute the beacon kernel and the inverse beacon kernel in a monotone polygon. We also prove that $\Omega(n \log n)$ time is a lower bound for computing the beacon kernel of a monotone polygon. Finally, we study the inverse attraction region of a point in a simple polygon. We present algorithms to efficiently compute the inverse attraction region of a point for simple, monotone, and terrain polygons, with respective time complexities $O(n^2)$, $O(n \log n)$ and $O(n)$. We show that the inverse attraction region of a point in a simple polygon has linear complexity and that the problem of computing the inverse attraction region has a lower bound of $\Omega(n \log n)$ in monotone polygons and consequently in simple polygons.
Abstract:
We present quasi-Monte Carlo analogs of Monte Carlo methods for some linear algebra problems: solving systems of linear equations, computing extreme eigenvalues, and matrix inversion. Reformulating the problems as solving integral equations with special kernels and domains permits us to analyze the quasi-Monte Carlo methods with bounds from numerical integration. Standard Monte Carlo methods for integration provide a convergence rate of O(N^(−1/2)) using N samples. Quasi-Monte Carlo methods use quasirandom sequences, with the resulting convergence rate for numerical integration as good as O((log N)^k N^(−1)). We have shown theoretically and through numerical tests that the use of quasirandom sequences improves both the magnitude of the error and the convergence rate of the considered Monte Carlo methods. We also analyze the complexity of the considered quasi-Monte Carlo algorithms and compare it to the complexity of the analogous Monte Carlo and deterministic algorithms.
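As a small, generic illustration of the quasirandom sequences referred to above (not the authors' algorithms), the base-2 van der Corput sequence sketched below is the one-dimensional building block of Halton sequences; its low discrepancy is what yields integration errors of order O((log N)^k N^(−1)) rather than O(N^(−1/2)):

```c
#include <stdint.h>

/* Base-2 van der Corput sequence: the i-th point is obtained by reversing the
 * binary digits of i about the radix point, giving a low-discrepancy sequence
 * in [0, 1). Halton sequences use one such radical inverse per dimension. */
double van_der_corput(uint32_t i)
{
    double x = 0.0, f = 0.5;
    while (i) {
        x += f * (double)(i & 1u);   /* take the lowest remaining bit */
        i >>= 1;
        f *= 0.5;
    }
    return x;
}
```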
Abstract:
In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points, and we derive an upper bound on the true error bars in terms of the contributions from individual data points, which are themselves easily evaluated.
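For context, a standard textbook form of such error bars for Bayesian linear-in-parameters regression with basis functions $\phi$, noise precision $\beta$ and prior precision $\alpha$ (this is background, not the upper bound derived in the paper):

$$ \sigma^{2}(x) = \frac{1}{\beta} + \phi(x)^{\top} S_N \phi(x), \qquad S_N^{-1} = \alpha I + \beta\, \Phi^{\top} \Phi $$

where $\Phi$ is the design matrix of the training inputs; the first term is the intrinsic noise and the second reflects parameter uncertainty, which depends on the location of $x$ relative to the data.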
Abstract:
Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate CI coefficients $B_K$ of disconnected configurations (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each $K$ is a linear combination of configuration state functions (CSFs) over all degenerate elements of $K$. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, $\Delta E_K = (E - H_{KK})B_K^2/(1 - B_K^2)$, with $B_K$ determined from coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, $\Delta E_{dis}$, is approximated by the sum of the $\Delta E_K$ of all discarded $K$s. The remaining (connected) configurations are selected by thresholds based on natural orbital concepts. Given a model CI space $M$, a usual upper bound $E_S$ is computed by CI in a selected space $S$, and $E_M = E_S + \Delta E_{dis} + \delta E$, where $\delta E$ is a residual error which can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near-spectroscopic accuracy ($0.5\ \mathrm{cm}^{-1}$) is achieved in a model space $M$ of $1.4 \times 10^{9}$ CSFs ($1.1 \times 10^{12}$ determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of $6.5 \times 10^{12}$ CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since $\Delta E_{dis}$ can readily be calculated for very large basis sets without the need of a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for precise and efficient evaluation of $E_S$ is taken up in a companion paper.