970 results for Improved sequential algebraic algorithm


Relevance:

100.00%

Publisher:

Abstract:

This paper presents an improved hierarchical clustering algorithm for the land-cover mapping problem using a quasi-random distribution. Initially, Niche Particle Swarm Optimization (NPSO) with a pseudo- or quasi-random distribution is used to split the data into a number of cluster centers satisfying the Bayesian Information Criterion (BIC). The main objective is to search for and locate the best possible number of clusters and their centers. NPSO depends strongly on the initial distribution of particles in the search space, which has not been exploited to its full potential. In this study, we compare the more uniformly distributed quasi-random initialization with the pseudo-random initialization of NPSO for splitting the data set; the Faure method is used to generate the quasi-random distribution. The performance of previously proposed methods, namely K-means, Mean Shift Clustering (MSC), and NPSO with pseudo-random initialization, is compared with the proposed approach, NPSO with a quasi-random (Faure) distribution. These algorithms are applied to a synthetic data set and a multi-spectral satellite image (Landsat 7 Thematic Mapper). From the results obtained we conclude that using a quasi-random sequence with NPSO for hierarchical clustering yields a more accurate data classification.
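The abstract above relies on a low-discrepancy (quasi-random) initialization; the paper uses Faure sequences. As a minimal illustration of the same radical-inverse construction, here is a sketch of a 2-D Halton-style sequence (the function names are my own, not from the paper):

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse of integer n in the given base."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton_2d(count):
    """First `count` points of a 2-D low-discrepancy sequence (bases 2 and 3)."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3))
            for i in range(1, count + 1)]

points = halton_2d(8)   # candidate initial particle positions in the unit square
```

Scaling such points to the data's bounding box gives initial particle positions that cover the search space more evenly than pseudo-random draws.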


This paper addresses the design of a reliable model-based Harmonic-Aware Matching Pursuit (HAMP) for reconstructing sparse harmonic signals from their compressed samples. Performance guarantees for HAMP are provided; they show that the introduced HAMP requires fewer measurements and has lower computational cost than other greedy techniques. The complexity of formulating a structured sparse approximation algorithm is highlighted, and the inapplicability of the conventional thresholding operator to the harmonic signal model is demonstrated. The harmonic sequential deletion algorithm is subsequently proposed, and other sparse approximation methods are evaluated. The superior performance of HAMP is demonstrated in the presented experiments. © 2013 IEEE.
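HAMP itself is model-based and harmonic-aware; as background, a plain Orthogonal Matching Pursuit sketch shows the greedy support-selection and residual-update loop that such pursuit algorithms share (this is generic OMP, not the paper's HAMP):

```python
import numpy as np

def omp(A, y, k):
    """Plain Orthogonal Matching Pursuit: recover a k-sparse x with y ≈ A @ x."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the chosen support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))
x_true = np.zeros(64)
x_true[[5, 20]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```

The harmonic signal model constrains which supports are admissible; HAMP replaces the unconstrained column selection above with a structure-aware one.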


This paper analyzes the BM algorithm, currently the most popular in practice, and its improved variant BMH, and on that basis proposes BMH2, a further improvement of BMH. Taking the characteristics of the pattern string itself into account, a new shift array is added alongside the original shift-distance array, so that the pattern's features can be fully exploited to make larger shifts and the algorithm achieves higher efficiency. Experiments show that the improved algorithm increases the shift distance of the "bad character" rule and effectively improves the matching speed.
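The baseline BMH bad-character rule that BMH2 extends can be sketched as follows (a textbook Boyer-Moore-Horspool, not the paper's BMH2):

```python
def bmh_search(text, pattern):
    """Boyer-Moore-Horspool: index of the first occurrence of pattern, or -1."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # bad-character table: shift distance keyed by the character that ends
    # up aligned with the pattern's last position
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    pos = 0
    while pos + m <= n:
        if text[pos:pos + m] == pattern:
            return pos
        # characters absent from the pattern allow a full-length shift of m
        pos += shift.get(text[pos + m - 1], m)
    return -1
```

BMH2 adds a second shift array derived from the pattern itself, so that in many mismatch cases a shift larger than the single-table value above is taken.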


Schedulability testing of fixed-priority tasks is one of the core problems in real-time scheduling theory. Existing tests fall into two classes: polynomial-time tests and exact tests. Polynomial-time tests usually rely on sufficient conditions for schedulability, and many least upper bounds on CPU utilization for the RM (rate monotonic) algorithm under idealized assumptions have been proposed for this purpose. Exact tests use necessary and sufficient conditions of RM scheduling, so any task set can be judged and the verdict is exact, but their time complexity makes online analysis difficult. This paper proposes an improved RM schedulability test algorithm (ISTA). It first introduces the concept of the task scheduling space together with a binary-tree representation of it, and then develops the related pruning theory. On this basis, the correlation between the schedulability of individual tasks and its influence on the schedulability of the whole task set is studied, and the relevant theorems are stated and proved. Finally, based on these theorems, an improved pseudo-polynomial-time schedulability test is presented and compared with existing tests. Simulation results show that the average performance of the algorithm, as a function of the number of tasks in the task set, is significantly improved.
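For context, the exact (necessary-and-sufficient) test that pseudo-polynomial methods such as ISTA accelerate is classical response-time analysis; a minimal sketch, not the paper's ISTA:

```python
import math

def response_time(tasks):
    """Exact response-time analysis for fixed-priority tasks.

    tasks: list of (C, T) pairs (worst-case execution time, period = deadline),
    sorted by decreasing priority (rate monotonic: shorter period first).
    Returns the per-task response times, or None on a deadline miss.
    """
    times = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            # interference from all higher-priority tasks released in [0, r)
            r_next = c_i + sum(math.ceil(r / t_j) * c_j
                               for c_j, t_j in tasks[:i])
            if r_next == r:
                break
            if r_next > t_i:        # deadline miss: task set unschedulable
                return None
            r = r_next
        times.append(r)
    return times
```

The fixed-point iteration is pseudo-polynomial in the worst case, which is why pruning the scheduling space, as ISTA does, pays off for online analysis.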


The observer-placement problem is an important class of problems in terrain visibility analysis; its solution plays a major role in spatial decision support, communications, tourism, wildlife protection, and other fields. Building on a survey and analysis of existing work on observer placement in terrain visibility analysis, this thesis studies the problem in depth.

First, to overcome the limitation of existing solutions, which consider only the intelligent algorithm or only the terrain-data representation, a new problem-specific framework combining the two is proposed. The framework coordinates the strengths of the intelligent algorithm with the characteristics of the data representation so that both can be fully exploited, improving the accuracy and efficiency with which the observer-placement problem is solved.

Second, after analyzing the characteristics of the observer-placement problem and drawing on membership-cloud theory, the classical simulated annealing algorithm is given problem-specific improvements in three respects: cooling-schedule design, temperature generation, and state generation. This yields an Improved Simulated Annealing algorithm (ISA) suited to observer placement. On one hand, ISA preserves the stability tendency of classical simulated annealing: as the annealing temperature falls, worsened states become ever harder to accept, which is the most basic feature of simulated annealing. On the other hand, the continuous random variation of its annealing temperature and an implicit "reheating" process help the algorithm reject worsened solutions and accelerate convergence, better meeting the convergence-speed requirements of the observer-placement problem.

Third, after analyzing how the precision and error of terrain data affect the accuracy and efficiency of observer placement, a Discrete Cosine Transform Interpolation method (DCTI) for terrain data is proposed. The new method moves traditional spatial-domain terrain interpolation into the transform domain and fully exploits the entropy-preserving and energy-compaction properties of the DCT, simplifying interpolation in the transform domain and improving its efficiency and precision. Compared with other typical terrain interpolation methods, DCTI has the smallest impact on the accuracy and efficiency of visibility-information extraction, providing the data basis for balancing time efficiency against accuracy when solving the observer-placement problem.

Finally, from the perspective of combining intelligent algorithms with terrain data, a Multi-Resolution Processing method (MRP) based on ISA and DCTI is proposed. The new method couples the successive-annealing character of simulated annealing with the multi-resolution representation of terrain data, exploiting the combined advantages of algorithm and data. Compared with existing solutions based purely on simulated annealing, the MRP-based approach reduces the average solution time of the observer-placement problem by 85%-95% while keeping the solution accuracy unchanged, providing an important route to solving practical engineering problems.
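As background for the ISA variant described above, classical simulated annealing with a geometric cooling schedule can be sketched as follows (the thesis's ISA modifies the cooling, temperature-generation, and state-generation steps; this is the textbook baseline):

```python
import math
import random

def simulated_annealing(cost, neighbor, state,
                        t0=1.0, cooling=0.95, steps=2000, seed=1):
    """Classical simulated annealing with a geometric cooling schedule."""
    rng = random.Random(seed)
    best, best_cost = state, cost(state)
    current, current_cost, t = state, best_cost, t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        cand_cost = cost(cand)
        delta = cand_cost - current_cost
        # always accept improvements; accept worse states with prob. exp(-delta/t)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current, current_cost = cand, cand_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t = max(t * cooling, 1e-9)   # geometric cooling; ISA varies this step
    return best, best_cost

# toy objective: minimize (x - 3)^2 over the reals
best, best_cost = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    state=0.0,
)
```

In observer placement, the state would be a set of candidate viewpoints and the cost a (negated) visibility coverage measure evaluated on the terrain model.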


A model of the flow uniformity of polymer melt in the die channel of a profile extrusion head is established, the relationship between die structural parameters and flow uniformity is clarified, and the objective function and design variables for the optimal design of plastic extrusion dies are determined. An improved simulated annealing algorithm is then applied to the optimal design of streamlined plastic-profile extrusion heads. An example shows that the method is applicable to extrusion-die optimization; while making use of design experience, it optimizes the die structural parameters and raises the level of intelligence in die design.


The Qinghai-Tibet Plateau lies at the continent-continent collision between the Indian and Eurasian plates. Because of their interaction, the shallow and deep structures are very complicated. The force system forming the tectonic patterns and driving tectonic movements arises jointly from the deep lithosphere and the asthenosphere. Studying the 3-D velocity structures, the layered structure, and the material properties and states of the lithosphere and asthenosphere is important for understanding their formation and evolution, dynamic processes, coupling between layers, and exchange of material and energy. Based on Rayleigh-wave dispersion theory, we study the 3-D velocity structures and the interface depths and thicknesses of the crust, lithosphere, and asthenosphere (the lithosphere-asthenosphere system) in the Qinghai-Tibet Plateau and its adjacent areas. The work comprises the following tasks. (1) Digital seismic records of 221 events with magnitudes larger than 5.0 over the Qinghai-Tibet Plateau and its adjacent areas were collected from 31 digital seismic stations of the GSN, CDSN, and NCDSN networks and some Indian stations. After instrument-response calibration and filtering, group velocities of fundamental-mode Rayleigh waves were measured by frequency-time analysis (FTAN) to obtain the observed dispersions; dispersions along similar ray paths were cluster-averaged. Finally, 819 dispersion curves (8-150 s) were prepared for inversion. (2) From these dispersion curves, pure dispersion data in 2°×2° cells of the area (18°N-42°N, 70°E-106°E) were calculated using the function-expansion method proposed by Yanovskaya. The average initial model was constructed from the global AK135 model together with geodetic, geological, geophysical, receiver-function, and wide-angle reflection data.
Initial S-wave velocity structures of the crust and upper mantle in the study area were then obtained by linear inversion (SVD). (3) Taking the linear-inversion results as the initial model, we simultaneously inverted S-wave velocities and layer thicknesses by non-linear inversion (an improved simulated annealing algorithm), using variable-scale models as the temperature drops. Compared with the linear results, the layers resolved by the non-linear inversion are better recognized from velocity values and offsets. (4) The Moho discontinuity and the top of the asthenosphere are recognized from the velocity values and offsets of the layers, yielding the thicknesses of the crust, lithosphere, and asthenosphere. These thicknesses help to characterize the structural differences between the Qinghai-Tibet Plateau and its adjacent areas and among the geologic units of the plateau. The inversion results provide deep geophysical evidence for studying deep dynamical mechanisms and for exploring metal, oil, and gas resources. The following conclusions are drawn from the distributions of S-wave velocities and of crustal, lithospheric, and asthenospheric thicknesses, combined with previous research. (1) The crust of the Qinghai-Tibet Plateau is very thick, varying from 60 km to 80 km. The lithosphere of the plateau is thinner (130-160 km) than that of its adjacent areas, while its asthenosphere is relatively thick, varying from 150 km to 230 km, with the thickest area in western Qiangtang. India, south of the Main Boundary Thrust, has a thinner crust (32-38 km), a thicker lithosphere of about 190 km, and a rather thin asthenosphere of only 60 km. The Sichuan and Tarim basins have crustal thicknesses of less than 50 km; their lithospheres are thicker than that of the Qinghai-Tibet Plateau, and their asthenospheres are thinner.
(2) The S-wave velocity variations in the lithosphere-asthenosphere system show a banded, east-west distribution. These variations correlate with the geological structures outlined by sutures and major faults, including the Main Boundary Thrust (MBT), the Yarlung-Zangbo River suture (YZS), the Bangong Lake-Nujiang suture (BNS), the Jinshajiang suture (JSJS), and the Kunlun suture (KL). These sutures can be traced in the velocity maps of the upper and middle crust; the MBT, BNS, and JSJS can also be traced in the velocity maps at 250-300 km depth; and all of them can still be traced in the maps of crustal, lithospheric, and asthenospheric thickness. In particular, the MBT is clearly resolved in both the velocity and the thickness maps. (3) Since the collision between the Indian and Eurasian plates, the "loss" of surface material caused by crustal shortening has been accommodated not only by crustal thickening but also by lateral extrusion of material. The source of the lateral extrusion lies in the Qiangtang block; the material extrudes along the JSJS and BNS, with both rotation and dispersion at Daguaiwan, and finally extends toward the southeast. (4) A crust-mantle transition zone with no distinct velocity jump exists in the lithosphere beneath the Qiangtang Terrane, which has a thinner lithosphere and a well-developed, thicker asthenosphere. This implies that the partially molten crust-mantle transition zone is connected with the developed asthenosphere. Underplating of the asthenosphere may thin the lithosphere, and the resulting buoyancy might be the main mechanism and deep dynamic driver of the uplift of the Qinghai-Tibet hinterland. At the same time, transport of hot, low-velocity material intruding into the upper mantle and lower crust along cracks and faults forms the crust-mantle transition zone.


A methodology which allows a non-specialist to rapidly design silicon wavelet transform cores has been developed. This methodology is based on a generic architecture utilizing time-interleaved coefficients for the wavelet transform filters. The architecture is scalable and has been parameterized in terms of wavelet family, wavelet type, data word length, and coefficient word length. The control circuit is designed in such a way that the cores can be cascaded without any interface glue logic for any desired level of decomposition. This parameterization allows the use of any orthonormal wavelet family, thereby extending the design space for improved transformation from algorithm to silicon. Case studies for stand-alone and cascaded silicon cores, for single- and multi-stage analysis respectively, are reported. The typical design time to produce the silicon layout of a wavelet-based system has been reduced by an order of magnitude, and the cores are comparable in area and performance to hand-crafted designs. The designs have been captured in VHDL, so they are portable across a range of foundries and are also applicable to FPGA and PLD implementations.
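The cascading of single-level cores without glue logic mirrors the software view of a multi-level DWT; a minimal Haar sketch (one orthonormal family among those the generic architecture supports, function names my own):

```python
def haar_dwt_1d(signal):
    """One level of the Haar wavelet transform (orthonormal weights)."""
    assert len(signal) % 2 == 0
    s = 2 ** -0.5
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_multilevel(signal, levels):
    """Cascade the single-level core, mirroring cascaded silicon cores."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_dwt_1d(approx)
        details.append(d)
    return approx, details

approx1, detail1 = haar_dwt_1d([1.0, 1.0, 1.0, 1.0])
approx2, _ = haar_multilevel([1.0, 1.0, 1.0, 1.0], levels=2)
```

Each additional decomposition level simply feeds the approximation output back into another instance of the same core, which is exactly what the glue-logic-free cascade achieves in silicon.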


The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common, and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters and do not exploit massive parallelism. Hence, we propose the design of a multiple sequence alignment algorithm for massively parallel, distributed-memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures, including sequences and sorted k-mer lists, on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella, and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed-memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
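Sorted k-mer lists support seed matching by a linear merge of two sorted lists; a simplified sketch (my own illustration, ignoring repeated k-mers and reverse complements):

```python
def kmer_list(seq, k):
    """Sorted list of (k-mer, position) pairs for one sequence."""
    return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

def shared_kmers(seq_a, seq_b, k):
    """Positions of exact k-mer matches between two sequences.

    A merge of the two sorted lists, so the scan is linear in their lengths.
    """
    a, b = kmer_list(seq_a, k), kmer_list(seq_b, k)
    i = j = 0
    matches = []
    while i < len(a) and j < len(b):
        if a[i][0] == b[j][0]:
            matches.append((a[i][1], b[j][1]))
            i += 1
            j += 1          # simplification: pair off one match per k-mer
        elif a[i][0] < b[j][0]:
            i += 1
        else:
            j += 1
    return matches

matches = shared_kmers("ACGTACGT", "TTACGTT", 4)
```

On a distributed-memory machine, each node would hold a shard of the sorted list, and matching becomes a distributed merge, which is what keeps the per-node memory footprint small.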


In current constraint-based (Pearl-style) systems for discovering Bayesian networks, inputs with deterministic relations are prohibited. This restricts the applicability of these systems. In this paper, we formalize a sufficient condition under which Bayesian networks can be recovered even with deterministic relations. The sufficient condition leads to an improvement to Pearl’s IC algorithm; other constraint-based algorithms can be similarly improved. The new algorithm, assuming the sufficient condition proposed, is able to recover Bayesian networks with deterministic relations, and moreover suffers no loss of performance when applied to nondeterministic Bayesian networks.
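Before applying conditional-independence tests, a deterministic (functional) relation in the data can be detected directly; a minimal sketch (my own illustration of the precondition, not the paper's improved IC algorithm):

```python
def is_deterministic(data, xs, y):
    """True if column y is a function of columns xs in the dataset.

    data: list of rows, each a dict mapping column name -> value.
    """
    seen = {}
    for row in data:
        key = tuple(row[x] for x in xs)
        if key in seen and seen[key] != row[y]:
            return False          # same xs-values map to two different y-values
        seen[key] = row[y]
    return True

rows = [{"a": 0, "b": 0}, {"a": 0, "b": 0}, {"a": 1, "b": 1}]
```

A constraint-based learner can use such a check to flag deterministic inputs and then restrict which conditioning sets are admissible, which is the setting the sufficient condition in the paper formalizes.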


The detection of lane boundaries on suburban streets using images obtained from video constitutes a challenging task. This is mainly due to the difficulties associated with estimating the complex geometric structure of lane boundaries, the quality of lane markings as a result of wear, occlusions by traffic, and shadows caused by road-side trees and structures. Most of the existing techniques for lane boundary detection employ a single visual cue and will only work under certain conditions where there are clear lane markings; better results are achieved when there are no other on-road objects present. This paper extends our previous work and discusses a novel lane boundary detection algorithm that specifically addresses the above issues through the integration of two visual cues. The first visual cue is based on stripe-like features found on lane lines, extracted using a two-dimensional symmetric Gabor filter. The second visual cue is based on a texture characteristic determined using the entropy measure of a predefined neighbourhood around a lane boundary line. The visual cues are then integrated using a rule-based classifier which incorporates a modified sequential covering algorithm to improve robustness. To separate lane boundary lines from other similar features, a road mask is generated using road chromaticity values estimated from the CIE L*a*b* colour transformation. Extraneous points around lane boundary lines are then removed by an outlier-removal procedure based on studentized residuals. The lane boundary lines are then modelled with Bezier spline curves. To validate the algorithm, extensive experimental evaluation was carried out on suburban streets and the results are presented.
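The second cue, the entropy of a neighbourhood around a candidate boundary line, can be sketched as a plain Shannon entropy over patch intensities (an illustrative simplification of the paper's texture measure):

```python
import math
from collections import Counter

def patch_entropy(patch):
    """Shannon entropy (bits) of intensity values in an image patch.

    patch: list of rows of integer intensities. Low entropy suggests a
    uniform region; painted lane markings against road texture raise it.
    """
    values = [v for row in patch for v in row]
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat = patch_entropy([[5, 5], [5, 5]])      # uniform patch
mixed = patch_entropy([[0, 1], [0, 1]])     # two equally likely intensities
```

A rule-based classifier can then threshold this value per neighbourhood and combine the result with the Gabor-filter response, as the integration step in the paper does.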


The most widely used stochastic sequential simulation algorithm is sequential Gaussian simulation (sGs). In theory, stochastic methods reproduce the uncertainty space of the random variable Z(u) better as the number L of realizations grows. However, L sometimes needs to be so large that the technique becomes prohibitive. This thesis presents a more efficient strategy: the sequential Gaussian simulation algorithm was modified to increase its efficiency. Replacing the Monte Carlo method with Latin Hypercube Sampling (LHS) allows the uncertainty space of Z(u) to be characterized, for a given precision, more quickly. The proposed technique also guarantees that the whole theoretical uncertainty model is sampled, especially in its extreme tails.
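The key substitution, Latin Hypercube Sampling in place of plain Monte Carlo draws, guarantees one sample per probability stratum per dimension; a minimal sketch (my own illustration, not the thesis's modified sGs):

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n Latin Hypercube samples in [0,1)^dims: one sample per stratum per axis."""
    rng = random.Random(seed)
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)          # random pairing of strata across dimensions
        for i in range(n):
            # jitter within stratum strata[i], each stratum of width 1/n
            samples[i][d] = (strata[i] + rng.random()) / n
    return samples

pts = latin_hypercube(10, 2)
```

Because every stratum of each marginal is hit exactly once, the tails of the uncertainty model are sampled even for small L, which is the property the thesis exploits.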


The ferromagnetic and antiferromagnetic Ising model on a two-dimensional inhomogeneous lattice characterized by two exchange constants (J1 and J2) is investigated. The lattice allows, in a continuous manner, interpolation between the uniform square (J2 = 0) and triangular (J2 = J1) lattices. By performing Monte Carlo simulation using the sequential Metropolis algorithm, we calculate the magnetization and the magnetic susceptibility on lattices of different sizes. Applying the finite-size scaling method through a data collapse, we obtain the critical temperatures as well as the critical exponents of the model for several values of the parameter α = J2/J1 in the [0, 1] range. In the ferromagnetic case, the critical temperature Tc increases linearly with α. Concerning the antiferromagnetic system, we observe a linear (decreasing) behavior of Tc only for small values of α; in the range [0.6, 1], where frustration effects are more pronounced, Tc decays more quickly, possibly in a non-linear way, to the limiting value Tc = 0, corresponding to the homogeneous, fully frustrated antiferromagnetic triangular case.
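The sequential Metropolis update used in the simulations can be sketched for the uniform square lattice (the J2 = 0 case; the paper's inhomogeneous lattice adds the diagonal J2 bonds):

```python
import math
import random

def metropolis_sweep(spins, beta, rng):
    """One sequential Metropolis sweep of a 2-D Ising model (J = 1, periodic)."""
    n = len(spins)
    for i in range(n):
        for j in range(n):
            nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
                  + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
            delta_e = 2 * spins[i][j] * nb   # energy cost of flipping spin (i, j)
            if delta_e <= 0 or rng.random() < math.exp(-beta * delta_e):
                spins[i][j] *= -1

rng = random.Random(3)
spins = [[1] * 8 for _ in range(8)]
for _ in range(50):
    metropolis_sweep(spins, beta=1.0, rng=rng)   # beta well below criticality? no: low T
magnetization = abs(sum(map(sum, spins))) / 64
```

At beta = 1.0, well inside the ordered phase of the square lattice, the magnetization stays close to 1; measuring it and the susceptibility over a range of temperatures and lattice sizes is what feeds the data collapse described above.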


Graduate Program in Electrical Engineering - FEIS


This thesis addresses the problem of localization, and analyzes its crucial aspects, within the context of cooperative WSNs. The three main issues discussed in the following are network synchronization, position estimation, and tracking. Time synchronization is a fundamental requirement for every network. In this context, a new approach based on estimation theory is proposed to evaluate the ultimate performance limit of network time synchronization. In particular, the lower bound on the variance of the average synchronization error in a fully connected network is derived by taking into account the statistical characterization of the Message Delivering Time (MDT). Sensor network localization algorithms estimate the locations of sensors with initially unknown positions by using knowledge of the absolute positions of a few sensors together with inter-sensor measurements such as distances and bearings. Two main contributions are given on this position-estimation problem. The first is a new Semidefinite Programming (SDP) framework to analyze and solve the flip-ambiguity problem that afflicts range-based network localization algorithms with incomplete ranging information. The occurrence of flip-ambiguous nodes and of errors due to flip ambiguity is studied, and with this information a new SDP formulation of the localization problem is built. Finally, a flip-ambiguity-robust network localization algorithm is derived and its performance is studied by Monte Carlo simulations. The second contribution in the field of position estimation concerns multihop networks. A multihop network has a low degree of connectivity, so that a given pair of nodes may have to rely on one or more intermediate nodes (hops) in order to communicate. Two new distance-based source localization algorithms, highly robust to the distance overestimates typically present in multihop networks, are presented and studied.
The last part of this thesis discusses a new low-complexity tracking algorithm, inspired by Fano's sequential decoding algorithm, for tracking the position of a user in a WLAN-based indoor localization system.
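As background for range-based position estimation, the standard linearized least-squares trilateration step (a textbook baseline, not the thesis's SDP formulation) can be sketched:

```python
def trilaterate(anchors, distances):
    """Linear least-squares 2-D position estimate from anchor ranges.

    Subtracting the first range equation from the others turns
    ||p - a_i||^2 = d_i^2 into a linear system; the 2x2 normal
    equations are then solved by Cramer's rule.
    """
    (x0, y0), d0 = anchors[0], distances[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        ax, ay = 2 * (xi - x0), 2 * (yi - y0)
        rhs = d0 ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2
        a11 += ax * ax
        a12 += ax * ay
        a22 += ay * ay
        b1 += ax * rhs
        b2 += ay * rhs
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

x, y = trilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
                   [5 ** 0.5, 13 ** 0.5, 5 ** 0.5])
```

With incomplete ranging this least-squares solution can flip a node across a line through its anchors, which is exactly the ambiguity the thesis's SDP framework is built to detect and avoid.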