994 results for Weak Greedy Algorithms
Abstract:
Two biological aerated filters (BAFs) were set up for ammonia removal from the circulating water of a marine aquaculture system. One of the BAFs was bioaugmented with a heterotrophic nitrifying bacterium, Lutimonas sp. H10, but ammonia removal was not improved, and the massive inoculation was even followed by a nitrification breakdown from day 9 to 18. Nitrification remained stable in the control BAF operated under the same conditions. Fluorescence in situ hybridization (FISH) with rRNA-targeted probes and a cultivation-based method revealed that Lutimonas sp. H10 almost disappeared from the bioaugmented BAF within 3 days, mainly due to infection by a specific phage, as revealed by flask experiments, plaque assays and transmission electron microscopy. Analyses of 16S rRNA gene libraries showed that the bacterial communities of the two reactors evolved differently, and an overgrowth of protozoa was observed in the bioaugmented BAF. Therefore, phage infection and the poor biofilm-forming ability of the inoculated strain are the main reasons for the bioaugmentation failure. In addition, protozoan grazing on the bacteria might be the reason for the nitrification breakdown in the bioaugmented BAF during days 9-18.
Abstract:
Many problems ultimately reduce to solving a combinatorial optimization problem, and the genetic algorithm (GA) is a powerful tool for such problems; in applications, however, GAs often suffer from slow convergence and closed competition. This paper proposes a greedy genetic algorithm in which a greedy selection strategy guides the search during initial population construction, crossover and mutation, while an immigration operation introduces new genetic material into the population to overcome the closed-competition drawback. The greedy genetic algorithm avoids premature convergence and improves overall performance, and the search is especially efficient in its opening phase. Simulation experiments on the travelling salesman problem (TSP) demonstrate the effectiveness of the algorithm, which obtains satisfactory results at a comparatively small computational cost.
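Since the abstract gives only a high-level description, here is a minimal Python sketch of two of the ingredients it mentions, greedy (nearest-neighbour) seeding of the population and the immigration operator; the function names, the distance-matrix representation and the parameter defaults are illustrative assumptions, not taken from the paper.

```python
import random

def greedy_tour(dist, start=0):
    """Greedy (nearest-neighbour) construction used to seed the population."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def immigrate(population, dist, k=2):
    """Immigration: replace the k worst tours with random newcomers,
    injecting fresh genetic material to break closed competition."""
    population.sort(key=lambda t: tour_length(dist, t))
    n = len(dist)
    for i in range(1, k + 1):
        newcomer = list(range(n))
        random.shuffle(newcomer)
        population[-i] = newcomer
    return population
```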
Abstract:
As the largest and highest plateau on the Earth, the Tibetan Plateau has been a key location for understanding the processes of mountain building and plateau formation during the India-Asia continent-continent collision. As the front-end of the collision, the eastern Tibetan Plateau is geologically very complex, making it an ideal natural laboratory for investigating the formation and evolution of the plateau. The Institute of Geophysics, Chinese Academy of Sciences (CAS) carried out a magnetotelluric (MT) survey from XiaZayii to Qingshuihe in the eastern part of the plateau in 1998. After error analysis and distortion analysis, the Non-linear Conjugate Gradient (NLCG), Rapid Relaxation Inversion (RRI) and 2D OCCAM inversion algorithms were used to invert the data. The models obtained from the three algorithms show similar electrical structure, and the NLCG model fit the observed data better than the other two. Analysis of skin depth shows that the exploration depth of MT in Tibet is much shallower than in stable continental regions; for example, the Schmucker depth at a period of 100 s is less than 50 km in Tibet but more than 100 km in the Canadian Shield. There is a high conductivity layer at a depth of several kilometers beneath the middle Qiangtang terrane, and of almost 30 kilometers beneath the northern Qiangtang terrane. Sensitivity analysis of the data indicates that the depth and resistivity of this crustal high conductivity layer are reliable. The MT results reveal a high conductivity layer at 20-40 km depth, where seismic data show a low velocity zone. Experiments show that rock dehydrates and partially melts at the relevant temperatures and pressures, and fluids originating from dehydration and partial melting seriously change the rheological characteristics of rock. This crustal layer of low velocity and high conductivity is therefore a weak layer. Seismological results also show a low velocity path at 90-110 km depth beneath the southeastern Tibetan Plateau and adjacent areas, and analysis of the temperature and rheological properties of the lithosphere shows that this low velocity path is weak as well. GPS measurements and numerical simulations of crust-mantle deformation show that movement rates differ between terranes. The regional strikes derived from decomposition analysis in different frequency bands, together with seismic anisotropy, indicate that the crust and upper mantle move separately rather than as a whole, and that there is material flow in the eastern and southeastern Tibetan Plateau. Therefore, the faults and the crustal and upper-mantle weak layers constitute three different boundaries for relative movement. These results support the "two layer wedge plates" geodynamic model of the formation and evolution of the Tibetan Plateau.
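The skin-depth argument above follows from the standard magnetotelluric approximation δ ≈ 0.503·sqrt(ρT) km, for resistivity ρ in Ω·m and period T in seconds. A small sketch, with illustrative round-number resistivities rather than the survey's actual values:

```python
import math

def mt_skin_depth_km(resistivity_ohm_m, period_s):
    """Standard MT skin-depth approximation: delta ~ 0.503 * sqrt(rho * T) km."""
    return 0.503 * math.sqrt(resistivity_ohm_m * period_s)

# Illustrative values only: a conductive crust vs. a resistive shield at T = 100 s.
print(mt_skin_depth_km(10.0, 100.0))    # ~15.9 km: shallow penetration
print(mt_skin_depth_km(1000.0, 100.0))  # ~159 km: deep penetration
```

A conductive crust thus limits MT exploration depth, consistent with the shallow Schmucker depths observed in Tibet.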
Abstract:
Several algorithms for optical flow are studied theoretically and experimentally. Differential and matching methods are examined; these two methods have differing domains of application: differential methods are best when displacements in the image are small (<2 pixels), while matching methods work well for moderate displacements but do not handle sub-pixel motions. Both types of optical flow algorithm can use either local or global constraints, such as spatial smoothness. Local matching and differential techniques and global differential techniques are examined. Most algorithms for optical flow rely on weak assumptions about the local variation of the flow and the variation of image brightness. Strengthening these assumptions improves the flow computation, at the computational cost of requiring larger spatial and temporal support. Global differential approaches can be extended to local (patchwise) differential methods and to local differential methods using higher derivatives. Using larger support is valid when constraints on the local shape of the flow are satisfied. We show that a simple constraint on the local shape of the optical flow, that it varies slowly across the image plane, is often satisfied. We show how local differential methods imply the constraints for related methods using higher derivatives. Experiments show the behavior of these optical flow methods on velocity fields which do not obey the assumptions. Implementation of these methods highlights the importance of numerical differentiation. Numerical approximation of derivatives requires care in two respects: first, the temporal and spatial derivatives must be matched, because of the significant scale differences in space and time; second, derivative estimates improve with larger support.
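As a concrete instance of the local differential methods discussed above, here is a minimal least-squares sketch in the Lucas-Kanade style, assuming two grayscale frames held as numpy arrays; the window size and the derivative stencils are illustrative choices, not the thesis's implementation.

```python
import numpy as np

def local_flow(frame0, frame1, y, x, half=7):
    """Solve [Ix Iy] [u v]^T = -It by least squares over a (2*half+1)^2 window.
    Central differences in space, forward difference in time; a careful
    implementation would match these scales, as noted above."""
    f0, f1 = frame0.astype(float), frame1.astype(float)
    Iy, Ix = np.gradient(f0)         # spatial derivatives (rows, then columns)
    It = f1 - f0                     # temporal derivative
    win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```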
Abstract:
Early and intermediate vision algorithms, such as smoothing and discontinuity detection, are often implemented on general-purpose serial and, more recently, parallel computers. Special-purpose hardware implementations of low-level vision algorithms may be needed to achieve real-time processing. This memo reviews and analyzes some hardware implementations of low-level vision algorithms. Two types of hardware implementation are considered: the digital signal processing chips of Ruetz and Broderson, and the analog VLSI circuits of Carver Mead. The advantages and disadvantages of these two approaches for producing a general, real-time vision system are considered.
Abstract:
Model-based object recognition commonly involves using a minimal set of matched model and image points to compute the pose of the model in image coordinates. Furthermore, recognition systems often rely on the "weak-perspective" imaging model in place of the perspective imaging model. This paper discusses computing the pose of a model from three corresponding points under weak-perspective projection. A new solution to the problem is proposed which, like previous solutions, involves solving a biquadratic equation. Here the biquadratic is motivated geometrically and its solutions, comprising an actual and a false solution, are interpreted graphically. The final equations take a new form, which leads to a simple expression for the image position of any unmatched model point.
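For reference, a minimal sketch of the weak-perspective imaging model the paper relies on: rotate into the camera frame, drop depth orthographically, scale uniformly, and translate in the image plane. Recovering R, s and the 2-D translation from three matched points is the biquadratic problem solved in the paper and is not reproduced here; the names below are illustrative.

```python
import numpy as np

def weak_perspective_project(points_3d, R, s, t2):
    """Weak-perspective projection of an (n, 3) array of model points.
    A good approximation to true perspective when the depth variation of
    the object is small relative to its distance from the camera."""
    cam = points_3d @ R.T        # rotate model points into the camera frame
    return s * cam[:, :2] + t2   # drop Z, then uniform scale and 2-D shift
```

Once a pose (R, s, t2) has been recovered, this same map gives the image position of any unmatched model point, which is the role of the "simple expression" mentioned above.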
Abstract:
This thesis investigates a new approach to lattice basis reduction suggested by M. Seysen. Seysen's algorithm attempts to globally reduce a lattice basis, whereas the Lenstra-Lenstra-Lovasz (LLL) family of reduction algorithms concentrates on local reductions. We show that Seysen's algorithm is well suited for reducing certain classes of lattice bases, and often requires much less time in practice than the LLL algorithm. We also demonstrate how Seysen's algorithm for basis reduction may be applied to subset sum problems. Seysen's technique, used in combination with the LLL algorithm and other heuristics, enables us to solve a much larger class of subset sum problems than was previously possible.
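To make the subset-sum connection concrete, here is a minimal sketch of the standard lattice formulation whose basis a reduction algorithm such as Seysen's or LLL would then be run on; the scaling constant N is an illustrative choice, and the exact basis used in the thesis may differ.

```python
def subset_sum_basis(weights, target, N=None):
    """Rows of a lattice basis for the instance sum(a_i * x_i) = target.
    A 0/1 combination x of the first n rows minus the last row gives
    (x_1, ..., x_n, N * (sum(a_i * x_i) - target)), which is short
    exactly when the subset sums to the target."""
    n = len(weights)
    N = N or (n + 1) * max(weights)  # large enough to penalize missed targets
    basis = [[1 if j == i else 0 for j in range(n)] + [N * weights[i]]
             for i in range(n)]
    basis.append([0] * n + [N * target])
    return basis
```

A reduced basis containing a vector of the form (x_1, ..., x_n, 0) with 0/1 entries then reads off a solution directly.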
Abstract:
Iron-substituted SBA-15 (Fe-SBA-15) materials have been synthesized via a simple direct hydrothermal method under weakly acidic conditions. Powder X-ray diffraction (XRD), N2 sorption and transmission electron microscopy (TEM) characterizations show that the resultant materials have well-ordered hexagonal mesostructures. Diffuse reflectance UV-vis and UV resonance Raman spectroscopy show that most of the iron ions exist as isolated framework species in calcined materials when the Fe/Si molar ratio in the gel is below 0.01. The presence of iron species also has a significant salt effect that can greatly improve the ordering of the mesoporous structure. Different iron species, including isolated framework iron species, extra-framework iron clusters and iron oxides, are formed selectively by adjusting the pH values of the synthesis solutions and the Fe/Si molar ratios.
Abstract:
Neal M J, Boyce D, Rowland J J, Lee M H, and Olivier P L. Robotic grasping by showing: an experimental comparison of two novel algorithms. In Proceedings of IFAC - SICICA'97, pages 345-350, Annecy, France, 1997.
Abstract:
M. Galea and Q. Shen. Simultaneous ant colony optimisation algorithms for learning linguistic fuzzy rules. In A. Abraham, C. Grosan and V. Ramos (Eds.), Swarm Intelligence in Data Mining, pages 75-99.
Abstract:
R. Jensen and Q. Shen, 'Fuzzy-Rough Feature Significance for Fuzzy Decision Trees,' in Proceedings of the 2005 UK Workshop on Computational Intelligence, pp. 89-96, 2005.
Abstract:
The Google AdSense Program is a successful internet advertisement program in which Google places contextual adverts on third-party websites and shares the resulting revenue with each publisher. Advertisers have budgets and bid on ad slots while publishers set reserve prices for the ad slots on their websites. Following previous modelling efforts, we model the program as a two-sided market with advertisers on one side and publishers on the other. We show a reduction from the Generalised Assignment Problem (GAP) to the problem of computing the revenue-maximising allocation and pricing of publisher slots under a first-price auction. GAP is APX-hard but a (1-1/e) approximation is known. We compute truthful and revenue-maximising prices and allocation of ad slots to advertisers under a second-price auction. The auctioneer's revenue is within a factor (1-1/e) of the second-price optimal.
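For a single slot, the second-price rule with a publisher reserve can be sketched as follows; budgets and the multi-slot allocation treated in the paper are omitted, and all names are illustrative.

```python
def second_price_allocate(bids, reserve):
    """One slot: the highest bid wins if it meets the reserve, and the winner
    pays max(second-highest eligible bid, reserve); under this rule truthful
    bidding is a dominant strategy."""
    eligible = sorted((b for b in bids.values() if b >= reserve), reverse=True)
    if not eligible:
        return None, 0.0  # reserve not met: slot goes unsold
    winner = max(bids, key=bids.get)
    runner_up = eligible[1] if len(eligible) > 1 else 0.0
    return winner, max(runner_up, reserve)

# Bids of 5, 3 and 1 with a reserve of 2: "a" wins and pays 3.
print(second_price_allocate({"a": 5.0, "b": 3.0, "c": 1.0}, 2.0))
```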
Abstract:
For communication-intensive parallel applications, the maximum degree of concurrency achievable is limited by the communication throughput made available by the network. In previous work [HPS94], we showed experimentally that the performance of certain parallel applications running on a workstation network can be improved significantly if a congestion control protocol is used to enhance network performance. In this paper, we characterize and analyze the communication requirements of a large class of supercomputing applications that fall under the category of fixed-point problems, amenable to solution by parallel iterative methods. This yields a set of interface and architectural features sufficient for the efficient implementation of the applications over a large-scale distributed system. In particular, we propose a direct link between the application and network layers, supporting congestion control actions at both ends. This in turn enhances the system's responsiveness to network congestion, improving performance. Measurements are given showing the efficacy of our scheme in supporting large-scale parallel computations.
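The fixed-point structure shared by these applications can be sketched as a synchronous Jacobi iteration for Ax = b; in the setting above, each block of x would live on a different workstation, and the per-iteration exchange of updated entries is precisely the communication step where congestion control matters. An illustrative sketch, not code from the paper:

```python
import numpy as np

def jacobi(A, b, iters=100, tol=1e-8):
    """Fixed-point iteration x <- D^{-1} (b - R x). In a distributed run,
    each pass ends with an all-to-all exchange of the updated entries of x,
    which is the communication-intensive step."""
    D = np.diag(A)               # diagonal of A
    R = A - np.diagflat(D)       # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:  # global convergence test
            return x_new
        x = x_new
    return x
```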
Abstract:
Programmers of parallel processes that communicate through shared globally distributed data structures (DDS) face a difficult choice. Either they must explicitly program DDS management, by partitioning or replicating it over multiple distributed memory modules, or be content with a high-latency coherent (sequentially consistent) memory abstraction that hides the DDS's distribution. We present Mermera, a new formalism and system that enable a smooth spectrum of noncoherent shared memory behaviors to coexist between the above two extremes. Our approach allows us to define known noncoherent memories in a new, simple way, to identify new memory behaviors, and to characterize generic mixed-behavior computations. The latter are useful for programming with multiple behaviors that complement each other's advantages. On the practical side, we show that the large class of programs that use asynchronous iterative methods (AIM) can run correctly on slow memory, one of the weakest, and hence most efficient and fault-tolerant, noncoherence conditions. An example AIM program to solve linear equations is developed to illustrate: (1) the need for concurrently mixing memory behaviors, and (2) the performance gains attainable via noncoherence. Other program classes tolerate weak memory consistency by synchronizing in such a way as to yield executions indistinguishable from coherent ones. AIM computations on noncoherent memory yield noncoherent, yet correct, computations. We report performance data that exemplifies the potential benefits of noncoherence, in terms of raw memory performance as well as application speed.
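A minimal sketch of the asynchronous-iterative-methods idea described above: each worker repeatedly updates its own entry of x from possibly stale values of the others, with no synchronization between updates, which is exactly the tolerance that lets AIM programs run correctly on slow memory. The threading details are illustrative and are not Mermera's interface.

```python
import threading
import numpy as np

def async_jacobi(A, b, iters=200):
    """Each thread owns one coordinate of x and reads the shared vector
    without locks, so it may observe stale values; for strictly diagonally
    dominant A this chaotic relaxation still converges to the solution."""
    n = len(b)
    x = np.zeros(n)  # shared state, read and written noncoherently

    def worker(i):
        for _ in range(iters):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)  # stale reads allowed
            x[i] = (b[i] - s) / A[i, i]                          # unsynchronized write

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```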