940 results for Graph Colourings


Relevance:

10.00%

Publisher:

Abstract:

This paper briefly introduces an expert system for automatic NC (numerical control) programming. It covers the knowledge-representation formalism of the expert system; a hierarchical blackboard architecture; a forward-chaining inference strategy with a corresponding explanation facility; and the use of separate, independent knowledge sources (KSs) to process different types of curve combinations. By applying encoding techniques to knowledge processing, and heuristic information and pruning in the forward-chaining strategy, the time and space efficiency of the system is improved. The planner in the system automatically plans the cutting path, outputs NC code for use on CNC lathes, and provides on-screen graphical display and cutting simulation. A prototype system has been implemented in FORTRAN on IBM-PC and Sun3/60 computers.
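The forward-chaining strategy described above can be sketched in a few lines. This is a generic illustration, not the original FORTRAN system; the facts and rule names (curve types, knowledge-source labels) are hypothetical.

```python
# Generic forward-chaining sketch. The facts and rules below are
# hypothetical illustrations, not taken from the original system.
def forward_chain(facts, rules):
    """Fire every rule whose premises are all known, until fixpoint.

    `rules` is a list of (premises, conclusion) pairs, where premises
    is a frozenset of facts.  A real system would add heuristic rule
    ordering and pruning to cut the search, as the abstract describes.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules: route a recognized curve combination to a
# dedicated knowledge source (KS).
rules = [
    (frozenset({"arc-segment", "line-segment"}), "arc-line-combination"),
    (frozenset({"arc-line-combination"}), "dispatch-to-KS2"),
]
derived = forward_chain({"arc-segment", "line-segment"}, rules)
```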

Relevance:

10.00%

Publisher:

Abstract:

An open CNC system based on a PC and a multi-axis motion controller is the ideal form of open numerical control. This paper introduces the architecture of an open CNC system based on PMAC, including PMAC's interpolation, position-control, and servo functions. A CNC system was built on a hardware platform consisting of a PMAC card and a PC, and its hardware configuration and software design are analyzed. From the software-design perspective, the paper describes the function and role of the PTALK component and details the structure of the CNC system software. A user-friendly interface was also designed, which is of practical significance in real applications.

Relevance:

10.00%

Publisher:

Abstract:

The second round of oil and gas exploration demands more precise imaging methods, velocity-depth models, and geometric descriptions of complicated geological bodies. Prestack time migration in inhomogeneous media is the technical basis of velocity analysis, prestack time migration from rugged surfaces, angle gathers, and multi-domain noise suppression. Realizing this technique requires solving several critical problems, such as parallel computation, velocity algorithms on non-uniform grids, and visualization; the key is an organic combination of migration theory and computational geometry. Addressing the technical problems of 3-D prestack time migration in inhomogeneous media and the requirements of non-uniform grids, parallel processing, and visualization, this thesis systematically studies three aspects, combining integral migration theory with computational geometry: the computational infrastructure for Green-function traveltime with laterally varying velocity on non-uniform grids; parallel computation of Kirchhoff integral migration; and 3-D visualization. The results provide technical support for implementing prestack time migration and a convenient computational infrastructure for wavenumber-domain simulation in inhomogeneous media. The main results are as follows: 1. The symbol of the one-way wave Lie-algebra integral and the phase and Green-function traveltime expressions were analyzed, and simple 2-D time-domain expressions for them were derived in inhomogeneous media using the exponential map of pseudo-differential operators and structure-preserving Lie-group algorithms. A computational infrastructure of five parts (derivatives, commutators, the Lie-algebra root tree, the exponential-map root tree, and traveltime coefficients) was proposed for computing the 3-D asymmetric traveltime equation containing lateral derivatives. 2. Building on this infrastructure and computational geometry, a method was developed for constructing a velocity library and interpolating over it by triangulation, meeting the traveltime-computation requirements of parallel time migration and velocity estimation. 3. Combining velocity-library triangulation with computational geometry, a structure was built that is convenient for computing horizontal derivatives, commutators, and vertical integrals; a recursive algorithm for the Lie-algebra integral and exponential-map root tree (the Magnus expansion, in mathematical terms) was constructed, and the asymmetric-traveltime algorithm based on lateral derivatives was implemented. 4. Based on graph theory and computational geometry, a minimum-cycle method for decomposing a region into polygonal blocks was proposed as a topological representation of migration results, providing a practical approach to block representation and to the study of migration interpretation. 5. Based on the MPI library, a parallel migration algorithm over traces in arbitrary order was implemented, using the lateral-derivative asymmetric traveltime computation and the Kirchhoff integral method. 6. Visualization of geological and seismic data was studied with OpenGL and Open Inventor on the basis of computational geometry, and a 3-D visualization system for seismic imaging data was designed.

Relevance:

10.00%

Publisher:

Abstract:

Population research, and in particular the spatial visualization of population data and the design of geo-visualization systems, is a frontier area of domestic and international interest: it provides a sound basis for understanding and analyzing regional differences in population distribution and their spatial patterns. With the development of GIS, geo-visualization theory plays an increasingly important role in many research fields, especially in population-information visualization, where substantial achievements have been made recently. Nevertheless, current research pays little attention to the design of statistical-geo-visualization systems for population information. This paper explores design theories and methodologies for such a system, focusing on the framework, methodologies, and techniques for system design and construction. The goal is a platform for a population atlas, built by integrating the research group's existing proprietary statistical mapping software. As a modern tool, the system provides a spatial visual environment in which users can analyze the characteristics of population distribution and the interrelations among population components. First, the paper discusses the necessity of geo-visualization for population information and, based on an analysis of domestic and international trends, identifies the key issues in designing a statistical-geo-visualization system. Second, the design of the system, including its structure, functionality, modules, and user interface, is studied on the basis of geo-visualization theory and technology. The design divides the system into three parts: a support layer, a technical layer, and a user layer. The support layer is the basic operational module and the main body of the system. The technical layer is the core of the system, supported by database and function modules. The database module comprises the integrated population database (spatial data, attribute data, and geographical-feature information), the cartographic symbol library, the color library, and the statistical-analysis models. The function module consists of components for thematic-map making, statistical-graph making, database management, and statistical analysis. The user layer is an integrated platform with a visual interface through which users can query, analyze, and manage the statistical data and the electronic maps. On this basis, China's electronic population atlas was designed and developed by integrating the fifth national census data with 1:4,000,000-scale spatial data. The atlas illustrates the current state of China's population with about 200 thematic maps in 10 categories (environment, population distribution, sex and age, migration, nationality, family and marriage, birth, education, employment, and housing). As a scientific reference tool, the atlas has been highly regarded since its publication in early 2005. Finally, the paper analyzes the sex ratio in China in depth, showing how the system's functions can be used to study a specific population problem and to mine the data. The analysis shows that: 1. the sex ratio has increased in many regions since the fourth census in 1990, except in the cities of the eastern region, and high sex ratios are concentrated in hilly and low-mountain areas with high illiteracy and poverty rates; 2. the statistical-geo-visualization system is a powerful tool for handling population information, capable of reflecting regional differences and variations in China's population and indicating the interrelations between population and environmental factors. Although the author attempts an integrated design framework for the statistical-geo-visualization system, many problems remain to be resolved as geo-visualization research develops.

Relevance:

10.00%

Publisher:

Abstract:

We introduce a new learning problem: learning a graph by piecemeal search, in which the learner must return every so often to its starting point (for refueling, say). We present two linear-time piecemeal-search algorithms for learning city-block graphs: grid graphs with rectangular obstacles.
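A naive version of the piecemeal constraint can be sketched as follows: the learner explores in trips of bounded length, each starting and ending at the start vertex. This greedy out-and-back strategy only illustrates the problem setting; the paper's algorithms are far more efficient (linear time) on city-block graphs.

```python
from collections import deque

def piecemeal_trips(adj, start, budget):
    """Group vertices into exploration trips of at most `budget` edge
    traversals each, where every trip begins and ends at `start`.

    Naive strategy for illustration only: each new vertex v is visited
    by walking out and back along a shortest path (cost 2 * dist(v)).
    Requires budget >= 2 * max distance from start.
    """
    dist, order = {start: 0}, []
    queue = deque([start])
    while queue:                      # plain BFS from the start vertex
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                order.append(v)
                queue.append(v)
    trips, trip, cost = [], [], 0
    for v in order:
        c = 2 * dist[v]               # out-and-back cost for v
        if trip and cost + c > budget:
            trips.append(trip)
            trip, cost = [], 0
        trip.append(v)
        cost += c
    if trip:
        trips.append(trip)
    return trips

# A single city block (2x2 grid): with budget 4, the two adjacent
# corners share a trip and the far corner needs its own.
grid = {(0, 0): [(0, 1), (1, 0)], (0, 1): [(0, 0), (1, 1)],
        (1, 0): [(0, 0), (1, 1)], (1, 1): [(0, 1), (1, 0)]}
trips = piecemeal_trips(grid, (0, 0), budget=4)
```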

Relevance:

10.00%

Publisher:

Abstract:

Chow and Liu introduced an algorithm for fitting a multivariate distribution with a tree, i.e. a density model that assumes only pairwise dependencies between variables, whose dependency graph forms a spanning tree. The original algorithm is quadratic in the dimension of the domain and linear in the number of data points that define the target distribution $P$. This paper shows that for sparse, discrete data, fitting a tree distribution can be done in time and memory jointly subquadratic in the number of variables and the size of the data set. The new algorithm, called the acCL algorithm, exploits the sparsity of the data to accelerate the computation of pairwise marginals and the sorting of the resulting mutual informations, achieving speedups of up to 2-3 orders of magnitude in the experiments.
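The original (quadratic) Chow-Liu procedure that the acCL algorithm accelerates can be sketched in a few lines: estimate pairwise mutual informations from the data, then take a maximum-weight spanning tree. This illustrates only the baseline, not the sparse-data speedups.

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two data columns."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    # sum_{a,b} p(a,b) * log(p(a,b) / (p(a) p(b)))
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def chow_liu_edges(columns):
    """Maximum-weight spanning tree over the variables, weighted by
    pairwise mutual information (Kruskal with union-find).  Quadratic
    in the number of variables, as in Chow and Liu's original method."""
    d = len(columns)
    weighted = sorted(
        ((mutual_information(columns[i], columns[j]), i, j)
         for i, j in combinations(range(d), 2)),
        reverse=True)
    parent = list(range(d))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    edges = []
    for _, i, j in weighted:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges

# Toy data: variable 1 copies variable 0, variable 2 is unrelated,
# so the learned tree must contain the edge (0, 1).
cols = [[0, 0, 1, 1, 0, 1],
        [0, 0, 1, 1, 0, 1],
        [0, 1, 0, 1, 1, 0]]
tree = chow_liu_edges(cols)
```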

Relevance:

10.00%

Publisher:

Abstract:

A common objective in learning a model from data is to recover its network structure, while the model parameters are of minor interest. For example, we may wish to recover regulatory networks from high-throughput data sources. In this paper we examine how Bayesian regularization using a Dirichlet prior over the model parameters affects the learned model structure in a domain with discrete variables. Surprisingly, a weak prior in the sense of smaller equivalent sample size leads to a strong regularization of the model structure (sparse graph) given a sufficiently large data set. In particular, the empty graph is obtained in the limit of a vanishing strength of prior belief. This is diametrically opposite to what one may expect in this limit, namely the complete graph from an (unregularized) maximum likelihood estimate. Since the prior affects the parameters as expected, the prior strength balances a "trade-off" between regularizing the parameters or the structure of the model. We demonstrate the benefits of optimizing this trade-off in the sense of predictive accuracy.
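The effect described above can be reproduced with a small sketch of the BDeu marginal-likelihood score (the standard Dirichlet-based score for discrete Bayesian networks); the data below are made up for illustration. With weakly dependent binary data, a small equivalent sample size makes the score prefer the empty graph over the edge X → Y, while a larger one prefers the edge.

```python
from math import lgamma
from collections import Counter

def bdeu_family_score(child, parents, rows, ess):
    """Log marginal likelihood (BDeu) of one binary node given its
    binary parents; `rows` is a list of dicts, `ess` the equivalent
    sample size of the Dirichlet prior."""
    q = 2 ** len(parents)            # number of parent configurations
    a_j, a_jk = ess / q, ess / (2 * q)
    n_jk = Counter((tuple(r[p] for p in parents), r[child]) for r in rows)
    n_j = Counter(tuple(r[p] for p in parents) for r in rows)
    score = 0.0
    for j, n in n_j.items():
        score += lgamma(a_j) - lgamma(a_j + n)
        for k in (0, 1):
            score += lgamma(a_jk + n_jk[(j, k)]) - lgamma(a_jk)
    return score

# 40 made-up samples in which Y agrees with X 70% of the time.
rows = ([{"X": 0, "Y": 0}] * 14 + [{"X": 0, "Y": 1}] * 6 +
        [{"X": 1, "Y": 1}] * 14 + [{"X": 1, "Y": 0}] * 6)

def edge_preferred(ess):
    """Does the score favor X -> Y over the empty graph?  (The score
    of X itself is the same in both structures, so it cancels.)"""
    return (bdeu_family_score("Y", ["X"], rows, ess) >
            bdeu_family_score("Y", [], rows, ess))
```

With this data, `edge_preferred(0.1)` is false while `edge_preferred(10.0)` is true, matching the paper's observation that a vanishing prior strength drives the structure toward the empty graph.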

Relevance:

10.00%

Publisher:

Abstract:

Weighted graph matching is a good way to align a pair of shapes represented by a set of descriptive local features; the set of correspondences produced by the minimum cost of matching features from one shape to the features of the other often reveals how similar the two shapes are. However, due to the complexity of computing the exact minimum cost matching, previous algorithms could only run efficiently when using a limited number of features per shape, and could not scale to perform retrievals from large databases. We present a contour matching algorithm that quickly computes the minimum weight matching between sets of descriptive local features using a recently introduced low-distortion embedding of the Earth Mover's Distance (EMD) into a normed space. Given a novel embedded contour, the nearest neighbors in a database of embedded contours are retrieved in sublinear time via approximate nearest neighbors search. We demonstrate our shape matching method on databases of 10,000 images of human figures and 60,000 images of handwritten digits.
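For intuition about embedding EMD into a normed space, the simplest special case: for one-dimensional histograms of equal mass, EMD is exactly the L1 distance between cumulative sums. This toy case is not the low-distortion embedding used in the paper, which handles point sets in the plane.

```python
from itertools import accumulate

def emd_1d(h1, h2):
    """EMD between two equal-mass 1-D histograms: the L1 distance
    between their cumulative sums.  Moving one unit of mass by one
    bin costs one unit of work."""
    return sum(abs(a - b) for a, b in zip(accumulate(h1), accumulate(h2)))

# Moving one unit of mass across two bins costs 2.
cost = emd_1d([1, 0, 0], [0, 0, 1])
```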

Relevance:

10.00%

Publisher:

Abstract:

We seek to both detect and segment objects in images. To exploit both local image data and contextual information, we introduce Boosted Random Fields (BRFs), which use Boosting to learn the graph structure and local evidence of a conditional random field (CRF). The graph structure is learned by assembling graph fragments in an additive model. The connections between individual pixels are not very informative, but by using dense graphs, we can pool information from large regions of the image; dense models also support efficient inference. We show how contextual information from other objects can improve detection performance, both in terms of accuracy and speed, by using a computational cascade. We apply our system to detect stuff and things in office and street scenes.

Relevance:

10.00%

Publisher:

Abstract:

We present a constant-factor approximation algorithm for computing an embedding of the shortest path metric of an unweighted graph into a tree that minimizes the multiplicative distortion.
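To make "multiplicative distortion" concrete, a small sketch (not the paper's algorithm): compare shortest-path distances in the graph with distances in a candidate tree and take the worst ratio. On a 4-cycle, any spanning tree already incurs distortion 3.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from `source`."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def multiplicative_distortion(adj, tree_adj):
    """max over vertex pairs of d_tree(u, v) / d_graph(u, v).  For a
    spanning tree (a subgraph), distances never shrink, so this ratio
    is the distortion of the embedding."""
    worst = 1.0
    for s in adj:
        dg, dt = bfs_distances(adj, s), bfs_distances(tree_adj, s)
        for v in adj:
            if v != s:
                worst = max(worst, dt[v] / dg[v])
    return worst

# 4-cycle vs. the spanning tree obtained by deleting edge (3, 0):
# vertices 0 and 3 are adjacent in the cycle but 3 apart in the tree.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
distortion = multiplicative_distortion(cycle, tree)
```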

Relevance:

10.00%

Publisher:

Abstract:

We constructed a parallelizing compiler that utilizes partial evaluation to produce efficient parallel object code from very high-level, data-independent source programs. On several important scientific applications, the compiler attains parallel performance equivalent to or better than the best observed results from the manual restructuring of code. This is the first attempt to capitalize on partial evaluation's ability to expose low-level parallelism. New static scheduling techniques are used to exploit the fine-grained parallelism of the computations. The compiler maps the computation graph resulting from partial evaluation onto the Supercomputer Toolkit, a parallel computer of eight VLIW processors.
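The core idea of partial evaluation, in miniature (an illustrative sketch, unrelated to the Toolkit compiler's actual implementation): specialize an expression tree with respect to the inputs known at compile time, folding constants and leaving a residual computation graph for the rest.

```python
def partial_eval(expr, env):
    """Specialize a nested-tuple expression against known inputs.

    Expressions are ("const", v), ("var", name), or (op, left, right)
    with op in {"add", "mul"}.  Subtrees whose inputs are all known
    fold to constants; the rest remain as residual code.
    """
    op = expr[0]
    if op == "const":
        return expr
    if op == "var":
        name = expr[1]
        return ("const", env[name]) if name in env else expr
    left = partial_eval(expr[1], env)
    right = partial_eval(expr[2], env)
    if left[0] == "const" and right[0] == "const":
        fold = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
        return ("const", fold[op](left[1], right[1]))
    return (op, left, right)

# (1 + 2) * x: the constant subtree folds even while x is unknown,
# and the whole expression folds once x is supplied.
e = ("mul", ("add", ("const", 1), ("const", 2)), ("var", "x"))
residual = partial_eval(e, {})        # ("mul", ("const", 3), ("var", "x"))
folded = partial_eval(e, {"x": 4})    # ("const", 12)
```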

Relevance:

10.00%

Publisher:

Abstract:

This report describes a program which automatically characterizes the behavior of any driven, nonlinear, electrical circuit. To do this, the program autonomously selects interesting input parameters, drives the circuit, measures its response, performs a set of numeric computations on the measured data, interprets the results, and decomposes the circuit's parameter space into regions of qualitatively distinct behavior. The output is a two-dimensional portrait summarizing the high-level, qualitative behavior of the circuit for every point in the graph, an accompanying textual explanation describing any interesting patterns observed in the diagram, and a symbolic description of the circuit's behavior which can be passed on to other programs for further analysis.

Relevance:

10.00%

Publisher:

Abstract:

The key to understanding a program is recognizing familiar algorithmic fragments and data structures in it. Automating this recognition process will make it easier to perform many tasks which require program understanding, e.g., maintenance, modification, and debugging. This report describes a recognition system, called the Recognizer, which automatically identifies occurrences of stereotyped computational fragments and data structures in programs. The Recognizer is able to identify these familiar fragments and structures, even though they may be expressed in a wide range of syntactic forms. It does so systematically and efficiently by using a parsing technique. Two important advances have made this possible. The first is a language-independent graphical representation for programs and programming structures which canonicalizes many syntactic features of programs. The second is an efficient graph parsing algorithm.

Relevance:

10.00%

Publisher:

Abstract:

This report describes research about flow graphs: labeled, directed, acyclic graphs that abstract the representations used in a variety of Artificial Intelligence applications. Flow graphs may be derived from flow grammars much as strings may be derived from string grammars; this derivation process forms a useful model for the stepwise refinement processes used in programming and other engineering domains. The central result of this report is a parsing algorithm for flow graphs. Given a flow grammar and a flow graph, the algorithm determines whether the grammar generates the graph and, if so, finds all possible derivations for it. The author has implemented the algorithm in LISP. The intent of this report is to make flow-graph parsing available as an analytic tool for researchers in Artificial Intelligence. The report explores the intuitions behind the parsing algorithm, contains numerous, extensive examples of its behavior, and provides some guidance for those who wish to customize the algorithm to their own uses.

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents a theory of human-like reasoning in the general domain of designed physical systems, and in particular, electronic circuits. One aspect of the theory, causal analysis, describes how the behavior of individual components can be combined to explain the behavior of composite systems. Another aspect of the theory, teleological analysis, describes how the notion that the system has a purpose can be used to aid this causal analysis. The theory is implemented as a computer program, which, given a circuit topology, can construct by qualitative causal analysis a mechanism graph describing the functional topology of the system. This functional topology is then parsed by a grammar for common circuit functions. Ambiguities are introduced by the approximate, qualitative nature of the analysis: for example, there are often several possible mechanisms which might describe the circuit's function. These are disambiguated by teleological analysis. The requirement that each component be assigned an appropriate purpose in the functional topology imposes a severe constraint which eliminates all the ambiguities. Since both analyses are based on heuristics, the chosen mechanism is a rationalization of how the circuit functions, and does not guarantee that the circuit actually functions that way. This type of coarse understanding of circuits is useful for analysis, design and troubleshooting.