974 results for Locally Nilpotent Derivations


Relevance:

10.00%

Publisher:

Abstract:

With improvements in mantle convection theory, advances in computational methods, and a growing body of measurement data, we can now simulate more clearly how mantle convection affects observed geophysical phenomena such as the global heat flow and the global lithospheric stress field at the Earth's surface; mantle convection is the primary mechanism transporting heat from the Earth's deep interior to its surface and the underlying driving mechanism of the Earth's dynamics. Chapter 1 reviews the historical background and present state of research on mantle convection theory. Chapter 2 introduces the basic concepts of thermal convection and the basic theory of mantle flow. The generation and distribution of the global lithospheric stress field induced by mantle flow are the subject of Chapter 3. Mantle convection produces normal and tangential stresses at the bottom of the lithosphere; this sublithospheric stress field deforms the lithosphere as a surface force and gives rise to the stress field within it. The simulation shows good agreement between predictions and observations in most regions: most subduction zones and continental collision zones are under compression, while ocean ridges such as the East Pacific Rise, the Mid-Atlantic Ridge, and the East African Rift Valley are under tension, and most hotspots preferentially occur in regions where the calculated stress is tensile. The calculated directions of the most compressive principal horizontal stress largely agree with observations, except in some regions such as the NW-Pacific subduction zone and the Qinghai-Tibet Plateau, where the directions differ. This indicates that mantle flow plays an important role in causing or affecting the large-scale stress field within the lithosphere. A global heat flow simulation based on a kinematic model of mantle convection is given in Chapter 4. Mantle convection velocities are first calculated from internal loading theory, and the resulting velocity field is used as input to solve the thermal problem. The calculated depth derivatives of the near-surface temperature correlate closely with the observed surface heat flow pattern, and the higher heat flow values around mid-ocean ridge systems are reproduced well. The predicted average temperature as a function of depth reveals two thermal boundary layers, one near the surface and one near the core-mantle boundary, with the rest of the mantle nearly isothermal. Although advection dominates heat transfer in most of the mantle, conduction remains locally important in the boundary layers and plays an important role in shaping the surface heat flow pattern. The existence of surface plates is responsible for the long-wavelength surface heat flow pattern. Chapter 5 examines how mantle convection affects present-day crustal movement in mainland China. Using a dynamic method, we present a quantitative model of present-day crustal movement in China that considers not only the India-Eurasia collision and the gravitational potential energy difference of the Tibet Plateau, but also the shear traction on the bottom of the lithosphere induced by global mantle convection.
Comparison of our results with the velocity field obtained from GPS observations shows that the model satisfactorily reproduces the general picture of crustal deformation in China. The numerical results reveal that the stress field at the base of the lithosphere induced by mantle flow is probably a significant factor in the movement and deformation of the lithosphere in continental China, with its effect concentrated in eastern China. A numerical study of small-scale convection with variable viscosity in the upper mantle is presented in Chapter 6. Based on a two-dimensional model, small-scale convection in the mantle-lithosphere system with variable viscosity is investigated using the finite element method, with viscosity depending exponentially on temperature. The results show that if the viscosity is strongly temperature-dependent, the upper part of the system does not take part in the convection, and a stagnant lid, identified as the lithosphere, forms at the top of the system because of its low temperature and high viscosity. The calculated surface heat flow, topography, and gravity anomaly correlate well with the convection pattern: regions with high heat flow and uplift correspond to upwelling flow, and vice versa. Chapter 7 briefly outlines a future research subject: the inversion of lateral density heterogeneity in the mantle by minimizing viscous dissipation.
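
The abstract does not give the exact viscosity law used in Chapter 6; one common way to realize an exponential temperature dependence is the Frank-Kamenetskii form, sketched below in Python. The parameter values (eta0, b, and the non-dimensional temperatures) are purely illustrative assumptions, not values from the thesis.

import numpy as np

def viscosity(T, eta0=1e21, b=6.9, T0=0.0):
    """Exponential (Frank-Kamenetskii) temperature dependence of viscosity.

    T    : non-dimensional temperature (0 at the cold surface, 1 at the bottom)
    eta0 : reference viscosity at T = T0 in Pa s (illustrative value)
    b    : sensitivity; exp(b) is the total viscosity contrast across the layer
    """
    return eta0 * np.exp(-b * (T - T0))

# With a contrast of exp(6.9) ~ 1000, the cold uppermost part of the layer is
# so viscous that it behaves as a stagnant lid, as described for Chapter 6.
T = np.linspace(0.0, 1.0, 5)
print(viscosity(T) / viscosity(1.0))   # relative viscosity, largest at the cold top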

Relevance:

10.00%

Publisher:

Abstract:

The sedimentary-volcanic tuff (locally called "green-bean rock") formed during the early Middle Triassic volcanic event in Guizhou Province is thin, stable, widespread, short in formation time, and predominantly green in color, making it an excellent marker for stratigraphic division. Its petrographic and geochemical features are distinctive: it is composed mainly of glassy fragments, with subordinate crystal fragments and volcanic ash balls. Analysis of the major and trace elements and rare-earth elements (REE), together with the related diagrams, indicates that the green-bean rock is acidic volcanic material of the calc-alkaline series formed in the Indosinian orogenic belt on the Sino-Vietnam border, which was transported through the atmosphere to tectonically stable areas and deposited there as sedimentary-volcanic rock. From the age of the green-bean rock, it is deduced that the age of the Middle-Lower Triassic boundary, which is overlain by the sedimentary-volcanic tuff, is about 247 Ma.

Relevance:

10.00%

Publisher:

Abstract:

The past year has seen remarkable advances both in methanol-to-olefin process development and in understanding of the catalysts and reactions involved. The methanol-to-olefin process is now on the way to being commercialized locally, with economic advantages over other natural gas utilization technologies and conventional naphtha cracking processes. Using a specially designed procedure, a catalyst for the selective synthesis of ethylene from methanol has been reliably reproduced. The relationships between catalyst properties and reaction performance are clearer than ever before.

Relevance:

10.00%

Publisher:

Abstract:

The amount of computation required to solve many early vision problems is prodigious, and so it has long been thought that systems that operate in a reasonable amount of time will only become feasible when parallel systems become available. Such systems now exist in digital form, but most are large and expensive. These machines constitute an invaluable test-bed for the development of new algorithms, but they probably cannot be scaled down rapidly in both physical size and cost, despite continued advances in semiconductor technology and machine architecture. Simple analog networks can perform interesting computations, as has been known for a long time. We have reached the point where it is feasible to experiment with implementing these ideas in VLSI form, particularly if we focus on networks composed of locally interconnected passive elements, linear amplifiers, and simple nonlinear components. While there have been excursions into the development of ideas in this area since the very beginnings of work on machine vision, much work remains to be done. Progress will depend on careful attention to matching the capabilities of simple networks to the needs of early vision. This is not intended to be a review of the field, but merely a collection of ideas that seem interesting.
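
As a concrete illustration of the kind of locally interconnected passive network mentioned above, the following minimal sketch simulates a one-dimensional resistive chain that smooths noisy input data by iterative relaxation; the conductance values, boundary treatment, and iteration count are illustrative assumptions, not taken from the memo.

import numpy as np

def resistive_chain_smooth(d, g_data=1.0, g_lat=4.0, n_iters=2000):
    """Relax a 1-D resistive chain toward equilibrium.

    Each node i is tied to its data value d[i] through a conductance g_data
    and to its two neighbours through conductance g_lat. At equilibrium the
    node voltage v[i] is the conductance-weighted average of its inputs,
    which smooths the data (a simple digital stand-in for an analog network).
    """
    v = d.copy()
    for _ in range(n_iters):
        neighbour_sum = np.empty_like(v)
        neighbour_sum[1:-1] = v[:-2] + v[2:]
        neighbour_sum[0] = 2 * v[1]          # reflecting boundaries
        neighbour_sum[-1] = 2 * v[-2]
        v = (g_data * d + g_lat * neighbour_sum) / (g_data + 2 * g_lat)
    return v

noisy = np.sin(np.linspace(0, 3 * np.pi, 100)) + 0.3 * np.random.randn(100)
smooth = resistive_chain_smooth(noisy)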

Relevance:

10.00%

Publisher:

Abstract:

A fundamental understanding of the information carrying capacity of optical channels requires the signal and physical channel to be modeled quantum mechanically. This thesis considers the problems of distributing multi-party quantum entanglement to distant users in a quantum communication system and determining the ability of quantum optical channels to reliably transmit information. A recent proposal for a quantum communication architecture that realizes long-distance, high-fidelity qubit teleportation is reviewed. Previous work on this communication architecture is extended in two primary ways. First, models are developed for assessing the effects of amplitude, phase, and frequency errors in the entanglement source of polarization-entangled photons, as well as fiber loss and imperfect polarization restoration, on the throughput and fidelity of the system. Second, an error model is derived for an extension of this communication architecture that allows for the production and storage of three-party entangled Greenberger-Horne-Zeilinger states. A performance analysis of the quantum communication architecture in qubit teleportation and quantum secret sharing communication protocols is presented. Recent work on determining the channel capacity of optical channels is extended in several ways. Classical capacity is derived for a class of Gaussian Bosonic channels representing the quantum version of classical colored Gaussian-noise channels. The proof is strongly motivated by the standard technique of whitening Gaussian noise used in classical information theory. Minimum output entropy problems related to these channel capacity derivations are also studied. These single-user Bosonic capacity results are extended to a multi-user scenario by deriving capacity regions for single-mode and wideband coherent-state multiple access channels. An even larger capacity region is obtained when the transmitters use nonclassical Gaussian states, and an outer bound on the ultimate capacity region is presented.
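
For context, the single-mode pure-loss bosonic channel is the simplest member of the Gaussian family discussed above, and its classical capacity under a mean photon-number constraint has a well-known closed form; the thesis's colored-noise and multiple-access derivations extend beyond this building block. In LaTeX notation:

C = g(\eta \bar{N}), \qquad g(x) = (x+1)\log_2(x+1) - x\log_2 x \quad \text{bits per channel use},

where \eta is the channel transmissivity and \bar{N} is the mean transmitted photon number per use.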

Relevance:

10.00%

Publisher:

Abstract:

The Saliency Network proposed by Shashua and Ullman is a well-known approach to the problem of extracting salient curves from images while performing gap completion. This paper analyzes the Saliency Network. The Saliency Network is attractive for several reasons. First, the network generally prefers long and smooth curves over short or wiggly ones. While computing saliencies, the network also fills in gaps with smooth completions and tolerates noise. Finally, the network is locally connected, and its size is proportional to the size of the image. Nevertheless, our analysis reveals certain weaknesses with the method. In particular, we show cases in which the most salient element does not lie on the perceptually most salient curve. Furthermore, in some cases the saliency measure changes its preferences when curves are scaled uniformly. Also, we show that for certain fragmented curves the measure prefers large gaps over a few small gaps of the same total size. In addition, we analyze the time complexity required by the method. We show that the number of steps required for convergence in serial implementations is quadratic in the size of the network, and in parallel implementations is linear in the size of the network. We discuss problems due to coarse sampling of the range of possible orientations. We show that with proper sampling the complexity of the network becomes cubic in the size of the network. Finally, we consider the possibility of using the Saliency Network for grouping. We show that the Saliency Network recovers the most salient curve efficiently, but it has problems with identifying any salient curve other than the most salient one.
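
A minimal sketch of the kind of local dynamic-programming update the Saliency Network performs may help fix ideas. The update rule below follows the general form analyzed in the paper (local saliency plus an attenuated maximum over neighbours), but the toy graph, coupling values, and attenuation constants are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def saliency_iterate(sigma, rho, f, neighbors, n_iters):
    """Iterate a saliency-network-style update (sketch):

        E_i <- sigma_i + rho_i * max_{j in N(i)} f[i][j] * E_j

    sigma[i]     : local saliency (e.g. 1 for an actual edge element, 0 for a gap)
    rho[i]       : attenuation (< 1 for gap elements, so long gaps are penalized)
    f[i][j]      : coupling in (0, 1], decreasing with the curvature of the i-j link
    neighbors[i] : indices j that element i can extend to
    """
    E = np.array(sigma, dtype=float)
    for _ in range(n_iters):
        E_new = np.array(sigma, dtype=float)
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                E_new[i] += rho[i] * max(f[i][j] * E[j] for j in nbrs)
        E = E_new
    return E

# Toy chain of 5 elements with one gap (element 2); all values are illustrative.
sigma = [1, 1, 0, 1, 1]
rho = [1.0, 1.0, 0.7, 1.0, 1.0]
neighbors = [[1], [2], [3], [4], []]
f = {i: {j: 0.9 for j in nbrs} for i, nbrs in enumerate(neighbors)}
print(saliency_iterate(sigma, rho, f, neighbors, n_iters=10))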

Relevance:

10.00%

Publisher:

Abstract:

I describe an exploration criterion that attempts to minimize the error of a learner by minimizing its estimated squared bias. I describe experiments with locally-weighted regression on two simple kinematics problems, and observe that this "bias-only" approach outperforms the more common "variance-only" exploration approach, even in the presence of noise.
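
Since the experiments use locally weighted regression, a minimal sketch of that learner may be useful; the Gaussian kernel, bandwidth, and toy data below are illustrative assumptions rather than the paper's setup.

import numpy as np

def locally_weighted_regression(X, y, x_query, bandwidth=0.5):
    """Predict y at x_query with locally weighted linear regression.

    Training points near the query receive large Gaussian kernel weights, so
    the prediction comes from a local affine fit around x_query.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.hstack([X, np.ones((X.shape[0], 1))])          # affine design matrix
    q = np.append(np.asarray(x_query, dtype=float), 1.0)
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))              # Gaussian kernel weights
    Wh = np.sqrt(w)[:, None]                              # weighted least squares
    beta, *_ = np.linalg.lstsq(Wh * A, Wh.ravel() * y, rcond=None)
    return q @ beta

X = np.linspace(0, 2 * np.pi, 50).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.randn(50)
print(locally_weighted_regression(X, y, np.array([1.0])))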

Relevance:

10.00%

Publisher:

Abstract:

Amorphous computing is the study of programming ultra-scale computing environments of smart sensors and actuators [white-paper]. The individual elements are identical, asynchronous, randomly placed, embedded and communicate locally via wireless broadcast. Aggregating the processors into groups is a useful paradigm for programming an amorphous computer because groups can be used for specialization, increased robustness, and efficient resource allocation. This paper presents a new algorithm, called the clubs algorithm, for efficiently aggregating processors into groups in an amorphous computer, in time proportional to the local density of processors. The clubs algorithm is well-suited to the unique characteristics of an amorphous computer. In addition, the algorithm derives two properties from the physical embedding of the amorphous computer: an upper bound on the number of groups formed and a constant upper bound on the density of groups. The clubs algorithm can also be extended to find the maximal independent set (MIS) and $\Delta + 1$ vertex coloring in an amorphous computer in $O(\log N)$ rounds, where $N$ is the total number of elements and $\Delta$ is the maximum degree.
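
A toy synchronous simulation may make the group-formation idea concrete: every processor draws a random delay, and if its delay expires before it hears a leader within radio range, it declares itself a leader and recruits its undecided neighbours. The random-delay scheme, radio radius, and parameter names below are assumptions for illustration, not the paper's exact asynchronous model.

import random

def clubs(positions, radius, max_delay=100):
    """Toy synchronous simulation of club formation (sketch only)."""
    n = len(positions)
    delay = [random.randrange(max_delay) for _ in range(n)]
    club = [None] * n          # club[i] = index of i's leader
    leaders = []

    def neighbors(i):
        xi, yi = positions[i]
        return [j for j, (xj, yj) in enumerate(positions)
                if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2]

    for t in range(max_delay):
        for i in range(n):
            if club[i] is None and delay[i] == t:
                club[i] = i                    # i declares itself a leader
                leaders.append(i)
                for j in neighbors(i):         # local broadcast recruits neighbours
                    if club[j] is None:
                        club[j] = i
    return leaders, club

pts = [(random.random(), random.random()) for _ in range(50)]
leaders, club = clubs(pts, radius=0.2)
print(len(leaders), "clubs formed")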

Relevance:

10.00%

Publisher:

Abstract:

An increasing number of parameter estimation tasks involve the use of at least two information sources, one complete but limited, the other abundant but incomplete. Standard algorithms such as EM (or em) used in this context are unfortunately not stable in the sense that they can lead to a dramatic loss of accuracy with the inclusion of incomplete observations. We provide a more controlled solution to this problem through differential equations that govern the evolution of locally optimal solutions (fixed points) as a function of the source weighting. This approach permits us to explicitly identify any critical (bifurcation) points leading to choices unsupported by the available complete data. The approach readily applies to any graphical model in O(n^3) time where n is the number of parameters. We use the naive Bayes model to illustrate these ideas and demonstrate the effectiveness of our approach in the context of text classification problems.
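
One way to make the continuation idea concrete (with assumed notation, not the paper's) is to weight the incomplete-data log-likelihood by a factor \lambda and track the fixed points as \lambda grows from 0 to 1. In LaTeX notation:

J(\theta, \lambda) = L_{\mathrm{complete}}(\theta) + \lambda\, L_{\mathrm{incomplete}}(\theta),
\qquad
\nabla_\theta J\bigl(\theta^*(\lambda), \lambda\bigr) = 0,

so, by implicit differentiation,

\frac{d\theta^*}{d\lambda}
  = -\bigl[\nabla_\theta^2 J(\theta^*, \lambda)\bigr]^{-1} \nabla_\theta L_{\mathrm{incomplete}}(\theta^*),

and the critical (bifurcation) points are the values of \lambda at which the Hessian \nabla_\theta^2 J becomes singular.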

Relevance:

10.00%

Publisher:

Abstract:

We study the frequent problem of approximating a target matrix with a matrix of lower rank. We provide a simple and efficient (EM) algorithm for solving weighted low rank approximation problems, which, unlike simple matrix factorization problems, do not admit a closed form solution in general. We analyze, in addition, the nature of locally optimal solutions that arise in this context, demonstrate the utility of accommodating the weights in reconstructing the underlying low rank representation, and extend the formulation to non-Gaussian noise models such as classification (collaborative filtering).
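
A minimal sketch of an EM-style iteration for weighted low-rank approximation with per-entry weights in [0, 1], following the general idea in the abstract; initialization, convergence checks, and the non-Gaussian extensions are omitted, and the random data are illustrative.

import numpy as np

def weighted_low_rank(X, W, k, n_iters=200):
    """EM-style weighted low-rank approximation (sketch).

    E-step: fill each entry with a weighted mix of the observation and the
            current reconstruction, X_filled = W*X + (1-W)*X_hat.
    M-step: X_hat = best rank-k approximation (truncated SVD) of X_filled.

    W has entries in [0, 1]; W = 1 everywhere reduces to plain truncated SVD.
    """
    X_hat = np.zeros_like(X)
    for _ in range(n_iters):
        X_filled = W * X + (1.0 - W) * X_hat
        U, s, Vt = np.linalg.svd(X_filled, full_matrices=False)
        X_hat = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return X_hat

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 10))
W = rng.uniform(0, 1, size=X.shape)     # per-entry confidence weights
approx = weighted_low_rank(X, W, k=3)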

Relevance:

10.00%

Publisher:

Abstract:

We report on a study of how people look for information within email, files, and the Web. When locating a document or searching for a specific answer, people relied on their contextual knowledge of their information target to help them find it, often associating the target with a specific document. They appeared to prefer to use this contextual information as a guide in navigating locally in small steps to the desired document rather than directly jumping to their target. We found this behavior was especially true for people with unstructured information organization. We discuss the implications of our findings for the design of personal information management tools.

Relevance:

10.00%

Publisher:

Abstract:

Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends a recently developed method for locality-sensitive hashing, which finds approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions; we show how to find the set of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call Parameter-Sensitive Hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images.
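
The retrieval machinery underneath the method is locality-sensitive hashing. The sketch below shows a generic random-hyperplane LSH index; Parameter-Sensitive Hashing replaces these random bits with hash functions selected for relevance to the estimation task, and that selection step, along with the class name and parameters used here, is an illustrative assumption rather than the paper's procedure.

import numpy as np
from collections import defaultdict

class LSHIndex:
    """Generic locality-sensitive hashing with random hyperplane bits."""

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = defaultdict(list)

    def _key(self, x):
        # Each bit records which side of a random hyperplane x falls on.
        return tuple((self.planes @ x > 0).astype(int))

    def add(self, idx, x):
        self.buckets[self._key(x)].append(idx)

    def query(self, x):
        # Candidates share the full hash key; a real system would use several
        # independent tables to boost recall.
        return self.buckets.get(self._key(x), [])

rng = np.random.default_rng(1)
data = rng.standard_normal((1000, 32))
index = LSHIndex(dim=32)
for i, v in enumerate(data):
    index.add(i, v)
print(index.query(data[0]))   # bucket containing example 0 and its near neighbours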

Relevance:

10.00%

Publisher:

Abstract:

Reconstructing a surface from sparse sensory data is a well-known problem in computer vision. Early vision modules typically supply sparse depth, orientation, and discontinuity information. The surface reconstruction module incorporates these sparse and possibly conflicting measurements of a surface into a consistent, dense depth map. The coupled depth/slope model developed here provides a novel computational solution to the surface reconstruction problem. This method explicitly computes dense slope representations as well as dense depth representations. This marked change from previous surface reconstruction algorithms allows a natural integration of orientation constraints into the surface description, a feature not easily incorporated into earlier algorithms. In addition, the coupled depth/slope model generalizes to allow for varying amounts of smoothness at different locations on the surface. This computational model helps conceptualize the problem and leads to two possible implementations: analog and digital. The model can be implemented as an electrical or biological analog network, since the only computations required at each locally connected node are averages, additions, and subtractions. A parallel digital algorithm can be derived by using finite difference approximations. The resulting system of coupled equations can be solved iteratively on a mesh-of-processors computer, such as the Connection Machine. Furthermore, concurrent multi-grid methods are designed to speed the convergence of this digital algorithm.
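
A toy one-dimensional version of a coupled depth/slope reconstruction can illustrate the flavor of the local computations; the 2-D formulation, discontinuity handling, and multi-grid acceleration in the report are not reproduced, and all parameter values here are illustrative assumptions.

import numpy as np

def coupled_depth_slope_1d(d, w_depth, p_data, w_slope,
                           alpha=1.0, beta=1.0, step=0.05, n_iters=5000):
    """Gradient descent on a discrete coupled depth/slope energy (sketch):

      sum_i w_depth[i]*(z[i]-d[i])^2            (sparse depth data)
    + sum_i w_slope[i]*(p[i]-p_data[i])^2       (sparse orientation data)
    + alpha * sum_i (p[i] - (z[i+1]-z[i]))^2    (coupling: slope ~ depth difference)
    + beta  * sum_i (p[i+1]-p[i])^2             (slope smoothness)

    using only local differences and sums at each node.
    """
    n = len(d)
    z = np.zeros(n)
    p = np.zeros(n - 1)
    for _ in range(n_iters):
        c = p - (z[1:] - z[:-1])                # coupling residual
        dp = p[1:] - p[:-1]                     # slope differences
        grad_z = 2 * w_depth * (z - d)
        grad_z[:-1] += 2 * alpha * c            # d/dz[i]   of coupling term
        grad_z[1:] -= 2 * alpha * c             # d/dz[i+1] of coupling term
        grad_p = 2 * w_slope * (p - p_data) + 2 * alpha * c
        grad_p[:-1] -= 2 * beta * dp
        grad_p[1:] += 2 * beta * dp
        z -= step * grad_z
        p -= step * grad_p
    return z, p

# Example: depth known only at the two ends, dense slope measurements of a ramp.
n = 50
d = np.zeros(n); w_depth = np.zeros(n)
d[-1] = 1.0; w_depth[0] = w_depth[-1] = 1.0
p_data = np.full(n - 1, 1.0 / (n - 1)); w_slope = np.ones(n - 1)
z, p = coupled_depth_slope_1d(d, w_depth, p_data, w_slope)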

Relevance:

10.00%

Publisher:

Abstract:

This report describes research on flow graphs: labeled, directed, acyclic graphs which abstract representations used in a variety of Artificial Intelligence applications. Flow graphs may be derived from flow grammars much as strings may be derived from string grammars; this derivation process forms a useful model for the stepwise refinement processes used in programming and other engineering domains. The central result of this report is a parsing algorithm for flow graphs. Given a flow grammar and a flow graph, the algorithm determines whether the grammar generates the graph and, if so, finds all possible derivations for it. The author has implemented the algorithm in LISP. The intent of this report is to make flow-graph parsing available as an analytic tool for researchers in Artificial Intelligence. The report explores the intuitions behind the parsing algorithm, contains numerous extensive examples of its behavior, and provides some guidance for those who wish to customize the algorithm to their own uses.

Relevance:

10.00%

Publisher:

Abstract:

This report explores the relation between image intensity and object shape. It is shown that image intensity is related to surface orientation and that a variation in image intensity is related to surface curvature. Computational methods are developed which use the measured intensity variation across the surfaces of smooth objects to determine surface orientation. In general, surface orientation is not determined locally by the intensity value recorded at each image point. Tools are needed to explore the problem of determining surface orientation from image intensity. The notion of gradient space, popularized by Huffman and Mackworth, is used to represent surface orientation. The notion of a reflectance map, originated by Horn, is used to represent the relation between surface orientation and image intensity. The image Hessian is defined and used to represent surface curvature. Properties of surface curvature are expressed as constraints on the possible surface orientations corresponding to a given image point. Methods are presented which embed assumptions about surface curvature in algorithms for determining surface orientation from the intensities recorded in a single view. If additional images of the same object are obtained by varying the direction of incident illumination, then surface orientation is determined locally by the intensity values recorded at each image point. This fact is exploited in a new technique called photometric stereo. The visual inspection of surface defects in metal castings is considered, and two casting applications are discussed. The first is the precision investment casting of turbine blades and vanes for aircraft jet engines. In this application, grain size is an important process variable. The existing industry standard for estimating the average grain size of metals is implemented and demonstrated on a sample turbine vane. Grain size can be computed from the measurements obtained in an image, once the foreshortening effects of surface curvature are accounted for. The second is the green sand mold casting of shuttle eyes for textile looms. Here, physical constraints inherent in the casting process translate into constraints on object shape; to verify these constraints, it is necessary to interpret features of intensity as features of object shape. Both applications demonstrate that successful visual inspection requires the ability to interpret observed changes in intensity in the context of surface topography. The theoretical tools developed in this report provide a framework for this interpretation.
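
A textbook sketch of the photometric stereo step described above, assuming a Lambertian surface and known light directions; the array shapes, variable names, and synthetic example are illustrative assumptions, not the report's procedure.

import numpy as np

def photometric_stereo(I, L):
    """Recover surface normals and albedo from several images taken under
    different known illumination directions.

    I : (k, h, w) array of image intensities, one image per light source
    L : (k, 3) array of unit illumination direction vectors
    """
    k, h, w = I.shape
    # Lambertian model: I = L @ (albedo * n), solved per pixel by least squares.
    G, *_ = np.linalg.lstsq(L, I.reshape(k, -1), rcond=None)   # (3, h*w)
    G = G.reshape(3, h, w)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 1e-8, G / np.maximum(albedo, 1e-8), 0.0)
    return normals, albedo

# Synthetic check: a flat, fronto-parallel patch rendered under three lights.
L = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.866], [0.0, 0.5, 0.866]])
n_true = np.zeros((3, 8, 8)); n_true[2] = 1.0
I = np.einsum('kc,chw->khw', L, 0.7 * n_true)          # albedo 0.7, Lambertian
normals, albedo = photometric_stereo(I, L)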