976 results for Computational geometry
Abstract:
We have presented two simple methods, "unfixed-position shield" and "pulling out", for making sharp STM Pt-Ir tips with low aspect ratio by electrochemical etching in KCN/NaOH aqueous solution, and ECSTM tips coated with paraffin. By limiting the elec
Abstract:
The tube diameter in the reptation model is the distance between a given chain segment and its nearest segment in adjacent chains. This dimension is thus related to the cross-sectional area of polymer chains and the nearest approach among chains, without effects of thermal fluctuation and steric repulsion. Previously calculated tube diameters are about 5 times larger than the actual chain cross-sectional areas. This is ascribed to the local freedom required for mutual rearrangement among neighboring chain segments. This tube diameter concept suggests a relationship to the corresponding entanglement spacing. Indeed, we report here that the critical molecular weight, M_c, for the onset of entanglements is found to be M_c = 28A/(⟨R²⟩₀/M), where A is the chain cross-sectional area and ⟨R²⟩₀ the mean-square end-to-end distance of a freely jointed chain of molecular weight M. The new, computed relationship between the critical number of backbone atoms for entanglement and the chain cross-sectional area of polymers, N_c = A^0.44, is concordant with the cross-sectional area of polymer chains being the parameter controlling the critical entanglement number of backbone atoms of flexible polymers.
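The entanglement relation reported in this abstract is a one-line computation. A minimal sketch, using hypothetical placeholder values for A and ⟨R²⟩₀/M (the abstract gives no numerical inputs):

```python
# Sketch of the abstract's entanglement relation M_c = 28*A / (<R^2>_0 / M).
# The input values below are illustrative placeholders, not data from the paper.

def critical_entanglement_mw(area_sq_angstrom, r2_over_m):
    """M_c from chain cross-sectional area A (Angstrom^2) and the
    characteristic ratio <R^2>_0 / M (Angstrom^2 * mol/g)."""
    return 28.0 * area_sq_angstrom / r2_over_m

# Hypothetical polymer: A = 20 Angstrom^2, <R^2>_0/M = 1.0 Angstrom^2 mol/g
mc = critical_entanglement_mw(20.0, 1.0)
print(mc)  # 560.0
```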
Abstract:
A general characterization of an electrochemical process coupled with a homogeneous catalytic reaction at an ultramicroelectrode under steady state is described. It was found that such a process exhibits a similar steady-state voltammetric wave at an ultramicroelectrode of arbitrary geometry. A method for determining the kinetic constant of the homogeneous catalytic reaction at an ultramicroelectrode of arbitrary geometry is proposed.
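The abstract does not reproduce its working equations, but the baseline quantity in any such steady-state analysis is the diffusion-limited current. As background, a sketch of the standard result for a disk ultramicroelectrode (not the paper's own catalytic kinetic expression):

```python
# Steady-state diffusion-limited current at a disk ultramicroelectrode,
# I_lim = 4*n*F*D*C*a (a standard textbook result; the abstract's kinetic
# analysis for the catalytic case is not reproduced here).

F = 96485.0  # Faraday constant, C/mol

def disk_ume_limiting_current(n, D, C, a):
    """n: electrons transferred; D: diffusion coefficient (m^2/s);
    C: bulk concentration (mol/m^3); a: disk radius (m)."""
    return 4.0 * n * F * D * C * a

# 1 mM species (1 mol/m^3), D = 1e-9 m^2/s, 5 micrometre radius disk
i_lim = disk_ume_limiting_current(1, 1e-9, 1.0, 5e-6)
print(f"{i_lim:.3e} A")  # 1.930e-09 A, i.e. about 2 nA
```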
Abstract:
Starting from the nonhydrostatic Boussinesq approximation equations, a general method is introduced to derive dispersion relationships. A comparative investigation is performed for inertia-gravity waves with horizontal wavelengths of 100, 10 and 1 km. These are examined using the second-order central difference scheme and the fourth-order compact difference scheme on the vertical grids currently in use, from the perspectives of frequency and the horizontal and vertical components of group velocity, and the findings are compared with analytical solutions. The results suggest that, for both the second-order central difference scheme and the fourth-order compact difference scheme, the Charney-Phillips (CP) and Lorenz (L) grids are suitable for studying waves at the above horizontal scales; the Lorenz time-staggered (LTS) and Charney-Phillips time-staggered (CPTS) grids are applicable only to horizontal scales of less than 10 km; and the N grid (unstaggered grid) is unsuitable for simulating waves at any horizontal scale. Furthermore, using the fourth-order compact difference scheme, despite its higher formal accuracy, does not necessarily decrease the errors in frequency and in the horizontal and vertical components of group velocity produced on these vertical grids for waves with horizontal wavelengths of 1, 10 and 100 km. Hence, in developing a numerical model, a higher-order finite difference scheme such as the fourth-order compact scheme should be avoided where possible, particularly on the L and CPTS grids, since it not only requires considerable programming effort but can also make the computed horizontal and vertical group velocities even less accurate.
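The kind of dispersion comparison this abstract performs can be illustrated with the standard modified-wavenumber analysis of the two difference schemes it names. A minimal sketch on a uniform 1-D grid (this is not the abstract's vertical-grid calculation, just the textbook effective-wavenumber formulas):

```python
import math

# Modified wavenumber (times dx) for d/dx on a uniform grid:
# second-order central difference vs. the classic fourth-order
# tridiagonal compact (Pade) scheme with alpha = 1/4, a = 3/2.

def k_central2(theta):
    # theta = k*dx (exact scheme would return theta itself)
    return math.sin(theta)

def k_compact4(theta):
    return 3.0 * math.sin(theta) / (2.0 + math.cos(theta))

theta = 0.5  # a moderately resolved wave, k*dx = 0.5
err2 = abs(k_central2(theta) - theta)
err4 = abs(k_compact4(theta) - theta)
print(err2, err4)  # the compact scheme is far closer to the exact value
```

Note that this shows only the schemes' intrinsic accuracy for well-resolved waves; the abstract's point is that on staggered vertical grids the higher-order scheme does not automatically translate into better group velocities.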
Abstract:
In this letter, a new wind-vector algorithm is presented that uses radar backscatter sigma(0) measurements at two adjacent subscenes of RADARSAT-1 synthetic aperture radar (SAR) images, with each subscene having a slightly different geometry. The resulting wind vectors are validated against in situ buoy measurements and compared with wind vectors determined from a hybrid wind-retrieval model that uses wind directions determined by spectral analysis of wind-induced image streaks and observed by collocated QuikSCAT measurements. The hybrid wind-retrieval model consists of CMOD-IFR2 [applicable to C-band vertical-vertical (VV) polarization] and a C-band copolarization ratio based on Kirchhoff scattering. The new algorithm displays improved skill in wind-vector estimation for RADARSAT-1 SAR data compared to conventional wind-retrieval methodology. In addition, unlike conventional methods, the present method is applicable to RADARSAT-1 images both with and without visible streaks. However, it requires ancillary data, such as buoy measurements, to resolve the ambiguity in the retrieved wind direction.
Abstract:
This dissertation presents a series of irregular-grid-based numerical techniques for modeling seismic wave propagation in heterogeneous media. The study involves the generation of the irregular numerical mesh corresponding to the irregular-grid scheme, the discretized version of the equations of motion on the unstructured mesh, and irregular-grid absorbing boundary conditions. The resulting numerical technique has been used to generate synthetic data sets on realistic complex geologic models for testing migration schemes. The discretization of the equations of motion and the modeling are based on the Grid Method. The key idea is to use the integral equilibrium principle in place of the per-grid-point operator of the Finite Difference scheme and the variational formulation of the Finite Element Method. The irregular grid for a complex geologic model is generated by the Paving Method, which allows the grid spacing to vary according to meshing constraints. The grids have high quality at domain boundaries and contain matching nodes at interfaces, which avoids interpolation of parameters and variables. The irregular-grid absorbing boundary condition is developed by extending the Perfectly Matched Layer (PML) method to rotated local coordinates. The split PML equations of the first-order system are derived using the integral equilibrium principle. The proposed scheme can build a PML boundary of arbitrary geometry in the computational domain, avoiding the special treatment at corners required in a standard PML method and saving considerable memory and computational cost. The numerical implementation demonstrates the desired qualities of the irregular-grid-based modeling technique.
In particular, (1) smaller memory requirements and computational times are achieved by varying the grid spacing according to the local velocity; (2) arbitrary surface and interface topographies are described accurately, removing the artificial reflections caused by the staircase approximation of curved or dipping interfaces; and (3) the computational domain is significantly reduced by flexibly building curved artificial boundaries using the irregular-grid absorbing boundary conditions. The proposed irregular-grid approach is applied to reverse-time migration as the extrapolation algorithm. It can discretize the smoothed velocity model with an irregular grid of variable scale, which helps reduce the computational cost. It can also handle data sets acquired on arbitrary topography, with no field correction needed.
Abstract:
We review the progress made in computational vision, as represented by Marr's approach, in the last fifteen years. First, we briefly outline computational theories developed for low, middle and high-level vision. We then discuss in more detail solutions proposed to three representative problems in vision, each dealing with a different level of visual processing. Finally, we discuss modifications to the currently established computational paradigm that appear to be dictated by the recent developments in vision.
Abstract:
The computer science technique of computational complexity analysis can provide powerful insight into the algorithm-neutral analysis of information processing tasks. Here we show that a simple, theory-neutral linguistic model of syntactic agreement and ambiguity demonstrates that natural language parsing may be computationally intractable. Significantly, we show that it may be syntactic features rather than rules that cause this difficulty. Informally, human languages and the computationally intractable Satisfiability (SAT) problem share two costly computational mechanisms: both enforce agreement among symbols across unbounded distances (subject-verb agreement) and both allow ambiguity (is a word a noun or a verb?).
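The analogy in the last sentence can be made concrete with a toy encoding (a hypothetical illustration, not the paper's formal reduction): each ambiguous word is a boolean choice, agreement is a constraint relating choices at a distance, and parsing amounts to checking satisfiability.

```python
from itertools import product

# Toy sketch: lexical ambiguity as boolean variables, agreement as
# clauses constraining variables at a distance - the shape of SAT.

def satisfiable(clauses, n_vars):
    """Brute-force SAT: try every assignment of n_vars booleans."""
    for assign in product([False, True], repeat=n_vars):
        if all(clause(assign) for clause in clauses):
            return True
    return False

# "The sheep ... run": var 0 = subject is plural (ambiguous 'sheep'),
# var 1 = verb slot is plural; they must agree however far apart.
agree = lambda a: a[0] == a[1]     # subject-verb agreement
force_pl = lambda a: a[1]          # suppose the verb form is plural

print(satisfiable([agree, force_pl], 2))                       # True
print(satisfiable([agree, force_pl, lambda a: not a[0]], 2))   # False
```

The brute-force search is exponential in the number of variables, which is precisely the cost the abstract is pointing at.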
Abstract:
This thesis introduces elements of a theory of design activity and a computational framework for developing design systems. The theory stresses the opportunistic nature of designing and the complementary roles of focus and distraction, the interdependence of evaluation and generation, the multiplicity of ways of seeing over the history of a design session versus the exclusivity of a given way of seeing over an arbitrarily short period, and the incommensurability of criteria used to evaluate a design. The thesis argues for a principle-based rather than rule-based approach to designing documents. The Discursive Generator is presented as a computational framework for implementing specific design systems, and a simple system for arranging blocks according to a set of formal principles is developed by way of illustration. Both shape grammars and constraint-based systems are used to contrast current trends in design automation with the discursive approach advocated in the thesis. The Discursive Generator is shown to have some important properties lacking in other types of systems, such as dynamism, robustness and the ability to deal with partial designs. When studied in terms of a search metaphor, the Discursive Generator is shown to exhibit behavior which is radically different from some traditional search techniques, and to avoid some of the well-known difficulties associated with them.
Abstract:
We propose an affine framework for perspective views, captured by a single extremely simple equation based on a viewer-centered invariant we call "relative affine structure". Via a number of corollaries of our main results we show that our framework unifies previous work --- including Euclidean, projective and affine --- in a natural and simple way, and introduces new, extremely simple, algorithms for the tasks of reconstruction from multiple views, recognition by alignment, and certain image coding applications.
Abstract:
Does knowledge of language consist of symbolic rules? How do children learn and use their linguistic knowledge? To elucidate these questions, we present a computational model that acquires phonological knowledge from a corpus of common English nouns and verbs. In our model the phonological knowledge is encapsulated as boolean constraints operating on classical linguistic representations of speech sounds in terms of distinctive features. The learning algorithm compiles a corpus of words into increasingly sophisticated constraints. The algorithm is incremental, greedy, and fast. It yields one-shot learning of phonological constraints from a few examples. Our system exhibits behavior similar to that of young children learning phonological knowledge. As a bonus, the constraints can be interpreted as classical linguistic rules. The computational model can be implemented by a surprisingly simple hardware mechanism. Our mechanism also sheds light on a fundamental AI question: How are signals related to symbols?
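What "a boolean constraint over distinctive features" looks like can be sketched with a familiar English example (a hypothetical illustration in the spirit of the abstract, not one of the model's learned constraints): the plural suffix agrees in voicing with the preceding segment.

```python
# Toy boolean constraint over distinctive features: the final sibilant
# of a word agrees in voicing with the segment before it, as in English
# plurals ('cats' with /s/, 'dogs' with /z/). Hypothetical example.

FEATURES = {  # tiny illustrative feature table
    't': {'voice': False}, 'g': {'voice': True},
    's': {'voice': False}, 'z': {'voice': True},
}

def voicing_agreement(segments):
    """Constraint: last segment matches the previous one in voicing."""
    prev, last = FEATURES[segments[-2]], FEATURES[segments[-1]]
    return prev['voice'] == last['voice']

print(voicing_agreement('ts'))  # True  ('cats' pattern)
print(voicing_agreement('gz'))  # True  ('dogs' pattern)
print(voicing_agreement('gs'))  # False (ill-formed *'dog' + /s/)
```

In the model described by the abstract, many such boolean predicates are compiled from the corpus; each is readable back as a classical rule, which is the "bonus" the authors mention.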
Abstract:
Three-dimensional models which contain both geometry and texture have numerous applications such as urban planning, physical simulation, and virtual environments. A major focus of computer vision (and recently graphics) research is the automatic recovery of three-dimensional models from two-dimensional images. After many years of research this goal is yet to be achieved. Most practical modeling systems require substantial human input and, unlike automatic systems, are not scalable. This thesis presents a novel method for automatically recovering dense surface patches using large sets (thousands) of calibrated images taken from arbitrary positions within the scene. Physical instruments, such as the Global Positioning System (GPS), inertial sensors, and inclinometers, are used to estimate the position and orientation of each image. Essentially, the problem is to find corresponding points in each of the images. Once a correspondence has been established, calculating its three-dimensional position is simply a matter of geometry. Long-baseline images improve the accuracy. Short-baseline images and the large number of images greatly simplify the correspondence problem. The initial stage of the algorithm is completely local and scales linearly with the number of images. Subsequent stages are global in nature, exploit geometric constraints, and scale quadratically with the complexity of the underlying scene. We describe techniques for: 1) detecting and localizing surface patches; 2) refining camera calibration estimates and rejecting false positive surfels; and 3) grouping surface patches into surfaces and growing the surface along a two-dimensional manifold. We also discuss a method for producing high quality, textured three-dimensional models from these surfaces.
Some of the most important characteristics of this approach are that it: 1) uses and refines noisy calibration estimates; 2) compensates for large variations in illumination; 3) tolerates significant soft occlusion (e.g. tree branches); and 4) associates, at a fundamental level, an estimated normal (i.e. no frontal-planar assumption) and texture with each surface patch.
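The "simply a matter of geometry" step above can be sketched directly: given two camera centers and the viewing rays through a matched image point, triangulate the 3D point as the midpoint of the rays' closest approach. This is a minimal toy version, not the thesis's pipeline (which also refines noisy calibration and groups surfels):

```python
# Midpoint triangulation of a 3D point from two viewing rays.
# Rays: p1 + t*d1 and p2 + s*d2; we solve the 2x2 normal equations
# for the closest pair of points and return their midpoint.

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def add(u, v): return tuple(a + b for a, b in zip(u, v))
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def scale(u, k): return tuple(a * k for a in u)

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between the two rays."""
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b              # near zero if rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))
    q2 = add(p2, scale(d2, s))
    return scale(add(q1, q2), 0.5)

# Two cameras at (0,0,0) and (2,0,0), both looking at the point (1,1,4):
print(triangulate((0, 0, 0), (1, 1, 4), (2, 0, 0), (-1, 1, 4)))
# -> (1.0, 1.0, 4.0)
```

With noisy calibration the two rays no longer intersect, which is why the midpoint (or a least-squares variant) is used and why the thesis couples triangulation with calibration refinement.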
Abstract:
This report describes a computational system with which phonologists may describe a natural language in terms of autosegmental phonology, currently the most advanced theory pertaining to the sound systems of human languages. This system allows linguists to easily test autosegmental hypotheses against a large corpus of data. The system was designed primarily with tonal systems in mind, but also provides support for tree or feature-matrix representations of phonemes (as in The Sound Pattern of English), as well as syllable structures and other aspects of phonological theory. Underspecification is allowed, and trees may be specified before, during, and after rule application. The association convention is automatically applied, and other principles such as the conjunctivity condition are supported. The method of representation was designed so that rules are written as closely as possible to the existing conventions of autosegmental theory while adhering to a textual constraint for maximum portability.
Abstract:
This thesis describes an investigation of retinal directional selectivity. We show intracellular (whole-cell patch) recordings in turtle retina which indicate that this computation occurs prior to the ganglion cell, and we describe a pre-ganglionic circuit model to account for this and other findings which places the non-linear spatio-temporal filter at individual, oriented amacrine cell dendrites. The key non-linearity is provided by interactions between excitatory and inhibitory synaptic inputs onto the dendrites, and their distal tips provide directionally selective excitatory outputs onto ganglion cells. Detailed simulations of putative cells support this model, given reasonable parameter constraints. The performance of the model also suggests that this computational substructure may be relevant within the dendritic trees of CNS neurons in general.
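The key nonlinearity described in this abstract, inhibition vetoing excitation on a dendrite, can be caricatured with a Barlow-Levick-style toy: delayed inhibition from one side cancels excitation only when the stimulus moves in the null direction. This is a hypothetical sketch, not the thesis's detailed compartmental simulations:

```python
# Toy direction-selective unit: a bar sweeps across an inhibitory site
# and an excitatory site one time step apart. Inhibition arrives with a
# one-step delay and shunts (vetoes) coincident excitation.
# Parameters and structure are illustrative only.

DELAY = 1  # inhibitory lag matches the one-step travel time between sites

def response(direction):
    """Summed output of the unit over the sweep."""
    if direction == 'preferred':   # excitatory site is crossed first
        t_exc, t_inh = 0, 1
    else:                          # null: inhibitory site crossed first
        t_exc, t_inh = 1, 0
    total = 0.0
    for t in range(6):
        exc = 1.0 if t == t_exc else 0.0
        inh = 1.0 if t == t_inh + DELAY else 0.0
        total += exc * (1.0 - inh)   # multiplicative (shunting) veto
    return total

print(response('preferred'), response('null'))  # 1.0 0.0
```

The multiplicative term is the essential point: a linear sum of excitation and inhibition could not produce the direction-dependent null response; the spatial offset plus delay plus nonlinear interaction does, which mirrors the dendritic mechanism the abstract proposes.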
Abstract:
The primary goal of this report is to demonstrate how considerations from computational complexity theory can inform grammatical theorizing. To this end, generalized phrase structure grammar (GPSG) linguistic theory is revised so that its power more closely matches the limited ability of an ideal speaker-hearer: GPSG Recognition is EXP-POLY time hard, while Revised GPSG Recognition is NP-complete. A second goal is to provide a theoretical framework within which to better understand the wide range of existing GPSG models, embodied in formal definitions as well as in implemented computer programs. A grammar for English and an informal explanation of the GPSG/RGPSG syntactic features are included in appendices.