429 results for Cartesian
Abstract:
The brace notation, introduced by Allen and Csaszar (1993, J. chem. Phys., 98, 2983), provides a simple and compact way to deal with derivatives of arbitrary non-tensorial quantities. One of its main advantages is that it builds the permutational symmetry of the derivatives directly into the formalism. The brace notation is applied to formulate the general nth-order Cartesian derivatives of internal coordinates, and to provide closed forms for general, nth-order transformation equations of anharmonic force fields, expressed as Taylor series, from internal to Cartesian or normal coordinate spaces.
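For orientation, the lowest-order instances of the internal-to-Cartesian transformation that the brace notation generalizes can be written with the familiar Wilson B-matrix. The display below is a hedged restatement of these textbook chain rules, not of the paper's brace-notation formulas; the symbols B, f and F are introduced here only for illustration.

```latex
% Low-order chain rules that the brace notation generalizes to arbitrary order n.
% B_{ri} = \partial q_r / \partial x_i is the usual Wilson B-matrix; f_r and F_{rs}
% are first and second derivatives of V with respect to the internal coordinates.
\[
  \frac{\partial V}{\partial x_i}
    = \sum_{r} \frac{\partial q_r}{\partial x_i}\, f_r
    = \sum_{r} B_{ri}\, f_r,
\]
\[
  \frac{\partial^2 V}{\partial x_i \partial x_j}
    = \sum_{r,s} B_{ri}\, B_{sj}\, F_{rs}
    + \sum_{r} \frac{\partial^2 q_r}{\partial x_i \partial x_j}\, f_r .
\]
```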
Abstract:
We report the results of variational calculations of the rovibrational energy levels of HCN for J = 0, 1 and 2, where we reproduce all the ca. 100 observed vibrational states for all observed isotopic species, with energies up to 18 000 cm$^{-1}$, to about $\pm $1 cm$^{-1}$, and the corresponding rotational constants to about $\pm $0.001 cm$^{-1}$. We use a hamiltonian expressed in internal coordinates r$_{1}$, r$_{2}$ and $\theta $, using the exact expression for the kinetic energy operator T obtained by direct transformation from the Cartesian representation. The potential energy V is expressed as a polynomial expansion in the Morse coordinates y$_{i}$ for the bond stretches and the interbond angle $\theta $. The basis functions are built as products of appropriately scaled Morse functions in the bond stretches and Legendre or associated Legendre polynomials of cos $\theta $ in the angle bend, and we evaluate matrix elements by Gauss quadrature. The hamiltonian matrix is factorized using the full rovibrational symmetry, and the basis is contracted to an optimized form; the dimensions of the final hamiltonian matrix vary from 240 $\times $ 240 to 1000 $\times $ 1000. We believe that our calculation is converged to better than 1 cm$^{-1}$ at 18 000 cm$^{-1}$. Our potential surface is expressed in terms of 31 parameters, about half of which have been refined by least squares to optimize the fit to the experimental data. The advantages and disadvantages and the future potential of calculations of this type are discussed.
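For readers unfamiliar with the coordinates named above, the display below spells out the standard Morse-coordinate form assumed here for illustration. Whether the angular variable enters directly or through cos $\theta $, and the values of the 31 fitted parameters, are details of the surface that the abstract does not reproduce.

```latex
% Standard Morse-coordinate expansion, assumed here for illustration; the fitted
% parameters a_i, r_i^e and the coefficients c_jkl are among the 31 parameters
% mentioned in the abstract and are not reproduced here.
\[
  y_i = 1 - e^{-a_i\,(r_i - r_i^{e})}, \qquad i = 1, 2,
\]
\[
  V(r_1, r_2, \theta) \approx \sum_{j,k,l} c_{jkl}\; y_1^{\,j}\, y_2^{\,k}\,
  (\theta - \theta_e)^{\,l}.
\]
```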
Abstract:
We have developed a novel Hill-climbing genetic algorithm (GA) for simulation of protein folding. The program (written in C) builds a set of Cartesian points to represent an unfolded polypeptide's backbone. The dihedral angles determining the chain's configuration are stored in an array of chromosome structures that is copied and then mutated. The fitness of the mutated chain's configuration is determined by its radius of gyration. A four-helix bundle was used to optimise simulation conditions, and the program was compared with other, larger, genetic algorithms on a variety of structures. The program ran 50% faster than other GA programs. Overall, tests on 100 non-redundant structures gave results comparable to other genetic algorithms, with the Hill-climbing program running between 20% and 50% faster. Examples including crambin, cytochrome c, cytochrome B and hemerythrin gave good secondary structure fits, with overall alpha carbon atom rms deviations of between 5 and 5.6 Angstrom with an optimised hydrophobic term in the fitness function. (C) 2003 Elsevier Ltd. All rights reserved.
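The loop described above (copy the chromosome of dihedral angles, mutate it, rebuild the Cartesian backbone, score by radius of gyration, keep improvements) can be sketched as follows. This is a minimal illustration, not the original C program: the backbone is reduced to one pseudo-atom per residue with fixed virtual bond lengths and angles, a single torsion per residue, and "smaller radius of gyration = fitter" is assumed as the selection rule.

```python
# Hedged sketch of a hill-climbing GA over a torsion-angle chromosome.
# Geometry and parameters are illustrative assumptions, not the paper's model.
import math
import random

BOND_LENGTH = 3.8                 # approximate Calpha-Calpha distance (Angstrom)
BOND_ANGLE = math.radians(120.0)  # fixed virtual bond angle (assumption)

def place_atom(a, b, c, length, angle, torsion):
    """Place the next point from the three previous ones (NeRF-style construction)."""
    def sub(u, v): return [u[i] - v[i] for i in range(3)]
    def cross(u, v): return [u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0]]
    def unit(u):
        n = math.sqrt(sum(x*x for x in u))
        return [x / n for x in u]
    bc = unit(sub(c, b))
    n = unit(cross(sub(b, a), bc))
    frame = [bc, cross(n, bc), n]                       # local orthonormal frame
    d = [-length*math.cos(angle),
         length*math.sin(angle)*math.cos(torsion),
         length*math.sin(angle)*math.sin(torsion)]
    return [c[i] + d[0]*frame[0][i] + d[1]*frame[1][i] + d[2]*frame[2][i] for i in range(3)]

def build_chain(torsions):
    """Cartesian pseudo-backbone rebuilt from the torsion-angle 'chromosome'."""
    coords = [[0.0, 0.0, 0.0],
              [BOND_LENGTH, 0.0, 0.0],
              [BOND_LENGTH - BOND_LENGTH*math.cos(BOND_ANGLE),
               BOND_LENGTH*math.sin(BOND_ANGLE), 0.0]]
    for t in torsions:
        coords.append(place_atom(coords[-3], coords[-2], coords[-1],
                                 BOND_LENGTH, BOND_ANGLE, t))
    return coords

def radius_of_gyration(coords):
    """Fitness measure named in the abstract: compactness of the fold."""
    n = len(coords)
    centroid = [sum(p[i] for p in coords) / n for i in range(3)]
    return math.sqrt(sum(sum((p[i]-centroid[i])**2 for i in range(3)) for p in coords) / n)

def hill_climb(n_residues=60, steps=5000, seed=0):
    rng = random.Random(seed)
    chromosome = [rng.uniform(-math.pi, math.pi) for _ in range(n_residues - 3)]
    best = radius_of_gyration(build_chain(chromosome))
    for _ in range(steps):
        mutant = list(chromosome)              # copy the chromosome, then mutate one gene
        i = rng.randrange(len(mutant))
        mutant[i] += rng.gauss(0.0, 0.3)
        fit = radius_of_gyration(build_chain(mutant))
        if fit < best:                         # hill climbing: keep only improvements
            chromosome, best = mutant, fit
    return chromosome, best

if __name__ == "__main__":
    _, rg = hill_climb()
    print(f"final radius of gyration: {rg:.2f} A")
```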
Abstract:
Quantum calculations of the ground vibrational state tunneling splitting of H-atom and D-atom transfer in malonaldehyde are performed on a full-dimensional ab initio potential energy surface (PES). The PES is a fit to 11 147 near basis-set-limit frozen-core CCSD(T) electronic energies. This surface properly describes the invariance of the potential with respect to all permutations of identical atoms. The saddle-point barrier for the H-atom transfer on the PES is 4.1 kcal/mol, in excellent agreement with the reported ab initio value. Model one-dimensional and "exact" full-dimensional calculations of the splitting for H- and D-atom transfer are done using this PES. The tunneling splittings in full dimensionality are calculated using the unbiased "fixed-node" diffusion Monte Carlo (DMC) method in Cartesian and saddle-point normal coordinates. The ground-state tunneling splitting is found to be 21.6 cm(-1) in Cartesian coordinates and 22.6 cm(-1) in normal coordinates, with an uncertainty of 2-3 cm(-1). This splitting is also calculated with a model that makes use of the exact single-well zero-point energy (ZPE) obtained with the MULTIMODE code and the DMC ZPE; this calculation gives a tunneling splitting of 21-22 cm(-1). The corresponding computed splittings for the D-atom transfer are 3.0, 3.1, and 2-3 cm(-1). These calculated tunneling splittings agree with each other to within the standard uncertainties of the DMC method, which are between 2 and 3 cm(-1), and agree well with the experimental values of 21.6 and 2.9 cm(-1) for the H and D transfer, respectively. (C) 2008 American Institute of Physics.
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure-function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This 'Cartesian' description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise 'blueprint' of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of 'fundamental', measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the 'computational neuroanatomy' strategy for neuroscience databases.
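As a concrete picture of the 'Cartesian' per-cylinder description contrasted above with the statistical and algorithmic levels, a simplified record type is sketched below. The layout loosely follows the widely used SWC convention for digital reconstructions; the abstract does not name a file format, so the field names are illustrative.

```python
# Hedged sketch of a per-cylinder (SWC-like) record for a traced dendritic tree.
# The real SWC format also carries a structure-type code; fields here are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Compartment:
    ident: int      # cylinder id
    x: float        # spatial coordinates of the distal end (um)
    y: float
    z: float
    radius: float   # half of the recorded cylinder diameter (um)
    parent: int     # id of the parent cylinder (-1 for the root)

def total_length(tree: List[Compartment]) -> float:
    """Total dendritic length: one simple scalar recoverable from the complete
    Cartesian description."""
    by_id = {c.ident: c for c in tree}
    length = 0.0
    for c in tree:
        if c.parent in by_id:
            p = by_id[c.parent]
            length += ((c.x - p.x) ** 2 + (c.y - p.y) ** 2 + (c.z - p.z) ** 2) ** 0.5
    return length

# toy three-compartment branch
tree = [Compartment(1, 0, 0, 0, 1.0, -1),
        Compartment(2, 5, 0, 0, 0.8, 1),
        Compartment(3, 5, 4, 0, 0.6, 2)]
print(total_length(tree))   # 9.0
```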
Abstract:
A robot-mounted camera is useful in many machine vision tasks as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on using pre-calibrated robots, pre-calibrated cameras, special calibration objects or combinations of these. Our method avoids using large-scale non-linear optimizations by recovering the parameters in small dependent groups. This is done by performing a series of planned, but initially uncalibrated, robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors which could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.
Abstract:
View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual "homing" experiment was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on visual landmark configuration and relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting the visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.
Abstract:
The methane tetrahedral bond angle can be derived by using spherical polar coordinates to calculate the Cartesian coordinates of the hydrogen atoms and then taking the dot product of the resulting bond vectors.
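A worked instance of the dot-product step: instead of deriving the hydrogen positions from spherical polar coordinates as the note does, the shortcut below places them at alternating corners of a cube centred on the carbon atom, which gives the same bond vectors up to rotation and hence the same angle.

```python
# Tetrahedral H-C-H angle from Cartesian coordinates and a dot product.
# Cube-corner placement is a shortcut, not the note's spherical-polar derivation.
import math

h1 = (1.0, 1.0, 1.0)
h2 = (1.0, -1.0, -1.0)                       # any two C-H bond vectors will do

dot = sum(a * b for a, b in zip(h1, h2))     # = -1
norm_sq = sum(a * a for a in h1)             # = 3, same for h2
cos_theta = dot / norm_sq                    # = -1/3

print(math.degrees(math.acos(cos_theta)))    # 109.4712... degrees
```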
Abstract:
In this paper we describe and evaluate a geometric mass-preserving redistancing procedure for the level set function on general structured grids. The proposed algorithm is adapted from a recent finite element-based method and preserves the mass by means of a localized mass correction. A salient feature of the scheme is the absence of adjustable parameters. The algorithm is tested in two and three spatial dimensions and compared with the widely used partial differential equation (PDE)-based redistancing method using structured Cartesian grids. Through the use of quantitative error measures of interest in level set methods, we show that the overall performance of the proposed geometric procedure is better than PDE-based reinitialization schemes, since it is more robust with comparable accuracy. We also show that the algorithm is well-suited for the highly stretched curvilinear grids used in CFD simulations. Copyright (C) 2010 John Wiley & Sons, Ltd.
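The two ingredients named in the abstract, geometric redistancing against the reconstructed interface and a mass correction, can be illustrated on a Cartesian grid as below. This is not the authors' algorithm: their correction is localized and their distance computation is not brute force; the sketch only shows the overall shape of such a procedure.

```python
# Hedged illustration of geometric redistancing with a mass correction on a
# Cartesian grid. The paper's method uses a localized correction; here a simpler
# global constant shift is used, and distances are computed by brute force.
import numpy as np

def reconstruct_interface(phi, h):
    """Zero crossings of phi along grid edges, by linear interpolation."""
    pts = []
    ny, nx = phi.shape
    for j in range(ny):
        for i in range(nx - 1):
            a, b = phi[j, i], phi[j, i + 1]
            if a * b < 0:
                t = a / (a - b)
                pts.append(((i + t) * h, j * h))
    for j in range(ny - 1):
        for i in range(nx):
            a, b = phi[j, i], phi[j + 1, i]
            if a * b < 0:
                t = a / (a - b)
                pts.append((i * h, (j + t) * h))
    return np.array(pts)

def redistance(phi, h):
    """Signed distance to the reconstructed interface (brute force)."""
    pts = reconstruct_interface(phi, h)
    ny, nx = phi.shape
    X, Y = np.meshgrid(np.arange(nx) * h, np.arange(ny) * h)
    d = np.min(np.hypot(X[..., None] - pts[:, 0], Y[..., None] - pts[:, 1]), axis=-1)
    return np.sign(phi) * d

def area(phi, h):
    """Area enclosed by the zero level set (phi < 0), by cell counting."""
    return np.count_nonzero(phi < 0) * h * h

def mass_correct(phi_new, target_area, h, tol=1e-10):
    """Shift phi by a constant (bisection) so the enclosed area is preserved."""
    lo, hi = -h, h
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if area(phi_new + mid, h) > target_area:
            lo = mid            # enclosed area still too large: shift further up
        else:
            hi = mid
        if hi - lo < tol:
            break
    return phi_new + 0.5 * (lo + hi)

# circle of radius 0.3 on the unit square, deliberately "de-distanced" by cubing
n, h = 101, 1.0 / 100
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x)
phi = (np.hypot(X - 0.5, Y - 0.5) - 0.3) ** 3 * 50.0   # same zero set, bad gradient
a0 = area(phi, h)
phi_c = mass_correct(redistance(phi, h), a0, h)
print(abs(area(phi_c, h) - a0) / a0)   # relative mass error after correction
```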
Abstract:
Most multidimensional projection techniques rely on distance (dissimilarity) information between data instances to embed high-dimensional data into a visual space. When data are endowed with Cartesian coordinates, an extra computational effort is necessary to compute the needed distances, making multidimensional projection prohibitive in applications dealing with interactivity and massive data. The novel multidimensional projection technique proposed in this work, called Part-Linear Multidimensional Projection (PLMP), has been tailored to handle multivariate data represented in Cartesian high-dimensional spaces, requiring only distance information between pairs of representative samples. This characteristic renders PLMP faster than previous methods when processing large data sets while still being competitive in terms of precision. Moreover, knowing the range of variation for data instances in the high-dimensional space, we can make PLMP a truly streaming data projection technique, a trait absent in previous methods.
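A minimal sketch of the part-linear idea as it reads from the abstract: a small set of representative samples is embedded with a distance-based method, and that embedding is extended to every instance by a least-squares linear map, so distances are only ever needed between the samples. The sampling strategy and solvers of the actual PLMP technique may differ; all names below are illustrative.

```python
# Hedged sketch of a "part-linear" multidimensional projection, not the actual PLMP code.
import numpy as np

def classical_mds(D, dim=2):
    """Embed a squared-distance matrix D in 'dim' dimensions (Torgerson MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D @ J                      # double centering
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def part_linear_projection(X, n_samples=50, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice(len(X), size=min(n_samples, len(X)), replace=False)
    Xs = X[s]
    D = ((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)   # distances only among samples
    Ys = classical_mds(D, dim)
    Phi, *_ = np.linalg.lstsq(Xs, Ys, rcond=None)          # linear map: Xs @ Phi ~ Ys
    return X @ Phi                                         # applied to every instance

# toy usage: 1000 points in a 10-D Cartesian space
X = np.random.default_rng(1).normal(size=(1000, 10))
print(part_linear_projection(X).shape)   # (1000, 2)
```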
Abstract:
Moving-least-squares (MLS) surfaces undergoing large deformations need periodic regeneration of the point set (point-set resampling) so as to keep the point-set density quasi-uniform. Previous work by the authors dealt with algebraic MLS surfaces, and proposed a resampling strategy based on defining the new points at the intersections of the MLS surface with a suitable set of rays. That strategy has very low memory requirements and is easy to parallelize. In this article new resampling strategies with reduced CPU-time cost are explored. The basic idea is to choose as the set of rays the lines of a regular Cartesian grid, and to fully exploit this grid: as a data structure for search queries, as a spatial structure for traversing the surface in a continuation-like algorithm, and as an approximation grid for an interpolated version of the MLS surface. It is shown that in this way a very simple and compact resampling technique is obtained, which cuts the resampling cost by half with affordable memory requirements.
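One of the grid roles listed above, the grid as a data structure for search queries, can be illustrated with a simple uniform-grid bucket index of the kind MLS neighbour searches rely on. The bucket size, query radius and point layout below are illustrative assumptions, not the article's actual implementation.

```python
# Hedged sketch of a uniform Cartesian grid used as a search structure for
# fixed-radius neighbour queries (the kind of query an MLS evaluation needs).
import math
from collections import defaultdict

class UniformGrid:
    def __init__(self, cell_size):
        self.h = cell_size
        self.cells = defaultdict(list)    # cell index -> (point index, point)

    def _cell(self, p):
        return tuple(int(math.floor(c / self.h)) for c in p)

    def insert(self, idx, p):
        self.cells[self._cell(p)].append((idx, p))

    def neighbours(self, q, radius):
        """Indices of all points within 'radius' of q, visiting only nearby cells."""
        r = int(math.ceil(radius / self.h))
        ci, cj, ck = self._cell(q)
        out = []
        for di in range(-r, r + 1):
            for dj in range(-r, r + 1):
                for dk in range(-r, r + 1):
                    for idx, p in self.cells.get((ci + di, cj + dj, ck + dk), []):
                        if sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2:
                            out.append(idx)
        return out

# usage: index a small point set and query a ball around the origin
pts = [(0.1, 0.0, 0.0), (0.9, 0.9, 0.9), (-0.2, 0.1, 0.05)]
grid = UniformGrid(cell_size=0.5)
for i, p in enumerate(pts):
    grid.insert(i, p)
print(sorted(grid.neighbours((0.0, 0.0, 0.0), radius=0.3)))   # [0, 2]
```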
Abstract:
A numerical method to approximate partial differential equations on meshes that do not conform to the domain boundaries is introduced. The proposed method is conceptually simple and free of user-defined parameters. Starting with a conforming finite element mesh, the key ingredient is to switch those elements intersected by the Dirichlet boundary to a discontinuous-Galerkin approximation and impose the Dirichlet boundary conditions strongly. By virtue of relaxing the continuity constraint at those elements, boundary locking is avoided and optimal-order convergence is achieved. This is shown through numerical experiments in reaction-diffusion problems. Copyright (c) 2008 John Wiley & Sons, Ltd.
Abstract:
We consider a four-dimensional field theory with target space CP(N), which constitutes a generalization of the usual Skyrme-Faddeev model defined on CP(1). We show that it possesses an integrable sector presenting an infinite number of local conservation laws, which are associated with the hidden symmetries of the zero-curvature representation of the theory in loop space. We construct an infinite class of exact solutions for that integrable submodel in which the fields are meromorphic functions of the combinations $x^{1} + i\,x^{2}$ and $x^{3} + x^{0}$ of the Cartesian coordinates of four-dimensional Minkowski space-time. Among those solutions we have static vortices and also vortices with waves traveling along them at the speed of light. The energy per unit length of the vortices reveals an interesting and intricate interaction between the vortices and the waves.
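Restating the solution ansatz quoted in the abstract in display form (the complex fields $u_a$ are illustrative placeholders; the paper's CP(N) parametrization and field equations are not reproduced here):

```latex
% Solution ansatz as quoted in the abstract: fields depending on the Cartesian
% coordinates only through one holomorphic and one light-cone combination.
\[
  z = x^{1} + i\,x^{2}, \qquad w = x^{3} + x^{0},
\]
\[
  u_a = u_a(z, w), \qquad a = 1, \dots, N .
\]
```

On this reading of the abstract, solutions independent of $w$ give the static vortices, while $w$-dependence supplies the waves traveling along them at the speed of light.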
Abstract:
A methodological proposal for teaching is the object of this work. For its development, the history of science is drawn upon with the aim of clarifying the current epistemological situation of Biology and, consequently, of the Biology of Education. The epistemology of biology is analyzed with a view to overcoming Cartesian rationality. This rationality is elucidated through a historical analysis of currents present in biological thought, through which the influence of positivism on the logic at work in that science can be perceived. In opposition to the positivist and Cartesian view still present in Biology, a systemic, holistic and dialectical paradigm is proposed, one that makes an integrative view of this area of knowledge possible. The study of the construction of knowledge is carried out as a basis for the elaboration of a constructivist teaching proposal, which is developed at a Federal Institution of Higher Education. In parallel, field observations are conducted at two other universities, with a view to analyzing methodological teaching procedures. The teaching proposal based on dialectical constructionism proves successful, and a deeper theoretical study of constructivism and a broad debate on the Biology of Education, grounded in the epistemology of Biology, are recommended.
Abstract:
This work addresses the concept of interaction as a factor of major importance for the development of psychology. To that end, the evolution of Western science is analyzed from its roots in Ancient Greece to the present day, highlighting the scientific revolutions that brought some modification to scientific thought. This approach made it possible to verify that two striking characteristics predominated in science: dualism and mechanism, which the Scientific Revolution of the seventeenth century accentuated thanks to the philosophy of Descartes and the physics of Newton. According to the Cartesian-Newtonian paradigm, interaction in psychology is nothing more than the action of one thing upon another and vice versa; in other words, the human being is the resultant of forces, whether internal or external, that act upon him and to which he reacts. Hence, interaction carries a linear, mechanistic connotation of cause and effect. This physicalist and reductionist view hindered, to some extent, the development of psychology, which undergoes, like science in general, a transformation and the consequent reformulation and creation of concepts. It is thus concluded that, in this new framework, psychology, by considering the human being as a system, that is, a whole whose parts, all of them relevant, integrate in a permanent struggle of being and not being, of order and disorder in constant reorganization, brings out the concept of interaction as a factor for a greater and more complete knowledge of man, a being that appears paradoxical and contradictory when approached with methodologies inadequate for his investigation.