850 results for articulated motion structure learning
Abstract:
Protein structure prediction has remained a major challenge in structural biology for more than half a century. Accelerated and cost-efficient sequencing technologies have allowed researchers to sequence new organisms and discover new protein sequences. Novel protein structure prediction technologies will allow researchers to study the structure of proteins, determine their roles in the underlying biological processes, and develop novel therapeutics.
The difficulty of the problem is twofold: (a) describing the energy landscape that corresponds to the protein structure, commonly referred to as the force-field problem; and (b) sampling the energy landscape to find the lowest-energy configuration, which is hypothesized to be the native state of the structure in solution. The two problems are intertwined and have to be solved simultaneously. This thesis is composed of three major contributions. In the first chapter we describe a novel high-resolution protein structure refinement algorithm called GRID. In the second chapter we present REMCGRID, an algorithm for generating low-energy decoy sets. In the third chapter, we present a machine learning approach to ranking decoys that incorporates coarse-grain features of protein structures.
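The abstract does not say which learner the decoy-ranking chapter uses, so the following is only a plausible sketch: hypothetical coarse-grain features (radius of gyration, contact order, secondary-structure fractions) feed a gradient-boosted regressor trained to score decoy quality, and decoys are ranked by predicted score. All names, features, and targets are illustrative, not the thesis's actual pipeline.

```python
# Illustrative sketch of decoy ranking from coarse-grain features.
# Feature names and the training target (a stand-in quality score such as
# GDT-TS) are assumptions, not the pipeline described in the thesis.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical coarse-grain features per decoy:
# [radius_of_gyration, relative_contact_order, helix_fraction, sheet_fraction]
X_train = rng.normal(size=(500, 4))
y_train = rng.uniform(0.0, 1.0, size=500)   # stand-in quality scores

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

# Rank a fresh decoy set from best predicted quality to worst.
X_decoys = rng.normal(size=(50, 4))
ranking = np.argsort(-model.predict(X_decoys))
print("best-ranked decoy index:", ranking[0])
```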
Abstract:
This is a two-part thesis concerning the motion of a test particle in a bath. In part one we use an expansion of the operator PL·e^{it(1−P)L}·LP to shape the Zwanzig equation into a generalized Fokker-Planck equation which involves a diffusion tensor depending on the test particle's momentum and the time.
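The abstract does not reproduce the resulting equation. In the standard projection-operator treatment, the outcome typically takes the following generalized Fokker-Planck form, shown here as a sketch from the general literature rather than the thesis's exact result; D(p, t) is the momentum- and time-dependent diffusion tensor, β = 1/(k_B T), and M is the test-particle mass:

```latex
% Generic weak-coupling generalized Fokker-Planck form (illustrative only,
% not the thesis's exact result).
\frac{\partial f(\mathbf{p},t)}{\partial t}
  = \frac{\partial}{\partial \mathbf{p}} \cdot
    \mathbf{D}(\mathbf{p},t) \cdot
    \left( \frac{\partial}{\partial \mathbf{p}}
           + \frac{\beta}{M}\,\mathbf{p} \right) f(\mathbf{p},t)
```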
In part two the resultant equation is studied in some detail for the case of test particle motion in a weakly coupled Lorentz Gas. The diffusion tensor for this system is considered. Some of its properties are calculated; it is computed explicitly for the case of a Gaussian potential of interaction.
The equation for the test particle distribution function can be put into the form of an inhomogeneous Schrödinger equation. The term corresponding to the potential energy in the Schrödinger equation is considered. Its structure is studied, and some of its simplest features are used to find the Green's function in the limiting situations of low density and long time.
Abstract:
The purpose of this work is to extend experimental and theoretical understanding of horizontal Bloch line (HBL) motion in magnetic bubble materials. The present theory of HBL motion is reviewed, and then extended to include transient effects in which the internal domain wall structure changes with time. This is accomplished by numerically solving the equations of motion for the internal azimuthal angle ɸ and the wall position q as functions of z, the coordinate perpendicular to the thin-film material, and time. The effects of HBLs on domain wall motion are investigated by comparing results from wall oscillation experiments with those from the theory. In these experiments, a bias field pulse is used to make a step change in the equilibrium position of either bubble or stripe domain walls, and the wall response is measured by using transient photography. During the initial response, the dynamic wall structure closely resembles the initial static structure. The wall accelerates to a relatively high velocity (≈20 m/sec), resulting in a short (≈22 nsec) section of initial rapid motion. An HBL gradually forms near one of the film surfaces as a result of local dynamic properties, and moves along the wall surface toward the film center. The presence of this structure produces low-frequency, triangular-shaped oscillations in which the experimental wall velocity is nearly constant, v_s ≈ 5-8 m/sec. If the HBL reaches the opposite surface, i.e., if the average internal angle reaches an integer multiple of π, the momentum stored in the HBL is lost, and the wall chirality is reversed. This results in abrupt transitions to overdamped motion and changes in wall chirality, which are observed as a function of bias pulse amplitude. The pulse amplitude at which the nth punch-through occurs just as the wall reaches equilibrium is given within 0.2 Oe by H_n = (2v_s H'/γ)^(1/2)·(nπ)^(1/2) + H_sv, where H' is the effective field gradient from the surrounding domains, and H_sv is a small (less than 0.03 Oe) effective drag field. Observations of wall oscillation in the presence of in-plane fields parallel to the wall show that HBL formation is suppressed by fields greater than about 40 Oe (≈2πM_s), resulting in the high-frequency, sinusoidal oscillations associated with a simple internal wall structure.
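As a quick numerical illustration of the punch-through formula quoted above, the sketch below evaluates H_n for the first few n. The parameter values are invented placeholders with plausible CGS magnitudes, not the thesis's measured material constants.

```python
# Evaluate the punch-through threshold H_n = (2*v_s*H'/gamma)**0.5 * (n*pi)**0.5 + H_sv
# from the abstract. All numerical values below are illustrative placeholders,
# not the measured material parameters from the thesis.
import math

v_s   = 6.0e2      # saturation wall velocity, cm/sec (≈6 m/sec)
H_p   = 1.0e3      # effective field gradient H' from surrounding domains, Oe/cm
gamma = 1.76e7     # gyromagnetic ratio, rad/(sec·Oe)
H_sv  = 0.02       # small effective drag field, Oe

for n in range(1, 4):
    H_n = math.sqrt(2.0 * v_s * H_p / gamma) * math.sqrt(n * math.pi) + H_sv
    print(f"n={n}: H_n ≈ {H_n:.2f} Oe")
```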
Abstract:
A study is made of the accuracy of electronic digital computer calculations of ground displacement and response spectra from strong-motion earthquake accelerograms. This involves an investigation of methods of the preparatory reduction of accelerograms into a form useful for the digital computation, and of the accuracy of the subsequent digital calculations. Various checks are made for both the ground displacement and response spectra results, and it is concluded that the main errors are those involved in digitizing the original record. Differences resulting from various investigators digitizing the same experimental record may become as large as 100% of the maximum computed ground displacement. The spread of the results of ground displacement calculations is greater than that of the response spectra calculations. Standardized methods of adjustment and calculation are recommended to minimize such errors.
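For context on the computation whose accuracy is being assessed, the sketch below shows the core of a response-spectrum calculation in modern form: a digitized accelerogram drives a damped linear single-degree-of-freedom oscillator, integrated with the Newmark average-acceleration method, and the peak displacement is recorded for each oscillator period. This is generic textbook numerics, not the thesis's historical program.

```python
# Minimal response-spectrum sketch: peak relative displacement of a linear
# SDOF oscillator (damping ratio zeta) driven by a digitized accelerogram,
# integrated with the Newmark average-acceleration method (gamma=1/2, beta=1/4).
import numpy as np

def sd_response(ag, dt, period, zeta=0.05):
    """Peak relative displacement for one oscillator period."""
    w = 2.0 * np.pi / period
    m, c, k = 1.0, 2.0 * zeta * w, w**2
    g, b = 0.5, 0.25                       # Newmark parameters
    kh = k + g * c / (b * dt) + m / (b * dt**2)
    u = v = 0.0
    a = -ag[0]                             # initial acceleration (u = v = 0)
    peak = 0.0
    for p in -ag[1:]:                      # effective force is -m * ag, with m = 1
        ph = (p + m * (u / (b * dt**2) + v / (b * dt) + (0.5 / b - 1.0) * a)
                + c * (g * u / (b * dt) + (g / b - 1.0) * v
                       + dt * (0.5 * g / b - 1.0) * a))
        un = ph / kh
        an = (un - u) / (b * dt**2) - v / (b * dt) - (0.5 / b - 1.0) * a
        vn = v + dt * ((1.0 - g) * a + g * an)
        u, v, a = un, vn, an
        peak = max(peak, abs(u))
    return peak

# Example: synthetic decaying-sine accelerogram sampled at 50 Hz.
dt = 0.02
t = np.arange(0.0, 10.0, dt)
ag = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.3 * t)
print([round(sd_response(ag, dt, T), 4) for T in (0.2, 0.5, 1.0, 2.0)])
```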
Studies are made of the spread of response spectral values about their mean. The distribution is investigated experimentally by Monte Carlo techniques using an electric analog system with white noise excitation, and histograms are presented indicating the dependence of the distribution on the damping and period of the structure. Approximate distributions are obtained analytically by confirming and extending existing results with accurate digital computer calculations. A comparison of the experimental and analytical approaches indicates good agreement for low damping values where the approximations are valid. A family of distribution curves to be used in conjunction with existing average spectra is presented. The combination of analog and digital computations used with Monte Carlo techniques is a promising approach to the statistical problems of earthquake engineering.
Methods of analysis of very small earthquake ground motion records obtained simultaneously at different sites are discussed. The advantages of Fourier spectrum analysis for certain types of studies, and methods of calculation of Fourier spectra, are presented. The digitizing and analysis of several earthquake records are described, and checks are made of the dependence of results on digitizing procedure, earthquake duration, and integration step length. Possible dangers of a direct ratio comparison of Fourier spectrum curves are pointed out, and the necessity of some type of smoothing procedure before comparison is established. A standard method of analysis for the study of comparative ground motion at different sites is recommended.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpreting an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images at the same time, because we specify it at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
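To make the variance-reduction idea concrete, here is a minimal importance-sampling sketch in the same spirit: rare deep-penetration photon paths are sampled from a biased distribution and re-weighted so the estimate stays unbiased. It is a toy illustration of the technique named in the abstract, not the thesis's simulator.

```python
# Toy importance-sampling illustration: estimate the (rare) probability that
# a photon's free path exceeds a depth d under exponential attenuation.
# Not the thesis's simulator; all numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
mu, d, n = 1.0, 8.0, 100_000          # attenuation coeff., target depth, samples
exact = np.exp(-mu * d)               # analytic answer for reference

# Naive estimator: sample path lengths from the true Exp(mu) distribution.
naive = (rng.exponential(1.0 / mu, n) > d).mean()

# Importance sampling: draw from a stretched Exp(mu_b) with mu_b < mu,
# and re-weight each sample by the likelihood ratio so the mean is unbiased.
mu_b = 0.125
s = rng.exponential(1.0 / mu_b, n)
w = (mu / mu_b) * np.exp(-(mu - mu_b) * s)    # p_true(s) / p_biased(s)
is_est = np.where(s > d, w, 0.0).mean()

print(f"exact {exact:.2e}  naive {naive:.2e}  importance {is_est:.2e}")
```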
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a great position by providing a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the length of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from deep learning can further improve the performance.
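The classify-then-regress hierarchy described above can be sketched in a few lines. The sketch below uses synthetic stand-in data and scikit-learn models, so the features, model choices, and dimensions are placeholders rather than the thesis's actual architecture.

```python
# Sketch of the committee-of-experts idea: a classifier picks the layer
# structure, then a structure-specific regressor predicts layer extents.
# Synthetic data and model choices are placeholders, not the thesis's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n, d, n_structures = 2000, 64, 3          # images, feature dim, structure types

X = rng.normal(size=(n, d))               # stand-in image features
structure = rng.integers(0, n_structures, n)
layers = rng.uniform(0.1, 1.0, size=(n, 4))   # stand-in per-layer extents

clf = RandomForestClassifier(n_estimators=100).fit(X, structure)

# One regression "expert" per structure type, trained on its own subset.
experts = {s: RandomForestRegressor(n_estimators=100).fit(X[structure == s],
                                                          layers[structure == s])
           for s in range(n_structures)}

x_new = rng.normal(size=(1, d))
s_hat = int(clf.predict(x_new)[0])        # stage 1: which structure?
print("structure:", s_hat, "layers:", experts[s_hat].predict(x_new)[0].round(3))
```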
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information: fed into a well-trained machine learning model, they yield precisely the true structure of the object being imaged. This is another case where artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even to attempt this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
The effect of intermolecular coupling on molecular energy levels (electronic and vibrational) has been investigated in neat and isotopic mixed crystals of benzene. In the isotopic mixed crystals of C6H6, C6H5D, m-C6H4D2, p-C6H4D2, sym-C6H3D3, C6D5H, and C6D6 in either a C6H6 or C6D6 host, the following phenomena have been observed and interpreted in terms of a refined Frenkel exciton theory: a) site shifts; b) site-group splittings of the degenerate ground state vibrations of C6H6, C6D6, and sym-C6H3D3; c) the orientational effect for the isotopes without a trigonal axis, in both the ¹B₂ᵤ electronic state and the ground state vibrations; d) intrasite Fermi resonance between molecular fundamentals due to the reduced symmetry of the crystal site; and e) intermolecular or intersite Fermi resonance between nearly degenerate states of the host and guest molecules. In the neat crystal experiments on the ground state vibrations it was possible to observe many of these phenomena in conjunction with, and in addition to, the exciton structure.
To theoretically interpret these diverse experimental data, the concepts of interchange symmetry, the ideal mixed crystal, and site wave functions have been developed and are presented in detail. In the interpretation of the exciton data the relative signs of the intermolecular coupling constants have been emphasized, and in the limit of the ideal mixed crystal a technique is discussed for locating the exciton band center or unobserved exciton components. A differentiation between static and dynamic interactions is made in the Frenkel limit which enables the concepts of site effects and exciton coupling to be sharpened. It is thus possible to treat the crystal induced effects in such a fashion as to make their similarities and differences quite apparent.
A calculation of the ground state vibrational phenomena (site shifts and splittings, orientational effects, and exciton structure) and of the crystal lattice modes has been carried out for these systems. This calculation serves as a test of the approximations of first-order Frenkel theory and of the atom-atom pairwise interaction model for the intermolecular potentials. The general form of the potential employed was V(r) = B·e^(−Cr) − A/r⁶; the force constants were obtained from the potential by assuming the atoms were undergoing simple harmonic motion.
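The force-constant step can be illustrated directly: for the exp-6 potential quoted above, the harmonic force constant for an atom pair is the second derivative of V at the equilibrium separation. The sketch below does this numerically; the values of A, B, and C are arbitrary placeholders, not the fitted constants from the thesis.

```python
# Harmonic force constant from the exp-6 ("Buckingham") pair potential
# V(r) = B*exp(-C*r) - A/r**6, as quoted in the abstract.
# A, B, C below are arbitrary placeholders, not the thesis's fitted values.
import numpy as np

A, B, C = 500.0, 4.0e4, 3.6          # illustrative units: kcal/mol and Å

def V(r):
    return B * np.exp(-C * r) - A / r**6

# Locate the potential minimum (equilibrium separation) on a fine grid.
r = np.linspace(2.5, 6.0, 100_000)
r0 = r[np.argmin(V(r))]

# Force constant k = V''(r0), by central finite difference.
h = 1e-4
k = (V(r0 + h) - 2.0 * V(r0) + V(r0 - h)) / h**2
print(f"r0 ≈ {r0:.3f} Å,  k = V''(r0) ≈ {k:.3f} kcal/(mol·Å²)")
```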
In part II the location and identification of the benzene first and second triplet states (³B₁ᵤ and ³E₁ᵤ) are given.
Abstract:
To understand harbor seal social and mating strategies, I examined site fidelity, seasonal abundance and distribution, herd integrity, and underwater behavior of individual harbor seals in southern Monterey Bay. Individual harbor seals (n = 444) were identified by natural markings and represented greater than 80% of an estimated 520 seals within this community. Year-to-year fidelity of individual harbor seals to the southern Monterey Bay coastline was 84% (n = 388), and long-term associations (>2 yrs) among individuals were common (>40%). Consistent with these long-term associations, harbor seals were highly social underwater throughout the year. Underwater social behavior included three primary types: (1) visual and acoustic displays, such as vocalizing, surface splashing, and bubble-blowing; (2) playful or agonistic social behavior, such as rolling, mounting, attending, and biting; and (3) signal gestures, such as head-thrusting, fore-flipper scratching, and growling. The frequency of these types of behavior was related to seal age, gender, season, and resource availability. Underwater behavior had a variety of functions, including promotion of learning and social development, reduction of aggression and preservation of social bonds by maintaining social hierarchy, and facilitation of mate selection during the breeding season. Social behavior among adult males was significantly correlated with vocalization characteristics (r = 0.99, χ² = 37.7, p = 0.00087), indicating that seals may assess their competition based on underwater vocalization displays and adopt individual strategies for attracting females during the breeding season based on social status. Individual mating strategies may include defending underwater territories, using scramble tactics, and developing social alliances.
Abstract:
Background: The highly demanding computational requirements necessary to carry out protein motion simulations make it difficult to obtain information related to protein motion. On the one hand, molecular dynamics simulation requires huge computational resources to achieve satisfactory motion simulations. On the other hand, less accurate procedures, such as interpolation methods, do not generate realistic morphs from the kinematic point of view. Analyzing a protein's movement is very similar to analyzing serial robots; thus, it is possible to treat the protein chain as a serial mechanism composed of rotational degrees of freedom. Recently, based on this hypothesis, new methodologies have arisen, based on mechanism and robot kinematics, to simulate protein motion. The probabilistic roadmap method, which discretizes the protein configurational space against a scoring function, and the kinetostatic compliance method, which minimizes the torques that appear in bonds, aim to simulate protein motion at a reduced computational cost. Results: In this paper a new viewpoint for protein motion simulation, based on mechanism kinematics, is presented. The paper describes a set of methodologies combining different techniques, such as structure normalization processes, simulation algorithms, and secondary structure detection procedures. The combination of all these procedures makes it possible to obtain kinematic morphs of proteins with a very good computational cost-error trade-off, while maintaining the biological meaning of the obtained structures and the kinematic viability of the obtained motion. Conclusions: The procedure presented in this paper implements different modules to perform the simulation of the conformational change undergone by a protein when performing its function. The combination of a main simulation procedure assisted by a secondary structure process and a side-chain orientation strategy allows fast and reliable simulations of protein motion to be obtained.
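The serial-mechanism view of the protein chain can be made concrete with a forward-kinematics sketch: each rotational joint moves every downstream atom by a rotation about an axis through that joint (Rodrigues' formula). The geometry below is a made-up toy chain, not a real protein model or the paper's method.

```python
# Toy forward kinematics for the protein-as-serial-mechanism view: a
# rotational joint at atom k moves every downstream atom by a rotation
# about an axis through that atom (Rodrigues' rotation formula). For a
# true dihedral the axis would be the bond direction; a z-axis "bend" is
# used here so the effect is visible on a straight toy chain.
import numpy as np

def rotate_about_axis(points, origin, axis, angle):
    """Rotate `points` about the line through `origin` along unit `axis`."""
    axis = axis / np.linalg.norm(axis)
    p = points - origin
    c, s = np.cos(angle), np.sin(angle)
    rot = p * c + np.cross(axis, p) * s + axis * (p @ axis)[:, None] * (1.0 - c)
    return rot + origin

# A straight six-atom "backbone" along x.
chain = np.column_stack([np.arange(6.0), np.zeros(6), np.zeros(6)])

def set_joint(chain, k, angle, axis=np.array([0.0, 0.0, 1.0])):
    """Apply the rotational degree of freedom at atom k to atoms k+1..end."""
    new = chain.copy()
    new[k + 1:] = rotate_about_axis(chain[k + 1:], chain[k], axis, angle)
    return new

print(set_joint(chain, 2, np.pi / 3).round(3))   # atoms 3..5 swing by 60°
```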
Abstract:
When we have learned a motor skill, such as cycling or ice-skating, we can rapidly generalize to novel tasks, such as motorcycling or rollerblading [1-8]. Such facilitation of learning could arise through two distinct mechanisms by which the motor system might adjust its control parameters. First, fast learning could simply be a consequence of the proximity of the original and final settings of the control parameters. Second, by structural learning [9-14], the motor system could constrain the parameter adjustments to conform to the covariance structure of the control parameters. Thus, facilitation of learning would rely on the novel task's parameters lying on a lower-dimensional subspace that can be explored more efficiently. To test between these two hypotheses, we exposed subjects to randomly varying visuomotor tasks of fixed structure. Although such randomly varying tasks are thought to prevent learning, we show that when subsequently presented with novel tasks, subjects exhibit three key features of structural learning: facilitated learning of tasks with the same structure, a strong reduction in the interference normally observed when switching between tasks that require opposite control strategies, and preferential exploration along the learned structure. These results suggest that skill generalization relies on task variation and structural learning.
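A toy simulation can illustrate the structural-learning hypothesis: with noisy error feedback, a learner that confines its parameter updates to the one-dimensional subspace on which past tasks varied reaches a novel on-structure task more reliably than a learner searching the full two-dimensional space. This is purely illustrative, not the paper's experimental model.

```python
# Toy illustration of structural learning: constrain noisy gradient updates
# to the learned 1-D task subspace vs. searching the full 2-D space.
# Purely illustrative; not the experimental model from the paper.
import numpy as np

rng = np.random.default_rng(0)
direction = np.array([1.0, 0.5])
direction /= np.linalg.norm(direction)    # learned task structure (1-D subspace)
target = 2.0 * direction                  # novel task lying on the structure

def final_error(project, steps=50, lr=0.3, noise=1.0):
    theta = np.zeros(2)
    for _ in range(steps):
        grad = 2.0 * (theta - target) + rng.normal(scale=noise, size=2)
        if project:                        # structural learner: stay on-structure
            grad = direction * (grad @ direction)
        theta -= lr * grad
    return np.linalg.norm(theta - target)

runs = 500
full = np.mean([final_error(False) for _ in range(runs)])
structured = np.mean([final_error(True) for _ in range(runs)])
print(f"mean final error: full-space {full:.3f}  structured {structured:.3f}")
```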
Abstract:
Sensorimotor learning has been shown to depend on both prior expectations and sensory evidence in a way that is consistent with Bayesian integration. Thus, prior beliefs play a key role during the learning process, especially when only ambiguous sensory information is available. Here we develop a novel technique to estimate the covariance structure of the prior over visuomotor transformations--the mapping between the actual and visual location of the hand--during a learning task. Subjects performed reaching movements under multiple visuomotor transformations in which they received visual feedback of their hand position only at the end of the movement. After experiencing a particular transformation for one reach, subjects have insufficient information to determine the exact transformation, and so their second reach reflects a combination of their prior over visuomotor transformations and the sensory evidence from the first reach. We developed a Bayesian observer model in order to infer the covariance structure of the subjects' prior, which was found to give high probability to parameter settings consistent with visuomotor rotations. Therefore, although the set of visuomotor transformations experienced had little structure, the subjects had a strong tendency to interpret ambiguous sensory evidence as arising from rotation-like transformations. We then exposed the same subjects to a highly structured set of visuomotor transformations, designed to be very different from the set of visuomotor rotations. During this exposure the prior was found to have changed significantly, to a covariance structure that no longer favored rotation-like transformations. In summary, we have developed a technique that can estimate the full covariance structure of a prior in a sensorimotor task, and have shown that the prior over visuomotor transformations favors a rotation-like structure. Moreover, through experience of a novel task structure, participants can appropriately alter the covariance structure of their prior.
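The Bayesian integration at the heart of this method can be sketched with a conjugate Gaussian model: a prior over transformation parameters with covariance Σ is combined with the noisy evidence from the first reach, and the second reach reflects the posterior mean. The parameterization, covariances, and noise levels below are illustrative placeholders, not the paper's fitted observer model.

```python
# Conjugate-Gaussian sketch of prior/evidence integration: the second reach
# reflects the posterior over the visuomotor transformation after one noisy
# observation. The rotation-favoring prior covariance and all numbers are
# placeholders, not the paper's fitted model.
import numpy as np

# Parameterize a transformation by two components (say, shear and rotation).
# Prior covariance putting most of its mass on rotation-like transformations:
Sigma_prior = np.array([[0.05, 0.0],
                        [0.0,  1.0]])
mu_prior = np.zeros(2)

Sigma_noise = 0.5 * np.eye(2)          # sensory noise on the first reach
observed = np.array([0.4, 0.8])        # evidence from the first reach (made up)

# Gaussian posterior: precision-weighted combination of prior and evidence.
Sigma_post = np.linalg.inv(np.linalg.inv(Sigma_prior)
                           + np.linalg.inv(Sigma_noise))
mu_post = Sigma_post @ (np.linalg.inv(Sigma_noise) @ observed
                        + np.linalg.inv(Sigma_prior) @ mu_prior)
print("posterior mean (predicted second-reach correction):", mu_post.round(3))
```

Note how the low-variance (shear) component of the evidence is shrunk toward zero while the rotation-like component survives, which is the sense in which ambiguous evidence gets interpreted as rotation-like.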
Abstract:
The partially observable Markov decision process (POMDP) provides a popular framework for modelling spoken dialogue. This paper describes how the expectation propagation (EP) algorithm can be used to learn the parameters of the POMDP user model. Various special probability factors applicable to this task are presented, which allow the parameters to be learned even when the structure of the dialogue is complex. No annotations are required: neither the true dialogue state nor the true semantics of user utterances. Parameters optimised using the proposed techniques are shown to improve performance in both offline transcription experiments and simulated dialogue management. ©2010 IEEE.
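For context, the core POMDP computation in a dialogue system is the belief update over hidden dialogue states after each user turn. The sketch below shows the generic update; it is standard POMDP machinery with toy numbers, not the paper's EP-based parameter learning.

```python
# Generic POMDP belief update: b'(s') ∝ O(obs | s') * sum_s T(s, s') * b(s).
# Toy three-state example for a single action; not the paper's EP method.
import numpy as np

T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])          # transition probs for one action
O = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])          # P(observation | next state)

def belief_update(b, obs):
    """Bayes-filter step over the hidden dialogue state."""
    b_new = O[:, obs] * (b @ T)
    return b_new / b_new.sum()

b = np.full(3, 1.0 / 3.0)                # uniform initial belief
for obs in (0, 0, 1):                    # a short sequence of user observations
    b = belief_update(b, obs)
print("belief over hidden dialogue states:", b.round(3))
```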
Abstract:
This paper describes large-scale simulations of compressible flow over a supersonic disk-gap-band parachute system. An adaptive mesh refinement method is used to resolve the coupled fluid-structure model. The fluid model employs large-eddy simulation to describe the turbulent wakes appearing upstream and downstream of the parachute canopy, and the structural model employs a thin-shell finite element solver that allows large canopy deformations by using subdivision finite elements. The fluid-structure interaction is described by a variant of the Ghost-Fluid method. The simulation was carried out at Mach number 1.96, where strong nonlinear coupling between the system of bow shocks, the turbulent wake, and the canopy is observed. It was found that the canopy oscillations were characterized by a breathing-type motion due to the strong interaction of the turbulent wake and the bow shock upstream of the flexible canopy. Copyright © 2010 by ASME.
Abstract:
Camera motion estimation is one of the most significant steps in structure-from-motion (SFM) with a monocular camera. The normalized 8-point, the 7-point, and the 5-point algorithms are normally adopted to perform the estimation, each of which has distinct performance characteristics. Given the unique needs and challenges associated with civil infrastructure SFM scenarios, selection of the proper algorithm directly impacts the structure reconstruction results. In this paper, a comparison study of the aforementioned algorithms is conducted to identify the most suitable algorithm, in terms of accuracy and reliability, for reconstructing civil infrastructure. The free variables tested are baseline, depth, and motion. A concrete girder bridge was selected as the "test-bed" and reconstructed using an off-the-shelf camera capturing imagery from all possible positions that maximally cover the bridge's features and geometry. The feature points in the images were extracted and matched via the SURF descriptor. Finally, camera motions are estimated from the corresponding image points by applying the aforementioned algorithms, and the results are evaluated.
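A minimal version of this two-view pipeline can be sketched with OpenCV: SURF matching followed by essential-matrix estimation and pose recovery. The image paths and intrinsic matrix K are placeholders; OpenCV's findEssentialMat implements the 5-point algorithm, while the 7-point and 8-point variants are reachable through findFundamentalMat for comparison. SURF requires an opencv-contrib build with nonfree modules enabled.

```python
# Minimal monocular two-view motion estimation in the spirit of the paper:
# SURF matching, then 5-point essential-matrix estimation and pose recovery.
# Image paths and the intrinsic matrix K are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("bridge_view1.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder paths
img2 = cv2.imread("bridge_view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1200.0, 0, 960], [0, 1200.0, 540], [0, 0, 1]])  # assumed intrinsics

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)       # needs contrib/nonfree
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Match descriptors with Lowe's ratio test.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 5-point algorithm inside RANSAC; 7/8-point variants are available via
# cv2.findFundamentalMat(..., cv2.FM_7POINT / cv2.FM_8POINT) for comparison.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("rotation:\n", R.round(3), "\ntranslation direction:", t.ravel().round(3))
```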