12 results for articulated motion structure learning

in CaltechTHESIS


Relevance: 30.00%

Abstract:

Six topics in incompressible, inviscid fluid flow involving vortex motion are presented. The stability of the unsteady flow field due to the vortex filament expanding under the influence of an axial compression is examined in the first chapter as a possible model of the vortex bursting observed in aircraft contrails. The filament with a stagnant core is found to be unstable to axisymmetric disturbances. For initial disturbances with the form of axisymmetric Kelvin waves, the filament with a uniformly rotating core is neutrally stable, but the compression causes the disturbance to undergo a rapid increase in amplitude. The time at which the increase occurs is, however, later than the observed bursting times, indicating the bursting phenomenon is not caused by this type of instability.

In the second and third chapters the stability of a steady vortex filament deformed by two-dimensional strain and shear flows, respectively, is examined. The steady deformations are in the plane of the vortex cross-section. Disturbances which deform the filament centerline into a wave which does not propagate along the filament are shown to be unstable and a method is described to calculate the wave number and corresponding growth rate of the amplified waves for a general distribution of vorticity in the vortex core.

In Chapter Four exact solutions are constructed for two-dimensional potential flow over a wing with a free ideal vortex standing over the wing. The loci of positions of the free vortex are found and the lift is calculated. It is found that the lift on the wing can be significantly increased by the free vortex.

The two-dimensional trajectories of an ideal vortex pair near an orifice are calculated in Chapter Five. Three geometries are examined, and the criteria for the vortices to travel away from the orifice are determined.

Finally, Chapter Six reproduces completely the paper, "Structure of a linear array of hollow vortices of finite cross-section," co-authored with G. R. Baker and P. G. Saffman. Free streamline theory is employed to construct an exact steady solution for a linear array of hollow, or stagnant cored vortices. If each vortex has area A and the separation is L, then there are two possible shapes if A^(1/2)/L is less than 0.38 and none if it is larger. The stability of the shapes to two-dimensional, periodic and symmetric disturbances is considered for hollow vortices. The more deformed of the two possible shapes is found to be unstable, while the less deformed shape is stable.
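As a quick numerical illustration of the criterion quoted above (not part of the original abstract; the threshold 0.38 is the value stated there), the existence of steady shapes can be checked directly:

```python
# Illustrative check of the steady-shape criterion quoted above:
# a linear array of hollow vortices with core area A and spacing L
# admits two steady shapes when sqrt(A)/L < 0.38 and none otherwise.
import math

def steady_shapes(A, L, threshold=0.38):
    """Return the number of steady shapes (2 or 0) for the given geometry."""
    ratio = math.sqrt(A) / L
    return 2 if ratio < threshold else 0

print(steady_shapes(A=0.05, L=1.0))  # ratio ~ 0.22 -> 2 shapes
print(steady_shapes(A=0.25, L=1.0))  # ratio = 0.5  -> 0 shapes
```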

Relevance: 30.00%

Abstract:

Neurons in the songbird forebrain nucleus HVc are highly sensitive to auditory temporal context and have some of the most complex auditory tuning properties yet discovered. HVc is crucial for learning, perceiving, and producing song; it is therefore important to understand the neural circuitry and mechanisms that give rise to these remarkable auditory response properties. This thesis investigates these issues experimentally and computationally.

Extracellular studies reported here compare the auditory context sensitivity of neurons in HVc with neurons in the afferent areas of field L. These demonstrate that there is a substantial increase in auditory temporal context sensitivity from the areas of field L to HVc. Whole-cell recordings of HVc neurons from acute brain slices are described which show that excitatory synaptic transmission between HVc neurons involves the release of glutamate and the activation of both AMPA/kainate and NMDA-type glutamate receptors. Additionally, widespread inhibitory interactions exist between HVc neurons that are mediated by postsynaptic GABA_A receptors. Intracellular recordings of HVc auditory neurons in vivo provide evidence that HVc neurons encode information about temporal structure using a variety of cellular and synaptic mechanisms, including syllable-specific inhibition, excitatory post-synaptic potentials with a range of different time courses, burst firing, and song-specific hyperpolarization.

The final part of this thesis presents two computational approaches for representing and learning temporal structure. The first method utilizes computational elements that are analogous to temporal combination-sensitive neurons in HVc. A network of these elements can learn using local information and lateral inhibition. The second method presents a more general framework which allows a network to discover mixtures of temporal features in a continuous stream of input.
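As a loose illustration only (the abstract does not give the model equations), a toy network of pairwise "temporal combination sensitive" units with local, winner-take-all learning might look like the following sketch; the symbol set, unit count, and learning rate are all assumptions.

```python
# Hedged sketch (not the thesis code): a toy network of "temporal combination
# sensitive" units. Each unit responds to a specific ordered pair of symbols
# (e.g., syllable A followed by syllable B), learns from local co-occurrence,
# and competes with other units through winner-take-all lateral inhibition.
import numpy as np

rng = np.random.default_rng(0)
symbols = "ABCD"
n_units = 6
# each unit's weight over ordered pairs (previous symbol, current symbol)
W = rng.random((n_units, len(symbols), len(symbols))) * 0.1

def step(prev_idx, cur_idx, lr=0.05):
    """Local learning with lateral inhibition: only the most active unit updates."""
    activations = W[:, prev_idx, cur_idx]
    winner = int(np.argmax(activations))          # winner-take-all lateral inhibition
    W[winner, prev_idx, cur_idx] += lr * (1.0 - W[winner, prev_idx, cur_idx])
    return winner

stream = rng.choice(len(symbols), size=500)       # continuous stream of input
for prev, cur in zip(stream[:-1], stream[1:]):
    step(prev, cur)

print("preferred pair of unit 0:", np.unravel_index(np.argmax(W[0]), W[0].shape))
```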

Relevance: 30.00%

Abstract:

The temporal structure of neuronal spike trains in the visual cortex can provide detailed information about the stimulus and about the neuronal implementation of visual processing. Spike trains recorded from the macaque motion area MT in previous studies (Newsome et al., 1989a; Britten et al., 1992; Zohary et al., 1994) are analyzed here in the context of the dynamic random dot stimulus which was used to evoke them. If the stimulus is incoherent, the spike trains can be highly modulated and precisely locked in time to the stimulus. In contrast, the coherent motion stimulus creates little or no temporal modulation and allows us to study patterns in the spike train that may be intrinsic to the cortical circuitry in area MT. Long gaps in the spike train evoked by the preferred direction motion stimulus are found, and they appear to be symmetrical to bursts in the response to the anti-preferred direction of motion. A novel cross-correlation technique is used to establish that the gaps are correlated between pairs of neurons. Temporal modulation is also found in psychophysical experiments using a modified stimulus. A model is made that can account for the temporal modulation in terms of the computational theory of biological image motion processing. A frequency domain analysis of the stimulus reveals that it contains a repeated power spectrum that may account for psychophysical and electrophysiological observations.

Some neurons tend to fire bursts of action potentials while others avoid burst firing. Using numerical and analytical models of spike trains as Poisson processes with the addition of refractory periods and bursting, we are able to account for peaks in the power spectrum near 40 Hz without assuming the existence of an underlying oscillatory signal. A preliminary examination of the local field potential reveals that a stimulus-locked oscillation appears briefly at the beginning of the trial.
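The following sketch (not the analysis code used in the thesis) illustrates the point about refractoriness: a Poisson spike train with an absolute dead time already shows a broad spectral peak in the tens of hertz with no oscillatory drive. The rate, dead time, and duration below are arbitrary placeholders.

```python
# Hedged sketch: simulate a spike train as a Poisson process with an absolute
# refractory period, then estimate the power spectrum of the binned train.
# With a high firing rate and a few-millisecond dead time, a broad spectral
# peak appears without any underlying oscillatory signal.
import numpy as np

rng = np.random.default_rng(1)
rate, refractory, duration = 60.0, 0.008, 200.0   # Hz, s, s (illustrative values)

# inter-spike intervals: exponential waiting time plus a dead time
isi = rng.exponential(1.0 / rate, size=int(rate * duration * 2)) + refractory
spikes = np.cumsum(isi)
spikes = spikes[spikes < duration]

# bin at 1 ms and estimate the power spectrum of the mean-subtracted train
dt = 0.001
counts = np.histogram(spikes, bins=np.arange(0.0, duration + dt, dt))[0].astype(float)
counts -= counts.mean()
power = np.abs(np.fft.rfft(counts)) ** 2
freqs = np.fft.rfftfreq(len(counts), d=dt)

band = (freqs > 20) & (freqs < 80)
print("spectral peak in 20-80 Hz band: %.1f Hz" % freqs[band][np.argmax(power[band])])
```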

Relevance: 30.00%

Abstract:

The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their inferences on core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.

A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.

In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.

Relevance: 30.00%

Abstract:

This thesis describes engineering applications that come from extending seismic networks into building structures. The proposed applications will benefit from data provided by newly developed crowd-sourced seismic networks composed of low-cost accelerometers. An overview of the Community Seismic Network and its earthquake detection method is given. In the structural array components of crowd-sourced seismic networks, there may be instances in which a single seismometer is the only data source available from a building. A simple prismatic Timoshenko beam model with soil-structure interaction (SSI) is developed to approximate mode shapes of buildings using natural frequency ratios. A closed-form solution with complete vibration modes is derived. In addition, a new method to rapidly estimate the total displacement response of a building based on limited observational data, in some cases from a single seismometer, is presented. The total response of a building is modeled as the combination of the initial vibrating motion due to an upward traveling wave and the subsequent motion as the low-frequency resonant mode response. Furthermore, the expected shaking intensities in tall buildings will be significantly different from those on the ground during earthquakes. Examples are included to estimate the characteristics of shaking that can be expected in mid-rise to high-rise buildings. The development of engineering applications (e.g., human comfort prediction and automated elevator control) for earthquake early warning systems using a probabilistic framework and statistical learning techniques is also addressed.
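A minimal sketch of the two-term response model described above (not the thesis implementation): the roof motion is approximated as the base record delayed by an assumed wave travel time plus the response of a single low-frequency resonant mode. The record, travel time, natural frequency, and damping below are all assumed placeholder values.

```python
# Hedged sketch under assumed parameters: estimate roof acceleration from a
# single base record as (i) an upward travelling-wave term, i.e. the base
# motion delayed by the wave travel time through the building, plus (ii) a
# low-frequency resonant-mode term modelled as a damped single-degree-of-
# freedom oscillator driven by the base acceleration.
import numpy as np

dt = 0.01                                    # s, sample interval
t = np.arange(0.0, 60.0, dt)
base_acc = np.exp(-0.1 * t) * np.sin(2 * np.pi * 1.5 * t)   # synthetic base record

travel_time = 0.5                            # s, assumed wave travel time to roof
f_n, zeta = 0.4, 0.05                        # Hz and damping ratio, assumed
wn = 2 * np.pi * f_n

# relative displacement u of the resonant mode via central-difference integration
u = np.zeros_like(t)
for i in range(1, len(t) - 1):
    u_dot = (u[i] - u[i - 1]) / dt
    u_ddot = -base_acc[i] - 2 * zeta * wn * u_dot - wn**2 * u[i]
    u[i + 1] = 2 * u[i] - u[i - 1] + u_ddot * dt**2

# absolute acceleration of the resonant mode: u_ddot + a_g = -(2*zeta*wn*u_dot + wn^2*u)
resonant_acc = -(2 * zeta * wn * np.gradient(u, dt) + wn**2 * u)

shift = int(travel_time / dt)
travelling_acc = np.concatenate([np.zeros(shift), base_acc[:-shift]])  # delayed base motion

roof_acc_estimate = travelling_acc + resonant_acc
print("peak roof acceleration estimate: %.3f (same units as input)" % np.abs(roof_acc_estimate).max())
```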

Relevance: 30.00%

Abstract:

Protein structure prediction has remained a major challenge in structural biology for more than half a century. Accelerated and cost-efficient sequencing technologies have allowed researchers to sequence new organisms and discover new protein sequences. Novel protein structure prediction technologies will allow researchers to study the structure of proteins, to determine their roles in the underlying biological processes, and to develop novel therapeutics.

The difficulty of the problem is twofold: (a) describing the energy landscape that corresponds to the protein structure, commonly referred to as the force-field problem; and (b) sampling the energy landscape to find the lowest-energy configuration, which is hypothesized to be the native state of the structure in solution. The two problems are intertwined and have to be solved simultaneously. This thesis is composed of three major contributions. In the first chapter we describe a novel high-resolution protein structure refinement algorithm called GRID. In the second chapter we present REMCGRID, an algorithm for generation of low-energy decoy sets. In the third chapter, we present a machine learning approach to ranking decoys by incorporating coarse-grain features of protein structures.
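A hedged sketch of the decoy-ranking idea in the third chapter (the actual features, model, and target used in the thesis are not specified in the abstract, so everything below is a placeholder): a regression model is trained on coarse-grain features of decoys, and its predicted quality score is used to rank unseen decoys.

```python
# Hedged sketch (not the thesis's GRID/REMCGRID code): rank protein decoys by a
# learned score. The feature matrix and the regression target (e.g., RMSD to
# the native structure) are synthetic placeholders; any coarse-grained
# structural descriptors could be plugged in.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n_decoys, n_features = 500, 8
X = rng.normal(size=(n_decoys, n_features))       # coarse-grain features per decoy
rmsd = 2.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n_decoys)  # synthetic target

model = GradientBoostingRegressor().fit(X[:400], rmsd[:400])
scores = model.predict(X[400:])                    # predicted quality of unseen decoys
ranking = np.argsort(scores)                       # best (lowest predicted RMSD) first
print("top-5 ranked decoys:", ranking[:5])
```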

Relevance: 30.00%

Abstract:

This is a two-part thesis concerning the motion of a test particle in a bath. In part one we use an expansion of the operator PL e^(it(1-P)L) LP to shape the Zwanzig equation into a generalized Fokker-Planck equation which involves a diffusion tensor depending on the test particle's momentum and the time.
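For orientation only (the abstract does not reproduce the equation, so the form below is a standard schematic rather than the thesis's result), a generalized Fokker-Planck equation with a momentum- and time-dependent diffusion tensor D(p, t) for a test particle of mass M in a bath at temperature T can be written as

```latex
\frac{\partial f(\mathbf{p},t)}{\partial t}
  = \frac{\partial}{\partial \mathbf{p}} \cdot
    \left[ \mathbf{D}(\mathbf{p},t) \cdot
      \left( \frac{\partial f(\mathbf{p},t)}{\partial \mathbf{p}}
           + \frac{\mathbf{p}}{M k_B T}\, f(\mathbf{p},t) \right) \right],
```

where f(p, t) is the test-particle distribution function.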

In part two the resultant equation is studied in some detail for the case of test particle motion in a weakly coupled Lorentz Gas. The diffusion tensor for this system is considered. Some of its properties are calculated; it is computed explicitly for the case of a Gaussian potential of interaction.

The equation for the test particle distribution function can be put into the form of an inhomogeneous Schroedinger equation. The term corresponding to the potential energy in the Schroedinger equation is considered. Its structure is studied, and some of its simplest features are used to find the Green's function in the limiting situations of low density and long time.

Relevance: 30.00%

Abstract:

The purpose of this work is to extend experimental and theoretical understanding of horizontal Bloch line (HBL) motion in magnetic bubble materials. The present theory of HBL motion is reviewed, and then extended to include transient effects in which the internal domain wall structure changes with time. This is accomplished by numerically solving the equations of motion for the internal azimuthal angle ɸ and the wall position q as functions of z, the coordinate perpendicular to the thin-film material, and time. The effects of HBL's on domain wall motion are investigated by comparing results from wall oscillation experiments with those from the theory. In these experiments, a bias field pulse is used to make a step change in equilibrium position of either bubble or stripe domain walls, and the wall response is measured by using transient photography. During the initial response, the dynamic wall structure closely resembles the initial static structure. The wall accelerates to a relatively high velocity (≈20 m/sec), resulting in a short (≈22 nsec) section of initial rapid motion. An HBL gradually forms near one of the film surfaces as a result of local dynamic properties, and moves along the wall surface toward the film center. The presence of this structure produces low-frequency, triangular-shaped oscillations in which the experimental wall velocity is nearly constant, v_s ≈ 5-8 m/sec. If the HBL reaches the opposite surface, i.e., if the average internal angle reaches an integer multiple of π, the momentum stored in the HBL is lost, and the wall chirality is reversed. This results in abrupt transitions to overdamped motion and changes in wall chirality, which are observed as a function of bias pulse amplitude. The pulse amplitude at which the nth punch-through occurs just as the wall reaches equilibrium is given within 0.2 Oe by H_n = (2v_sH'/γ)^(1/2)(nπ)^(1/2) + H_sv, where H' is the effective field gradient from the surrounding domains, and H_sv is a small (less than 0.03 Oe) effective drag field. Observations of wall oscillation in the presence of in-plane fields parallel to the wall show that HBL formation is suppressed by fields greater than about 40 Oe (≈2πM_s), resulting in the high-frequency, sinusoidal oscillations associated with a simple internal wall structure.
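As a numerical illustration of the punch-through formula quoted above (the material parameters below are assumed placeholders, not values from the thesis), the first few critical pulse amplitudes can be evaluated directly:

```python
# Illustrative evaluation of the punch-through condition quoted above,
# H_n = (2 v_s H'/gamma)^(1/2) (n*pi)^(1/2) + H_sv, with assumed values for
# the wall saturation velocity v_s, field gradient H', gyromagnetic ratio
# gamma, and drag field H_sv (placeholders, not the thesis's measured values).
import math

v_s    = 6.0e2      # cm/s, wall saturation velocity (assumed)
H_grad = 1.0e2      # Oe/cm, effective field gradient from surrounding domains (assumed)
gamma  = 1.76e7     # rad/(s*Oe), gyromagnetic ratio
H_sv   = 0.02       # Oe, effective drag field (assumed)

for n in range(1, 4):
    H_n = math.sqrt(2 * v_s * H_grad / gamma) * math.sqrt(n * math.pi) + H_sv
    print(f"n = {n}: H_n ≈ {H_n:.2f} Oe")
```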

Relevance: 30.00%

Abstract:

A study is made of the accuracy of electronic digital computer calculations of ground displacement and response spectra from strong-motion earthquake accelerograms. This involves an investigation of methods of the preparatory reduction of accelerograms into a form useful for the digital computation and of the accuracy of subsequent digital calculations. Various checks are made for both the ground displacement and response spectra results, and it is concluded that the main errors are those involved in digitizing the original record. Differences resulting from various investigators digitizing the same experimental record may become as large as 100% of the maximum computed ground displacements. The spread of the results of ground displacement calculations is greater than that of the response spectra calculations. Standardized methods of adjustment and calculation are recommended, to minimize such errors.
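For concreteness (this is a generic sketch, not the thesis's standardized procedure), a response spectrum can be computed from a digitized accelerogram by stepping a damped single-degree-of-freedom oscillator through the record at each period; the record, damping, and period grid below are synthetic placeholders.

```python
# Hedged sketch: compute a pseudo-acceleration response spectrum from a
# digitized accelerogram by integrating a damped single-degree-of-freedom
# oscillator (central-difference scheme) for a range of periods.
import numpy as np

dt = 0.02                                             # s, digitization interval (assumed)
t = np.arange(0.0, 40.0, dt)
rng = np.random.default_rng(3)
ground_acc = rng.normal(scale=0.5, size=t.size) * np.exp(-0.05 * t)  # synthetic record

def spectral_accel(acc, dt, period, zeta=0.05):
    """Peak pseudo-acceleration of an SDOF oscillator with the given period and damping."""
    wn = 2 * np.pi / period
    u = np.zeros(acc.size)
    for i in range(1, acc.size - 1):
        u_dot = (u[i] - u[i - 1]) / dt
        u_ddot = -acc[i] - 2 * zeta * wn * u_dot - wn**2 * u[i]
        u[i + 1] = 2 * u[i] - u[i - 1] + u_ddot * dt**2
    return wn**2 * np.abs(u).max()

periods = np.linspace(0.1, 3.0, 30)
spectrum = [spectral_accel(ground_acc, dt, T) for T in periods]
print("peak spectral acceleration: %.3f at T = %.2f s"
      % (max(spectrum), periods[int(np.argmax(spectrum))]))
```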

Studies are made of the spread of response spectral values about their mean. The distribution is investigated experimentally by Monte Carlo techniques using an electric analog system with white noise excitation, and histograms are presented indicating the dependence of the distribution on the damping and period of the structure. Approximate distributions are obtained analytically by confirming and extending existing results with accurate digital computer calculations. A comparison of the experimental and analytical approaches indicates good agreement for low damping values where the approximations are valid. A family of distribution curves to be used in conjunction with existing average spectra is presented. The combination of analog and digital computations used with Monte Carlo techniques is a promising approach to the statistical problems of earthquake engineering.

Methods of analysis of very small earthquake ground motion records obtained simultaneously at different sites are discussed. The advantages of Fourier spectrum analysis for certain types of studies and methods of calculation of Fourier spectra are presented. The digitizing and analysis of several earthquake records is described and checks are made of the dependence of results on digitizing procedure, earthquake duration and integration step length. Possible dangers of a direct ratio comparison of Fourier spectra curves are pointed out and the necessity for some type of smoothing procedure before comparison is established. A standard method of analysis for the study of comparative ground motion at different sites is recommended.

Relevance: 30.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of bio-medical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scatterings of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides the underlying ground truth of the simulated images, because we prescribe it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
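As a toy illustration of why importance sampling matters in this setting (this is not the thesis's simulator, and the scattering coefficient and depth are assumed numbers), consider estimating the tiny probability that a photon's free path exceeds a given depth: biasing the sampling distribution and reweighting recovers the answer where naive sampling returns essentially nothing.

```python
# Hedged toy example of importance sampling: estimate the small probability
# that an exponentially distributed photon free path exceeds a depth threshold
# by sampling from a longer-tailed (biased) exponential and reweighting.
import numpy as np

rng = np.random.default_rng(4)
mu_s = 10.0          # 1/mm, assumed scattering coefficient
depth = 1.5          # mm, target depth (rare event: exp(-15) ~ 3e-7)
n = 100_000

# naive sampling: free paths from Exponential(mu_s)
naive = rng.exponential(1.0 / mu_s, size=n)
p_naive = np.mean(naive > depth)

# importance sampling: draw from Exponential(mu_b) with mu_b << mu_s, reweight
mu_b = 1.0 / depth
biased = rng.exponential(1.0 / mu_b, size=n)
weights = (mu_s * np.exp(-mu_s * biased)) / (mu_b * np.exp(-mu_b * biased))
p_is = np.mean((biased > depth) * weights)

print(f"naive: {p_naive:.2e}, importance-sampled: {p_is:.2e}, exact: {np.exp(-mu_s * depth):.2e}")
```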

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure on a pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and the types of layers present in the image); then the image is handed to a regression model that is trained specifically for that particular structure to predict the length of the different layers and, by doing so, reconstruct the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
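A minimal sketch of this two-stage prediction scheme (the features, class definitions, and models below are placeholders, not those used in the thesis): a classifier first determines the structure type of an image, and a regressor trained only on that type then predicts the layer dimensions.

```python
# Hedged sketch of the classify-then-regress hierarchy described above.
# Features, class labels and thickness targets are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(5)
n, d, n_classes = 1200, 32, 3
X = rng.normal(size=(n, d))                       # stand-in for per-image features
y_class = rng.integers(0, n_classes, size=n)      # structure type (e.g., number of layers)
y_thick = rng.uniform(0.1, 1.0, size=(n, 2)) + y_class[:, None] * 0.2  # layer thicknesses

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_class)
regressors = {
    c: RandomForestRegressor(n_estimators=100, random_state=0).fit(
        X[y_class == c], y_thick[y_class == c])
    for c in range(n_classes)
}

def predict(x):
    """Classify the structure, then hand off to the class-specific regressor."""
    c = int(clf.predict(x.reshape(1, -1))[0])
    return c, regressors[c].predict(x.reshape(1, -1))[0]

print(predict(X[0]))
```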

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth) could previously hardly be seen but now becomes fully resolved. Interestingly, although the OCT signals making up the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at a pixel level. Even attempting this kind of task requires fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance: 30.00%

Abstract:

The effect of intermolecular coupling in molecular energy levels (electronic and vibrational) has been investigated in neat and isotopic mixed crystals of benzene. In the isotopic mixed crystals of C6H6, C6H5D, m-C6H4D2, p-C6H4D2, sym-C6H3D3, C6D5H, and C6D6 in either a C6H6 or C6D6 host, the following phenomena have been observed and interpreted in terms of a refined Frenkel exciton theory: a) Site shifts; b) site group splittings of the degenerate ground state vibrations of C6H6, C6D6, and sym-C6H3D3; c) the orientational effect for the isotopes without a trigonal axis in both the 1B2u electronic state and the ground state vibrations; d) intrasite Fermi resonance between molecular fundamentals due to the reduced symmetry of the crystal site; and e) intermolecular or intersite Fermi resonance between nearly degenerate states of the host and guest molecules. In the neat crystal experiments on the ground state vibrations it was possible to observe many of these phenomena in conjunction with and in addition to the exciton structure.

To theoretically interpret these diverse experimental data, the concepts of interchange symmetry, the ideal mixed crystal, and site wave functions have been developed and are presented in detail. In the interpretation of the exciton data the relative signs of the intermolecular coupling constants have been emphasized, and in the limit of the ideal mixed crystal a technique is discussed for locating the exciton band center or unobserved exciton components. A differentiation between static and dynamic interactions is made in the Frenkel limit which enables the concepts of site effects and exciton coupling to be sharpened. It is thus possible to treat the crystal induced effects in such a fashion as to make their similarities and differences quite apparent.

A calculation of the ground state vibrational phenomena (site shifts and splittings, orientational effects, and exciton structure) and of the crystal lattice modes has been carried out for these systems. This calculation serves as a test of the approximations of first-order Frenkel theory and the atom-atom, pairwise interaction model for the intermolecular potentials. The general form of the potential employed was V(r) = Be^(-Cr) - A/r^6; the force constants were obtained from the potential by assuming the atoms were undergoing simple harmonic motion.
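As a one-line illustration of that last step (the equilibrium separation r_0 and the constants A, B, C are not given in the abstract), the harmonic force constant follows from the second derivative of the stated potential at equilibrium:

```latex
k \;=\; \left.\frac{d^{2}V}{dr^{2}}\right|_{r=r_{0}}
  \;=\; B C^{2} e^{-C r_{0}} \;-\; \frac{42\,A}{r_{0}^{8}} .
```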

In Part II, the location and identification of the first and second triplet states of benzene (3B1u and 3E1u) are given.

Relevance: 30.00%

Abstract:

This thesis studies mobile robotic manipulators, where one or more robot manipulator arms are integrated with a mobile robotic base. The base could be a wheeled or tracked vehicle, or it might be a multi-limbed locomotor. As robots are increasingly deployed in complex and unstructured environments, the need for mobile manipulation increases. Mobile robotic assistants have the potential to revolutionize human lives in a large variety of settings including home, industrial and outdoor environments.

Mobile Manipulation is the use or study of such mobile robots as they interact with physical objects in their environment. As compared to fixed base manipulators, mobile manipulators can take advantage of the base mechanism’s added degrees of freedom in the task planning and execution process. But their use also poses new problems in the analysis and control of base system stability, and the planning of coordinated base and arm motions. For mobile manipulators to be successfully and efficiently used, a thorough understanding of their kinematics, stability, and capabilities is required. Moreover, because mobile manipulators typically possess a large number of actuators, new and efficient methods to coordinate their large numbers of degrees of freedom are needed to make them practically deployable. This thesis develops new kinematic and stability analyses of mobile manipulation, and new algorithms to efficiently plan their motions.

I first develop detailed and novel descriptions of the kinematics governing the operation of multi-limbed legged robots working in the presence of gravity, and whose limbs may also be simultaneously used for manipulation. The fundamental stance constraint that arises from simple assumptions about friction, ground contact, and feasible motions is derived. Thereafter, a local relationship between joint motions and motions of the robot abdomen and reaching limbs is developed. Based on these relationships, one can define and analyze local kinematic qualities including limberness, wrench resistance, and local dexterity. While previous researchers have noted the similarity between multi-fingered grasping and quasi-static manipulation, this thesis makes explicit connections between these two problems.

The kinematic expressions form the basis for a local motion planning problem that determines the joint motions needed to achieve several simultaneous objectives while maintaining stance stability in the presence of gravity. This problem is translated into a convex quadratic program, entitled the balanced priority solution, whose existence and uniqueness properties are developed. This problem is related in spirit to the classical redundancy resolution and task-priority approaches. With some simple modifications, this local planning and optimization problem can be extended to handle a large variety of goals and constraints that arise in mobile manipulation. This local planning problem applies readily to other mobile bases, including wheeled and articulated bases. This thesis describes the use of the local planning techniques to generate global plans, as well as their use within a feedback loop. The work in this thesis is motivated in part by many practical tasks involving the Surrogate and RoboSimian robots at NASA/JPL, and a large number of examples involving the two robots, both real and simulated, are provided.
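A minimal sketch of a quadratic program of this general kind (not the thesis's balanced priority formulation; the Jacobians, weights, and dimensions below are placeholders): joint velocities are chosen to track a desired task velocity subject to the stance contacts remaining fixed, and the equality-constrained problem is solved through its KKT system.

```python
# Hedged sketch: an equality-constrained QP for local motion planning. Joint
# velocities qdot track a desired end-effector velocity while the stance
# contacts are constrained not to move; solved in closed form via the KKT system.
import numpy as np

rng = np.random.default_rng(6)
n_joints = 12
J_task = rng.normal(size=(6, n_joints))      # reaching-limb (task) Jacobian, assumed
J_stance = rng.normal(size=(9, n_joints))    # stacked stance-contact Jacobian, assumed
v_des = np.array([0.1, 0.0, 0.05, 0.0, 0.0, 0.0])   # desired task velocity
lam = 1e-3                                   # regularization on joint velocities

# minimize 0.5*||J_task qdot - v_des||^2 + 0.5*lam*||qdot||^2  s.t.  J_stance qdot = 0
H = J_task.T @ J_task + lam * np.eye(n_joints)
g = J_task.T @ v_des
KKT = np.block([[H, J_stance.T],
                [J_stance, np.zeros((J_stance.shape[0], J_stance.shape[0]))]])
rhs = np.concatenate([g, np.zeros(J_stance.shape[0])])
sol = np.linalg.solve(KKT, rhs)
qdot = sol[:n_joints]

print("stance constraint residual:", np.linalg.norm(J_stance @ qdot))
print("task tracking error:", np.linalg.norm(J_task @ qdot - v_des))
```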

Finally, this thesis provides an analysis of simultaneous force and motion control for multi-limbed legged robots. Starting with a classical linear stiffness relationship, an analysis of this problem for multiple point contacts is described. The local velocity planning problem is extended to include the generation of forces, as well as to maintain stability using force feedback. This thesis also provides a concise, novel definition of static stability, and proves some conditions under which it is satisfied.