26 results for Industrial Extension Institute.
in CaltechTHESIS
Abstract:
The Los Angeles Harbor at San Pedro, with its natural advantages and the major development of these now underway, will very soon be the key to the traffic routes of Southern California. The Atchison, Topeka and Santa Fe Railway Company, realizing this and not wishing to be caught asleep, has planned to build a line from El Segundo to the harbor. The developments at the harbor are not the only developments taking place in these localities, and the proposed new line is intended to serve these as well.
Abstract:
Since the beginning of human relations, some of the more ambitious and more capable members of society have by various means found and practiced methods of exploiting the efforts of their fellow men to their own personal interest. These individuals have been naturally gifted at organization and control and have been able to dominate their slower, less mentally active associates. It is a cumulative process: once it has started, each further act of subjugation becomes easier as the clever person gains more and more control over the others.
Abstract:
A locally integrable function is said to be of vanishing mean oscillation (VMO) if its mean oscillation over cubes in R^d converges to zero with the volume of the cubes. We establish necessary and sufficient conditions for a locally integrable function defined on a bounded measurable set of positive measure to be the restriction to that set of a VMO function.
We consider the analogous extension problem for BMO(ρ) functions, that is, those VMO functions whose mean oscillation over any cube Q is O(ρ(l(Q))), where l(Q) is the side length of Q and ρ is a positive, non-decreasing function with ρ(0+) = 0.
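For reference, the mean oscillation in question is the standard one; in this notation the VMO and BMO(ρ) conditions read:

\[
\mathrm{MO}(f,Q) = \frac{1}{|Q|}\int_Q |f(x) - f_Q|\,dx, \qquad f_Q = \frac{1}{|Q|}\int_Q f(x)\,dx,
\]
\[
f \in \mathrm{VMO} \iff \sup_{|Q|\le\delta} \mathrm{MO}(f,Q) \to 0 \ \text{as}\ \delta \to 0^+, \qquad
f \in \mathrm{BMO}(\rho) \iff \mathrm{MO}(f,Q) = O\bigl(\rho(l(Q))\bigr).
\]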
We apply these results to obtain sufficient conditions for a Blaschke sequence to be the zeros of an analytic BMO(ρ) function on the unit disc.
Abstract:
In Part I a class of linear boundary value problems is considered which is a simple model of boundary layer theory. The effect of zeros and singularities of the coefficients of the equations at the point where the boundary layer occurs is considered. The usual boundary layer techniques are still applicable in some cases and are used to derive uniform asymptotic expansions. In other cases it is shown that the inner and outer expansions do not overlap due to the presence of a turning point outside the boundary layer. The region near the turning point is described by a two-variable expansion. In these cases a related initial value problem is solved and then used to show formally that the boundary value problem either has a solution, except for a discrete set of eigenvalues whose asymptotic behaviour is found, or has a non-unique solution. A proof is given of the validity of the two-variable expansion; in a special case this proof also demonstrates the validity of the inner and outer expansions.
Nonlinear dispersive wave equations which are governed by variational principles are considered in Part II. It is shown that the averaged Lagrangian variational principle is in fact exact. This result is used to construct perturbation schemes which enable higher-order terms in the equations for the slowly varying quantities to be calculated. A simple scheme applicable to linear or near-linear equations is first derived, and the specific form of the first-order correction terms is derived for several examples. The stability of constant solutions to these equations is considered, and it is shown that the correction terms lead to the instability cut-off found by Benjamin. A general stability criterion is given which explicitly demonstrates the conditions under which this cut-off occurs. The corrected set of equations is a set of nonlinear dispersive equations, and their stationary solutions are investigated. A more sophisticated scheme is developed for fully nonlinear equations by using an extension of the Hamiltonian formalism recently introduced by Whitham. Finally, the averaged Lagrangian technique is extended to treat slowly varying multiply-periodic solutions, and the adiabatic invariants for a separable mechanical system are derived by this method.
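For orientation, the averaged Lagrangian formalism referred to above is Whitham's: for a slowly varying wavetrain u ≈ U(θ; a) with local wavenumber k = θ_x and frequency ω = −θ_t, the Lagrangian is averaged over one period of the phase and variations are taken of the averaged action:

\[
\bar{L}(\omega, k, a) = \frac{1}{2\pi}\int_0^{2\pi} L\,d\theta, \qquad \delta \iint \bar{L}\,dx\,dt = 0,
\]
which yields
\[
\frac{\partial \bar{L}}{\partial a} = 0, \qquad
\frac{\partial}{\partial t}\!\left(\frac{\partial \bar{L}}{\partial \omega}\right)
- \frac{\partial}{\partial x}\!\left(\frac{\partial \bar{L}}{\partial k}\right) = 0, \qquad
\frac{\partial k}{\partial t} + \frac{\partial \omega}{\partial x} = 0.
\]
The exactness and higher-order correction results described above refine this leading-order system.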
Abstract:
Transcranial magnetic stimulation (TMS) is a technique that stimulates the brain using a magnetic coil placed on the scalp. Because it can be applied to humans non-invasively while directly interfering with neural electrical activity, it is potentially a good tool for studying the direct relationship between perceptual experience and neural activity. However, it has been difficult to produce a clear perceptible phenomenon with TMS of sensory areas, especially using a single magnetic pulse. Also, the biophysical mechanisms of magnetic stimulation of single neurons have remained poorly understood.
In the psychophysical part of this thesis, perceptual phenomena induced by TMS of the human visual cortex are demonstrated as results of the interactions with visual inputs. We first introduce a method to create a hole, or a scotoma, in a flashed, large-field visual pattern using single-pulse TMS. Spatial aspects of the interactions are explored using the distortion effect of the scotoma depending on the visual pattern, which can be luminance-defined or illusory. Its similarity to the distortion of afterimages is also discussed. Temporal interactions are demonstrated in the filling-in of the scotoma with temporally adjacent visual features, as well as in the effective suppression of transient visual features. Also, paired-pulse TMS is shown to lead to different brightness modulations in transient and sustained visual stimuli.
In the biophysical part, we first develop a biophysical theory to simulate the effect of magnetic stimulation on arbitrary neuronal structure. Computer simulations are performed on cortical neuron models with realistic structure and channels, combined with the current injection that simulates magnetic stimulation. The simulation results account for general and basic characteristics of the macroscopic effects of TMS including our psychophysical findings, such as a long inhibitory effect, dependence on the background activity, and dependence on the direction of the induced electric field.
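As a rough illustration of how an induced electric field enters such a simulation, the sketch below integrates a passive cable driven by the gradient of the induced field along the fiber (the so-called activating function). The thesis models use realistic morphologies and active channels; here the geometry, pulse shape, and all parameters are hypothetical, and the sign convention for the source term varies across the literature.

```python
import numpy as np

# Passive cable under an induced field: tau dV/dt = lam^2 d2V/dx2 - V - lam^2 dEx/dx.
# The "activating function" along the fiber acts as the effective stimulus.
n, L = 200, 0.02                 # compartments, fiber length [m]
dx = L / n
lam, tau = 1e-3, 5e-3            # space constant [m], membrane time constant [s]
dt = 2e-6                        # explicit Euler step (kept well inside stability)
xs = np.linspace(-L/2, L/2, n)

def Ex(t):
    """Hypothetical induced field along the fiber: a brief damped-sine TMS pulse
    with a Gaussian spatial profile under the coil."""
    pulse = np.exp(-t / 1e-4) * np.sin(2 * np.pi * t / 3e-4) if t < 6e-4 else 0.0
    return 50.0 * pulse * np.exp(-(xs / 5e-3) ** 2)

V = np.zeros(n)                  # membrane potential deviation from rest
for step in range(5000):
    E = Ex(step * dt)
    dEdx = np.gradient(E, dx)    # activating-function source term
    d2V = np.zeros(n)
    d2V[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2    # crude sealed ends
    V += dt / tau * (lam**2 * d2V - V - lam**2 * dEdx)

print("peak depolarization (arbitrary units):", V.max())
```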
The perceptual effects and the cortical neuron model presented here provide foundations for the study of the relationship between perception and neural activity. Further insights could be obtained by extending our model to neuronal networks and from psychophysical studies based on predictions of the biophysical model.
Abstract:
This document contains three papers examining the microstructure of financial interaction in development and market settings. I first examine the industrial organization of financial exchanges, specifically limit order markets. In this section, I perform a case study of Google stock surrounding a surprising earnings announcement in the 3rd quarter of 2009, uncovering parameters that describe information flows and liquidity provision. I then explore the disbursement process for community-driven development projects. This section is game theoretic in nature, using a novel three-player ultimatum structure. I finally develop econometric tools to simulate equilibrium and identify equilibrium models in limit order markets.
In chapter two, I estimate an equilibrium model using limit order data, finding parameters that describe information and liquidity preferences for trading. As a case study, I estimate the model for Google stock surrounding an unexpected good-news earnings announcement in the 3rd quarter of 2009. I find a substantial decrease in asymmetric information prior to the earnings announcement. I also simulate counterfactual dealer markets and find empirical evidence that limit order markets perform more efficiently than do their dealer market counterparts.
In chapter three, I examine Community-Driven Development (CDD). Community-Driven Development is considered a tool empowering communities to develop their own aid projects. While evidence has been mixed as to the effectiveness of CDD in achieving disbursement to intended beneficiaries, the literature maintains that local elites generally take control of most programs. I present a three-player ultimatum game which describes a potential decentralized aid procurement process. Players successively split a dollar in aid money, and the final player, the targeted community member, decides whether or not to blow the whistle. Despite the elite capture present in my model, I find conditions under which money reaches targeted recipients. My results describe a perverse possibility in the decentralized aid process which could make detection of elite capture more difficult than previously considered. These processes may reconcile recent empirical work claiming effectiveness of the decentralized aid process with case studies which claim otherwise.
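A minimal backward-induction sketch of a game in this spirit is below; the move order, the whistle-blowing penalty P, and the offer increment are all hypothetical stand-ins for the thesis's specification.

```python
# Three-player sequential split of one dollar of aid money.
# Player 1 (central elite) keeps 1 - s2 and passes s2 to player 2 (local elite);
# player 2 keeps s2 - s3 and passes s3 to player 3 (targeted community member),
# who either accepts or blows the whistle. Whistle-blowing destroys all payoffs
# and costs each elite an extra penalty P.
P = 0.2       # hypothetical penalty on each elite if the whistle is blown
eps = 0.01    # smallest positive offer

def player3_accepts(s3):
    # Accepting s3 beats the zero payoff from whistle-blowing iff s3 > 0.
    return s3 > 0

def player2_offer(s2):
    # Smallest transfer that keeps player 3 from blowing the whistle.
    s3 = eps
    return s3 if (s2 - s3) > -P else 0.0

def player1_offer():
    # Pass just enough that player 2 prefers forwarding eps over triggering -P.
    return 2 * eps

s2 = player1_offer()
s3 = player2_offer(s2)
outcome = "accept" if player3_accepts(s3) else "whistle"
print(f"elite1={1-s2:.2f}  elite2={s2-s3:.2f}  community={s3:.2f}  -> {outcome}")
# Money reaches the targeted recipient even under near-total elite capture,
# the perverse possibility that can mask elite capture in disbursement data.
```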
In chapter four, I develop in more depth the empirical and computational means to estimate model parameters in the case study in chapter two. I describe the liquidity supplier problem and equilibrium among those suppliers. I then outline the analytical forms for computing certainty-equivalent utilities for the informed trader. Following this, I describe a recursive algorithm which facilitates computing equilibrium in supply curves. Finally, I outline implementation of the Method of Simulated Moments in this context, focusing on Indirect Inference and formulating the pseudo model.
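A schematic of the Method of Simulated Moments machinery described here, reduced to a toy model: simulate from candidate parameters with common random numbers, match simulated to observed moments, and minimize the quadratic mismatch. The model, the moment choices, and the (identity) weighting are placeholders for the limit order specification in the text.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
observed = rng.normal(1.5, 2.0, size=1000)     # stand-in for limit order data

def moments(x):
    # Placeholder auxiliary statistics; Indirect Inference would take these
    # from an estimated pseudo model instead.
    return np.array([x.mean(), x.var(), np.mean(x**3)])

m_obs = moments(observed)
base = rng.standard_normal((20, 1000))          # common random numbers, drawn once

def objective(theta):
    mu, sigma = theta
    sims = mu + abs(sigma) * base               # simulate the toy model
    g = np.mean([moments(s) for s in sims], axis=0) - m_obs
    return g @ g                                # identity weighting matrix

theta_hat = minimize(objective, x0=[0.0, 1.0], method="Nelder-Mead").x
print("MSM estimate (mu, sigma):", np.round(theta_hat, 3))
```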
Abstract:
The problem of the finite-amplitude folding of an isolated, linearly viscous layer under compression and imbedded in a medium of lower viscosity is treated theoretically by using a variational method to derive finite difference equations which are solved on a digital computer. The problem depends on a single physical parameter, the ratio of the fold wavelength, L, to the "dominant wavelength" of the infinitesimal-amplitude treatment, L_d. Therefore, the natural range of physical parameters is covered by the computation of three folds, with L/L_d = 0, 1, and 4.6, up to a maximum dip of 90°.
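Here L_d is Biot's classical dominant wavelength from the infinitesimal-amplitude theory: for a layer of thickness h and viscosity μ1 embedded in a medium of viscosity μ0 (in the usual notation),

\[
L_d = 2\pi h \left(\frac{\mu_1}{6\mu_0}\right)^{1/3},
\]

the wavelength whose infinitesimal perturbations amplify fastest under layer-parallel compression.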
Significant differences in fold shape are found among the three folds; folds with higher L/L_d have sharper crests. Folds with L/L_d = 0 and L/L_d = 1 become fan folds at high amplitude. A description of the shape in terms of a harmonic analysis of inclination as a function of arc length shows this systematic variation with L/L_d and is relatively insensitive to the initial shape of the layer. This method of shape description is proposed as a convenient way of measuring the shape of natural folds.
The infinitesimal-amplitude treatment does not predict fold-shape development satisfactorily beyond a limb-dip of 5°. A proposed extension of the treatment continues the wavelength-selection mechanism of the infinitesimal treatment up to a limb-dip of 15°; after this stage the wavelength-selection mechanism no longer operates and fold shape is mainly determined by L/L_d and limb-dip.
Strain-rates and finite strains in the medium are calculated for all stages of the L/L_d = 1 and L/L_d = 4.6 folds. At limb-dips greater than 45° the planes of maximum flattening and maximum flattening rate show the characteristic orientation and fanning of axial-plane cleavage.
Abstract:
This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher fidelity individual dynamics models or heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
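A compact numerical sketch of the interacting Gaussian process idea, under stated assumptions: each agent's future trajectory gets an independent GP posterior (conditioned on its observed past and, as a pseudo-observation, its goal), joint samples are reweighted by a multiplicative interaction potential that penalizes close approaches, and the plan is read off as a statistic of the reweighted distribution. The kernel, the 1 − α·exp(−d²/2h²) form of the potential, and all numbers below are illustrative, not the thesis's calibrated choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ell=3.0, sf=1.0):
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_posterior(t_obs, y_obs, t_pred, noise=0.05):
    """Standard GP regression: posterior mean and covariance over t_pred."""
    K = rbf(t_obs, t_obs) + noise**2 * np.eye(len(t_obs))
    Ks, Kss = rbf(t_pred, t_obs), rbf(t_pred, t_pred)
    return Ks @ np.linalg.solve(K, y_obs), Kss - Ks @ np.linalg.solve(K, Ks.T)

# Observed 2-D positions (t = 0..2) plus a goal pseudo-observation (t = 10).
t_obs, t_pred = np.array([0., 1., 2., 10.]), np.linspace(3, 9, 7)
paths = {
    "robot": np.array([[0, 0], [0.5, 0], [1, 0], [5, 0]]),
    "ped1":  np.array([[5, 2], [4.5, 1.5], [4, 1], [0, -2]]),
    "ped2":  np.array([[3, -2], [3, -1.5], [3, -1], [3, 3]]),
}

# Sample trajectories from each agent's independent GP posterior...
M, samples = 300, {}
for name, p in paths.items():
    traj = np.zeros((M, len(t_pred), 2))
    for d in range(2):
        mu, cov = gp_posterior(t_obs, p[:, d], t_pred)
        traj[:, :, d] = rng.multivariate_normal(mu, cov + 1e-8*np.eye(len(t_pred)), size=M)
    samples[name] = traj

# ...then weight joint samples by the interaction potential, which encodes
# cooperative (joint) collision avoidance between all pairs of agents.
def interaction(*trajs, h=0.8, alpha=0.95):
    w = 1.0
    for i in range(len(trajs)):
        for j in range(i + 1, len(trajs)):
            d2 = np.sum((trajs[i] - trajs[j])**2, axis=-1)
            w *= np.prod(1 - alpha * np.exp(-d2 / (2 * h**2)))
    return w

weights = np.array([interaction(samples["robot"][m], samples["ped1"][m],
                                samples["ped2"][m]) for m in range(M)])
plan = samples["robot"][np.argmax(weights)]   # navigation as a statistic (MAP sample)
print("planned robot waypoints:\n", np.round(plan, 2))
```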
Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours and, in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 1 person/m², while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m². For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.
Finally, we produce a large database of ground truth pedestrian crowd data. We make this ground truth database publicly available for further scientific study of crowd prediction models, learning from demonstration algorithms, and human robot interaction models in general.
Abstract:
Storage systems are widely used and play a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems suffer from issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.
We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain any r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
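A toy instance of this phenomenon, in the spirit of (but far smaller than) the array codes studied in Part I: a 2-row code over GF(3) with two data columns and two parities, where rebuilding a single lost data column reads only 3 of the 6 surviving symbols. The layout and coefficients here are illustrative, not the thesis's general construction.

```python
import numpy as np
from itertools import product

# 2 rows x 4 columns over GF(3): data columns A, B; row parity R; "zigzag" parity Z.
#   r_i = a_i + b_i;   z_0 = a_0 + 2*b_1;   z_1 = a_1 + b_0   (all mod 3)
def encode(a, b):
    r = [(a[0] + b[0]) % 3, (a[1] + b[1]) % 3]
    z = [(a[0] + 2 * b[1]) % 3, (a[1] + b[0]) % 3]
    return r, z

a, b = [1, 2], [2, 2]
r, z = encode(a, b)

# Rebuild an erased column A by reading only b0, r0, z1 (half of the six
# surviving symbols) instead of all of them, as a naive decode would.
a0 = (r[0] - b[0]) % 3            # row parity gives a0
a1 = (z[1] - b[0]) % 3            # zigzag parity gives a1, reusing b0
assert [a0, a1] == a
print("rebuilt A =", [a0, a1], "reading only {b0, r0, z1}")

# The code still corrects any two erasures (MDS); e.g. both data columns:
M = np.array([[1, 0, 1, 0],       # r0 = a0 + b0
              [0, 1, 0, 1],       # r1 = a1 + b1
              [1, 0, 0, 2],       # z0 = a0 + 2*b1
              [0, 1, 1, 0]])      # z1 = a1 + b0
rhs = np.array(r + z)
sols = [v for v in product(range(3), repeat=4)
        if np.array_equal(M @ np.array(v) % 3, rhs)]
assert sols == [(a[0], a[1], b[0], b[1])]
print("two-erasure decode:", sols[0])
```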
We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only some of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows will increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem: finding a universal cycle, a sequence of integers generating all possible partial permutations.
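In code, the basic rank-modulation read-out and the push-to-the-top primitive are simply the following (cell values are illustrative):

```python
import numpy as np

levels = np.array([0.8, 0.31, 0.55, 0.62])     # analog charge levels of n=4 cells

# The stored symbol is the permutation induced by sorting cells by charge,
# highest first; no discrete threshold levels are ever needed.
def read_permutation(levels):
    return tuple(np.argsort(-levels))

print(read_permutation(levels))                 # (0, 3, 2, 1)

# "Push-to-the-top": reprogramming raises one cell above the current maximum,
# the only write operation needed, and the move used by the Gray codes over
# partial permutations mentioned above.
def push_to_top(levels, i, margin=0.05):
    levels = levels.copy()
    levels[i] = levels.max() + margin
    return levels

print(read_permutation(push_to_top(levels, 2))) # (2, 0, 3, 1)
```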
Abstract:
The connections between convexity and submodularity are explored, for purposes of minimizing and learning submodular set functions.
First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first order method applied to a smoothed version of its convex extension. The smoothing algorithm is particularly novel as it allows us to treat general concave potentials without needing to construct a piecewise linear approximation as with graph-based techniques.
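For concreteness, the generic convex-extension route that this chapter smooths and accelerates looks roughly as follows. The sketch minimizes the (non-smooth) Lovász extension of a small concave-of-modular objective by plain projected subgradient steps and rounds by thresholding; it is a baseline illustration, not the accelerated smoothed method of the thesis.

```python
import numpy as np

n = 8
rng = np.random.default_rng(3)
w = rng.uniform(0.5, 2.0, n)             # nonnegative modular weights
c = rng.uniform(-0.6, 0.6, n)            # modular (linear) term

def F(S):
    """Submodular: a concave function (sqrt) of a modular function, plus a modular term."""
    idx = np.fromiter(S, dtype=int, count=len(S))
    return float(np.sqrt(w[idx].sum()) + c[idx].sum()) if len(idx) else 0.0

def lovasz_subgradient(x):
    """Sort coordinates descending; g[pi[i]] = F(S_i) - F(S_{i-1})."""
    g, prev, S = np.zeros(n), 0.0, []
    for i in np.argsort(-x):
        S.append(i)
        val = F(S)
        g[i], prev = val - prev, val
    return g

x = np.full(n, 0.5)
for t in range(1, 501):                  # projected subgradient on [0,1]^n
    x = np.clip(x - (0.5 / np.sqrt(t)) * lovasz_subgradient(x), 0.0, 1.0)

# Round: the best threshold level set of x is the minimizer candidate.
candidates = [set(np.where(x >= th)[0]) for th in set(x)] + [set()]
S_star = min(candidates, key=F)
brute = min(F({i for i in range(n) if (mask >> i) & 1}) for mask in range(2**n))
print("rounded minimizer:", sorted(S_star), "F =", F(S_star), "| brute force:", brute)
```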
Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
Lastly, we approach the problem of learning set functions from an unorthodox perspective: sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of reconstructing sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine several function classes under which uniform reconstruction is possible.
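A small numerical illustration of that observation: set functions expand in the Walsh-Hadamard characters ψ_S(T) = (−1)^{|S∩T|}, and a spectrally sparse set function can be recovered from a modest number of random evaluations. The recovery routine below is a bare-bones orthogonal matching pursuit standing in for the algorithms discussed in the text; sizes and sparsity are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
n, N = 8, 2**8                          # ground-set size, number of subsets

# Characters psi_S(T) = (-1)^{|S & T|}, with subsets encoded as bitmasks.
masks = np.arange(N)
Psi = np.array([[(-1) ** bin(S & T).count("1") for S in masks] for T in masks],
               dtype=float)

k = 5                                   # a k-sparse Fourier spectrum...
fhat = np.zeros(N)
fhat[rng.choice(N, k, replace=False)] = rng.normal(0, 1, k)
f = Psi @ fhat                          # ...defines the set function f(T)

m = 100                                 # random evaluations of f
rows = rng.choice(N, m, replace=False)
A, y = Psi[rows], f[rows]

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily grow the support, refit by least squares."""
    resid, support, coef = y.copy(), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    xhat = np.zeros(A.shape[1])
    xhat[support] = coef
    return xhat

print("max spectral error:", np.abs(omp(A, y, k) - fhat).max())
```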
Abstract:
Inspired by key experimental and analytical results regarding Shape Memory Alloys (SMAs), we propose a modelling framework to explore the interplay between martensitic phase transformations and plastic slip in polycrystalline materials, with an eye towards computational efficiency. The resulting framework uses a convexified potential for the internal energy density to capture the stored energy associated with transformation at the meso-scale, and introduces kinetic potentials to govern the evolution of transformation and plastic slip. The framework is novel in the way it treats plasticity on par with transformation.
We implement the framework in the setting of anti-plane shear, using a staggered implicit/explicit update: we first use a Fast Fourier Transform (FFT) solver based on an Augmented Lagrangian formulation to implicitly solve for the full-field displacements of a simulated polycrystal, then explicitly update the volume fraction of martensite and the plastic slip using their respective stick-slip type kinetic laws. We observe that, even in this simple setting with an idealized material comprising four martensitic variants and four slip systems, the model recovers a rich variety of SMA-type behaviors. We use this model to gain insight into the isothermal behavior of stress-stabilized martensite, looking at the effects of the relative plastic yield strength, the memory of deformation history under non-proportional loading, and several others.
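A stripped-down sketch of this staggered structure in scalar anti-plane shear. For brevity it uses the basic Moulinec-Suquet fixed-point iteration in place of the Augmented Lagrangian formulation, a single scalar internal variable in place of the four variants and four slip systems, and a caricature of a stick-slip kinetic law; the grid, moduli, and thresholds are all hypothetical.

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
mu = np.where(rng.random((N, N)) < 0.5, 1.0, 3.0)   # two-"grain" shear moduli
mu0 = mu.mean()                                      # reference medium
gam_p = np.zeros((N, N))                             # scalar slip internal variable
e0 = np.array([1.0, 0.0])[:, None, None]             # slip direction
Ebar = np.array([0.02, 0.0])                         # applied mean shear strain

xi = np.fft.fftfreq(N) * N
XI = np.stack(np.meshgrid(xi, xi, indexing="ij"))    # wavevectors, shape (2, N, N)
XI2 = (XI ** 2).sum(0)
XI2[0, 0] = 1.0                                      # avoid division by zero at xi = 0

def equilibrate(gam_p, iters=80):
    """Implicit step: basic fixed-point iteration gam <- gam - Gamma0(tau(gam))."""
    gam = np.zeros((2, N, N)) + Ebar[:, None, None]
    for _ in range(iters):
        tau = mu * (gam - gam_p * e0)                # stress with eigenstrain
        tau_h = np.fft.fft2(tau)
        proj = (XI * tau_h).sum(0) / (mu0 * XI2)     # Green operator of the reference
        gam_h = np.fft.fft2(gam) - XI * proj
        gam_h[:, 0, 0] = Ebar * N * N                # re-impose the mean strain
        gam = np.real(np.fft.ifft2(gam_h))
    return gam

for step in range(10):                               # staggered update loop
    gam = equilibrate(gam_p)                         # 1) implicit FFT equilibrium solve
    tau1 = mu * (gam[0] - gam_p)                     # resolved stress on the slip system
    drive = tau1 - 0.03 * np.sign(gam_p)             # caricature of a dissipative threshold
    gam_p = gam_p + 0.1 * np.where(np.abs(drive) > 0.05, drive, 0.0)  # 2) explicit stick-slip

print("mean stress:", float((mu * (gam[0] - gam_p)).mean()))
```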
We extend the framework to the generalized 3-D setting, for which the convexified potential is a lower bound on the actual internal energy, and show that the fully implicit discrete-time formulation of the framework is governed by a variational principle for mechanical equilibrium. We further propose an extension of the method to finite deformations via an exponential mapping. We implement the generalized framework using an existing Optimal Transport Mesh-free (OTM) solver. We then model the α–γ and α–ε transformations in pure iron, with an initial attempt in the latter to account for twinning in the parent phase. We demonstrate the scalability of the framework to large-scale computing by simulating Taylor impact experiments, observing nearly linear (ideal) speed-up through 256 MPI tasks. Finally, we present preliminary results of a simulated Split-Hopkinson Pressure Bar (SHPB) experiment using the α–ε model.
Abstract:
We study the fundamental dynamic behavior of a special class of ordered granular systems in order to design new, structured materials with unique physical properties. The dynamic properties of granular systems are dictated by the nonlinear, Hertzian potential in compression and the zero tensile strength resulting from the discrete material structure. Engineering the underlying particle arrangement of granular systems allows for unique dynamic properties not observed in natural, disordered granular media. While extensive studies on 1D granular crystals have suggested their usefulness for a variety of engineering applications, considerably less attention has been given to higher-dimensional systems. The extension of these studies to higher dimensions could enable the discovery of richer physical phenomena not possible in 1D, such as spatial redirection and anisotropic energy trapping. We present experiments, numerical simulations (based on a discrete particle model), and in some cases theoretical predictions for several engineered granular systems, studying the effects of particle arrangement on the highly nonlinear transient wave propagation in order to develop means for controlling the wave propagation pathways.

The first component of this thesis studies the stress wave propagation resulting from a localized impulsive loading for three different 2D particle lattice structures: square, centered square, and hexagonal granular crystals. By varying the lattice structure, we observe a wide range of properties for the propagating stress waves: quasi-1D solitary wave propagation, fully 2D wave propagation with tunable wave front shapes, and 2D pulsed wave propagation. Additionally, the effects of weak disorder, inevitably present in real granular systems, are investigated.

The second half of this thesis studies solitary wave propagation through 2D and 3D ordered networks of granular chains, which reduce the effective density compared to granular crystals by selectively placing wave-guiding chains to control the acoustic wave transmission. The rapid wave front amplitude decay exhibited by these granular networks makes them highly attractive for impact mitigation applications. The agreement between experiments, numerical simulations, and applicable theoretical predictions validates the wave-guiding capabilities of these engineered granular crystals and networks and opens a wide range of possibilities for the realization of increasingly complex granular material designs.
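A minimal discrete-particle sketch of the 1D building block described above: a chain of beads with Hertzian contacts and zero tensile strength, struck at one end, supports the compact solitary waves that motivate the higher-dimensional designs. Stiffness, mass, and strike velocity are arbitrary.

```python
import numpy as np

n = 100
k, m, dt = 5000.0, 1.0, 1e-4      # contact stiffness, bead mass, time step
x = np.arange(n, dtype=float)     # unit spacing, just touching: no precompression
v = np.zeros(n); v[0] = 1.0       # impulsive strike on the first bead

def accel(x):
    # Overlap of neighboring beads; max(..., 0) encodes zero tensile strength.
    delta = np.maximum(1.0 - np.diff(x), 0.0)
    f = k * delta**1.5            # Hertzian contact: F ~ delta^(3/2)
    a = np.zeros(n)
    a[:-1] -= f / m               # each compressed contact pushes its pair apart
    a[1:] += f / m
    return a

a = accel(x)
for _ in range(20000):            # velocity-Verlet integration
    x += v * dt + 0.5 * a * dt * dt
    a_new = accel(x)
    v += 0.5 * (a + a_new) * dt
    a = a_new

peak = np.abs(v).max()
width = int(np.sum(np.abs(v) > 0.1 * peak))
print(f"pulse centered near bead {np.argmax(np.abs(v))}, about {width} beads wide")
```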
Abstract:
A long-standing challenge in transition metal catalysis is selective C–C bond coupling of simple feedstocks, such as carbon monoxide, ethylene or propylene, to yield value-added products. This work describes efforts toward selective C–C bond formation using early- and late-transition metals, which may have important implications for the production of fuels and plastics, as well as many other commodity chemicals.
The industrial Fischer-Tropsch (F-T) process converts synthesis gas (syngas, a mixture of CO + H2) into a complex mixture of hydrocarbons and oxygenates. Well-defined homogeneous catalysts for F-T may provide greater product selectivity for fuel-range liquid hydrocarbons compared to traditional heterogeneous catalysts. The first part of this work involved the preparation of late-transition metal complexes for use in syngas conversion. We investigated C–C bond forming reactions via carbene coupling using bis(carbene)platinum(II) compounds, which are models for putative metal–carbene intermediates in F-T chemistry. It was found that C–C bond formation could be induced by either (1) chemical reduction of or (2) exogenous phosphine coordination to the platinum(II) starting complexes. These two mild methods afforded different products, constitutional isomers, suggesting that at least two different mechanisms are possible for C–C bond formation from carbene intermediates. These results are encouraging for the development of a multicomponent homogeneous catalysis system for the generation of higher hydrocarbons.
A second avenue of research focused on the design and synthesis of post-metallocene catalysts for olefin polymerization. The polymerization chemistry of a new class of group 4 complexes supported by asymmetric anilide(pyridine)phenolate (NNO) pincer ligands was explored. Unlike typical early transition metal polymerization catalysts, NNO-ligated catalysts produce nearly regiorandom polypropylene, with as many as 30-40 mol % of insertions being 2,1-inserted (versus 1,2-inserted), compared to <1 mol % in most metallocene systems. A survey of model Ti polymerization catalysts suggests that catalyst modification pathways that could affect regioselectivity, such as C–H activation of the anilide ring, cleavage of the amine R-group, or monomer insertion into metal–ligand bonds, are unlikely. A parallel investigation of a Ti–amido(pyridine)phenolate polymerization catalyst, which features a five- rather than a six-membered Ti–N chelate ring while maintaining a dianionic NNO motif, revealed that simply maintaining this motif is not enough to produce regioirregular polypropylene; in fact, these experiments seem to indicate that only an intact anilide(pyridine)phenolate-ligated complex will lead to regioirregular polypropylene. As yet, the underlying causes of the unique regioselectivity of anilide(pyridine)phenolate polymerization catalysts remain unknown. Further exploration of NNO-ligated polymerization catalysts could lead to the controlled synthesis of new types of polymer architectures.
Finally, we investigated the reactivity of a known Ti–phenoxy(imine) (Ti-FI) catalyst that has been shown to be very active for ethylene homotrimerization in an effort to upgrade simple feedstocks to liquid hydrocarbon fuels through co-oligomerization of heavy and light olefins. We demonstrated that the Ti-FI catalyst can homo-oligomerize 1-hexene to C12 and C18 alkenes through olefin dimerization and trimerization, respectively. Future work will include kinetic studies to determine monomer selectivity by investigating the relative rates of insertion of light olefins (e.g., ethylene) vs. higher α-olefins, as well as a more detailed mechanistic study of olefin trimerization. Our ultimate goal is to exploit this catalyst in a multi-catalyst system for conversion of simple alkenes into hydrocarbon fuels.
Abstract:
This thesis presents a new approach for the numerical solution of three-dimensional problems in elastodynamics. The new methodology, which is based on a recently introduced Fourier continuation (FC) algorithm for the solution of partial differential equations on the basis of accurate Fourier expansions of possibly non-periodic functions, enables fast, high-order solutions of the time-dependent elastic wave equation in a nearly dispersionless manner, and it requires CFL constraints that scale only linearly with the spatial discretization. A new FC operator is introduced to treat Neumann and traction boundary conditions, and a block-decomposed (sub-patch) overset strategy is presented for the implementation of general, complex geometries in distributed-memory parallel computing environments. Our treatment of the elastic wave equation, which is formulated as a complex system of variable-coefficient PDEs that includes possibly heterogeneous and spatially varying material constants, represents the first fully realized three-dimensional extension of FC-based solvers to date. Challenges for three-dimensional elastodynamics simulations, such as the treatment of corners and edges in three-dimensional geometries, the existence of variable coefficients arising from physical configurations and/or the use of curvilinear coordinate systems, and the treatment of boundary conditions, are all addressed. The broad applicability of our new FC elasticity solver is demonstrated through application to realistic problems concerning seismic wave motion on three-dimensional topographies as well as applications to non-destructive evaluation where, for the first time, we present three-dimensional simulations for comparison to experimental studies of guided-wave scattering by through-thickness holes in thin plates.
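The Fourier continuation idea admits a compact illustration: append a smooth bridge to the non-periodic data so the result is periodic, then differentiate with the FFT. The production FC(Gram) algorithm builds the bridge from boundary data via precomputed least-squares bases and reaches high order; the sketch below substitutes a two-point polynomial matching values and two derivatives at each end, so it only demonstrates the principle.

```python
import numpy as np

f   = lambda t: np.exp(t) * np.sin(3 * t)                 # non-periodic on [0, 1]
fp  = lambda t: np.exp(t) * (np.sin(3 * t) + 3 * np.cos(3 * t))
fpp = lambda t: np.exp(t) * (6 * np.cos(3 * t) - 8 * np.sin(3 * t))

n, m = 64, 32                      # data points on [0, 1], points in the extension
h = 1.0 / (n - 1)
N = n - 1 + m                      # points per period; period P = 1 + m*h
P = N * h
x = np.arange(N) * h

# Bridge polynomial: match f, f', f'' at x = 1 and (wrapping around) at x = P.
A, rhs = [], []
for pt, vals in [(1.0, (f(1.0), fp(1.0), fpp(1.0))),
                 (P,   (f(0.0), fp(0.0), fpp(0.0)))]:
    A.append([pt**j for j in range(6)])
    A.append([j * pt**(j - 1) if j >= 1 else 0.0 for j in range(6)])
    A.append([j * (j - 1) * pt**(j - 2) if j >= 2 else 0.0 for j in range(6)])
    rhs.extend(vals)
coef = np.linalg.solve(np.array(A), np.array(rhs))

g = np.where(x <= 1.0 + 1e-12, f(x), np.polyval(coef[::-1], x))

# Differentiate the smooth periodic continuation by FFT...
k = 2j * np.pi * np.fft.fftfreq(N, h)
df_fc = np.real(np.fft.ifft(k * np.fft.fft(g)))[:n]

# ...versus naively pretending the raw data were periodic (Gibbs ruins it).
k0 = 2j * np.pi * np.fft.fftfreq(n - 1, h)
df_naive = np.real(np.fft.ifft(k0 * np.fft.fft(f(x[:n - 1]))))

print("FC max derivative error:   ", np.abs(df_fc - fp(x[:n])).max())
print("naive max derivative error:", np.abs(df_naive - fp(x[:n - 1])).max())
```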
Abstract:
Crustal structure in Southern California is investigated using travel times from over 200 stations and thousands of local earthquakes. The data are divided into two sets of first arrivals representing a two-layer crust. The Pg arrivals have paths that refract at depths near 10 km and the Pn arrivals refract along the Moho discontinuity. These data are used to find lateral and azimuthal refractor velocity variations and to determine refractor topography.
In Chapter 2 the Pn raypaths are modeled using linear inverse theory. This enables statistical verification that static delays, lateral slowness variations and anisotropy are all significant parameters. However, because of the inherent size limitations of inverse theory, the full array data set could not be processed and the possible resolution was limited. The tomographic backprojection algorithm developed for Chapters 3 and 4 avoids these size problems. This algorithm allows us to process the data sequentially and to iteratively refine the solution. The variance and resolution for tomography are determined empirically using synthetic structures.
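The sequential, iteratively refined character of such a backprojection scheme is conveyed by a Kaczmarz-style row-action sketch on a toy linear travel-time system (illustrative only: the geometry and noise are synthetic, and the chapter's algorithm differs in detail):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_rays = 100, 400

# G[i, j] = path length of ray i in cell j (random sparse toy geometry);
# the unknowns are slowness perturbations in each cell.
G = rng.random((n_rays, n_cells)) * (rng.random((n_rays, n_cells)) < 0.1)
s_true = rng.normal(0, 1, n_cells)
t = G @ s_true + rng.normal(0, 0.01, n_rays)     # travel-time delays + noise

s = np.zeros(n_cells)
for sweep in range(20):                          # iteratively refine the solution
    for i in range(n_rays):                      # process the data sequentially
        gi = G[i]
        nrm = gi @ gi
        if nrm > 0:
            s += gi * (t[i] - gi @ s) / nrm      # backproject this ray's residual

print("relative model error:", np.linalg.norm(s - s_true) / np.linalg.norm(s_true))
```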
The Pg results spectacularly image the San Andreas Fault, the Garlock Fault and the San Jacinto Fault. The Mojave has slower velocities near 6.0 km/s while the Peninsular Ranges have higher velocities of over 6.5 km/s. The San Jacinto block has velocities only slightly above the Mojave velocities. It may have overthrust Mojave rocks. Surprisingly, the Transverse Ranges are not apparent at Pg depths. The batholiths in these mountains are possibly only surficial.
Pn velocities are fast in the Mojave, slow in the Southern California Peninsular Ranges, and slow north of the Garlock Fault. Pn anisotropy of 2% with a NWW fast direction exists in Southern California. A region of thin crust (22 km) centers around the Colorado River, where the crust has undergone Basin and Range-type extension. Station delays see the Ventura and Los Angeles Basins but not the Salton Trough, where high-velocity rocks underlie the sediments. The Transverse Ranges have a root in their eastern half but not in their western half. The Southern Coast Ranges also have a thickened crust but the Peninsular Ranges have no major root.