16 results for Nature inspired algorithms

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

This thesis discusses various methods for learning and optimization in adaptive systems. Overall, it emphasizes the relationship between optimization, learning, and adaptive systems; and it illustrates the influence of underlying hardware upon the construction of efficient algorithms for learning and optimization. Chapter 1 provides a summary and an overview.

Chapter 2 discusses a method for using feed-forward neural networks to filter the noise out of noise-corrupted signals. The networks use back-propagation learning, but they use it in a way that qualifies as unsupervised learning. The networks adapt based only on the raw input data; there are no external teachers providing information on correct operation during training. The chapter contains an analysis of the learning and develops a simple expression that, based only on the geometry of the network, predicts performance.
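
One classical way to realize such teacher-free filtering is a delayed-prediction arrangement: the network is trained by back-propagation to predict the current sample from earlier samples only, so the predictable (signal) component is reproduced at the output while unpredictable noise is rejected, and the only "teacher" is the raw noisy input itself. The sketch below is a minimal illustration under these assumptions (tiny two-layer network, toy sinusoid), not necessarily the thesis's exact construction.

    import numpy as np

    # Minimal "unsupervised" denoiser: predict x[i] from a delayed window of
    # x itself; the training target is the raw input, so no external teacher.
    rng = np.random.default_rng(0)
    n, order, delay, hidden, lr = 5000, 16, 4, 12, 0.01
    t = np.arange(n)
    x = np.sin(2 * np.pi * 0.01 * t) + 0.5 * rng.normal(size=n)  # noisy input

    W1 = rng.normal(0.0, 0.3, (hidden, order)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.3, hidden);          b2 = 0.0
    filtered = np.zeros(n)
    for i in range(order + delay, n):
        past = x[i - delay - order : i - delay]  # delayed window of raw input
        h = np.tanh(W1 @ past + b1)
        y = W2 @ h + b2                          # prediction = filtered output
        filtered[i] = y
        err = y - x[i]                           # "teacher" is the input itself
        # Back-propagation through the two-layer network
        W2 -= lr * err * h;  b2 -= lr * err
        gh = err * W2 * (1.0 - h ** 2)
        W1 -= lr * np.outer(gh, past);  b1 -= lr * gh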

Chapter 3 explains a simple model of the piriform cortex, an area in the brain involved in the processing of olfactory information. The model was used to explore the possible effect of acetylcholine on learning and on odor classification. According to the model, the piriform cortex can classify odors better when acetylcholine is present during learning but not present during recall. This is interesting since it suggests that learning and recall might be separate neurochemical modes (corresponding to whether or not acetylcholine is present). When acetylcholine is turned off at all times, even during learning, the model exhibits behavior somewhat similar to Alzheimer's disease, a disease associated with the degeneration of cells that distribute acetylcholine.

Chapters 4, 5, and 6 discuss algorithms appropriate for adaptive systems implemented entirely in analog hardware. The algorithms inject noise into the systems and correlate the noise with the outputs of the systems. This allows them to estimate gradients and to implement noisy versions of gradient descent, without having to calculate gradients explicitly. The methods require only noise generators, adders, multipliers, integrators, and differentiators; and the number of devices needed scales linearly with the number of adjustable parameters in the adaptive systems. With the exception of one global signal, the algorithms require only local information exchange.
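
The noise-injection idea can be sketched in discrete time as a simultaneous-perturbation-style estimator (the function name, Gaussian noise, and constants below are illustrative; the thesis's analog systems perform the correlation with continuous-time hardware rather than sampled loops):

    import numpy as np

    def noise_correlation_gradient(f, theta, sigma=1e-2, n_samples=200):
        """Estimate grad f(theta) by injecting noise and correlating it with
        the change in output: E[xi * (f(theta + xi) - f(theta))] is close to
        sigma^2 * grad f(theta) for small sigma."""
        grad_est = np.zeros_like(theta)
        f0 = f(theta)
        for _ in range(n_samples):
            xi = np.random.normal(0.0, sigma, size=theta.shape)  # injected noise
            grad_est += xi * (f(theta + xi) - f0)                # correlate
        return grad_est / (n_samples * sigma ** 2)

    # Noisy gradient descent using only forward evaluations of the system:
    theta = np.array([2.0, -1.0])
    loss = lambda th: np.sum(th ** 2)  # toy stand-in for the system's error
    for _ in range(100):
        theta -= 0.1 * noise_correlation_gradient(loss, theta)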

Relevance:

20.00%

Publisher:

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits have led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot match some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” it can perform any arbitrary task. But while such a system can simulate any digital computation, many behaviors are not “computations” in the classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine producing languages strictly more expressive than the regular languages and at most as expressive as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or implementable only in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. Building a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show via spectrofluorimetry and gel electrophoresis experiments that monomer molecules are converted into the polymer in logarithmic time. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism, showing the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4, and we experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
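
The counting argument behind logarithmic-time growth can be sketched in a few lines (an idealization assuming synchronous insertion rounds with excess monomer, not a model of the real kinetics): since every insertion opens two new insertion sites, the length added per round doubles each round, so reaching length N takes on the order of log2(N) rounds.

    # Toy simulation: each insertion consumes one site and exposes two new
    # ones, so the polymer roughly doubles every synchronous round.
    def rounds_to_length(target_length, initial_sites=1):
        length, sites, rounds = 1, initial_sites, 0
        while length < target_length:
            length += sites   # one monomer inserted at every open site
            sites *= 2        # each insertion exposes two new sites
            rounds += 1
        return rounds

    for n in (10, 100, 1000, 10 ** 6):
        print(n, rounds_to_length(n))  # grows like log2(n): 4, 7, 10, 20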

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random-walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance:

20.00%

Publisher:

Abstract:

This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher fidelity individual dynamics models or heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
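
A toy sketch of the interacting-Gaussian-process idea (1D trajectories, a single human, and an illustrative kernel and interaction potential; the thesis's formulation is richer): sample trajectories from independent GP priors, weight each joint sample by a potential that suppresses close approaches, and read the robot's plan off the reweighted distribution.

    import numpy as np

    def rbf_kernel(t, ell=2.0):
        d = t[:, None] - t[None, :]
        return np.exp(-0.5 * (d / ell) ** 2)

    def igp_plan(starts, goals, T=20, n_samples=500, safety=0.5, seed=0):
        """Sample paths from per-agent GP priors around straight-line means,
        weight joint samples by an interaction potential, and return the
        robot path (agent 0) from the highest-weight joint sample."""
        rng = np.random.default_rng(seed)
        t = np.linspace(0.0, 1.0, T)
        L = np.linalg.cholesky(rbf_kernel(t) + 1e-6 * np.eye(T))
        paths = []
        for s, g in zip(starts, goals):
            mean = s + (g - s) * t
            paths.append(mean + (L @ rng.standard_normal((T, n_samples))).T)
        robot, human = paths[0], paths[1]
        # Joint collision avoidance: penalize samples where agents get close.
        d2 = (robot - human) ** 2
        log_w = np.log(1.0 - np.exp(-d2 / (2 * safety ** 2)) + 1e-12).sum(axis=1)
        return robot[np.argmax(log_w)]  # navigation as a statistic

    plan = igp_plan(starts=[0.0, 1.0], goals=[1.0, 0.0])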

Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours, and in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 1 person/m², while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m². For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.

Finally, we produce a large database of ground truth pedestrian crowd data. We make this ground truth database publicly available for further scientific study of crowd prediction models, learning-from-demonstration algorithms, and human-robot interaction models in general.

Relevance:

20.00%

Publisher:

Abstract:

Biological machines are active devices that are comprised of cells and other biological components. These functional devices are best suited for physiological environments that support cellular function and survival. Biological machines have the potential to revolutionize the engineering of biomedical devices intended for implantation, where the human body can provide the required physiological environment. For engineering such cell-based machines, bio-inspired design can serve as a guiding platform as it provides functionally proven designs that are attainable by living cells. In the present work, a systematic approach was used to tissue engineer one such machine by exclusively using biological building blocks and by employing a bio-inspired design. Valveless impedance pumps were constructed based on the working principles of the embryonic vertebrate heart and by using cells and tissue derived from rats. The function of these tissue-engineered muscular pumps was characterized by exploring their spatiotemporal and flow behavior in order to better understand the capabilities and limitations of cells when used as the engines of biological machines.

Relevance:

20.00%

Publisher:

Abstract:

Seismic reflection methods have been extensively used to probe the Earth's crust and suggest the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full wave synthetics.

Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival time decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault zone block. On the northeast, the fault zone is bounded by a fault, containing slices of serpentinized ultramafics, that dips into the San Andreas and intersects it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.

In 1985 the Calcrust consortium designed a survey in the eastern Mojave desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer; its effects could be mitigated through special acquisition and processing techniques. Industry data showed that good-quality records could be obtained from areas having a deeper, older sedimentary cover, prompting a redefinition of the geologic objectives. Long-offset stationary arrays were designed to provide reversed, wider-angle coverage of the deep crust over parts of the survey. The preliminary field tests, constant monitoring of data quality, and parameter adjustment allowed 108 km of excellent crustal data to be obtained.

This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector. The thickness and velocity contrast of the layering can be calculated from the spectral dispersion, to discriminate between structures resulting from broad-scale or local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lame's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts. Porous zones or the juxtaposition of units of mutually distant origin are indicated. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle both include increases in σ. The Moho itself includes ~400 m of layering having a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River. This indicates that Miocene extension there either thinned just the basal zone, or that the basal zone developed regionally after the extensional event.
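
The thin-layer argument can be summarized with a standard interference sketch (an idealized single layer of thickness h and interval velocity v, with equal and opposite reflection coefficients at its top and base; the analysis in the thesis is more general):

    \[
      \Delta t(\theta) = \frac{2h\cos\theta}{v}, \qquad
      |R(f)| \propto \left|\sin\bigl(\pi f \, \Delta t(\theta)\bigr)\right|
      \quad\Rightarrow\quad
      f_{\mathrm{peak}}(\theta) = \frac{1}{2\,\Delta t(\theta)},
    \]

so the peak frequency rises as offset (hence the incidence angle θ) grows, and fitting the trend of f_peak against offset constrains the layer thickness and velocity contrast.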

Relevance:

20.00%

Publisher:

Abstract:

Cancellation of interfering frequency-modulated (FM) signals is investigated, with emphasis on applications to the cellular telephone channel as an important example of a multiple access communications system. In order to fairly evaluate analog FM multiaccess systems with respect to more complex digital multiaccess systems, a serious attempt to mitigate interference in the FM systems must be made. Information-theoretic results in the field of interference channels are shown to motivate the estimation and subtraction of undesired interfering signals. This thesis briefly examines the relative optimality of the current FM techniques in known interference channels, before pursuing the estimation and subtraction of interfering FM signals.

The capture-effect phenomenon of FM reception is exploited to produce simple interference-cancelling receivers with a cross-coupled topology. The use of phase-locked loop receivers cross-coupled with amplitude-tracking loops to estimate the FM signals is explored. The theory and function of these cross-coupled phase-locked loop (CCPLL) interference cancellers are examined. New interference cancellers inspired by optimal estimation and the CCPLL topology are developed, resulting in simpler receivers than those in prior art. Signal acquisition and capture effects in these complex dynamical systems are explained using the relationship of the dynamical systems to adaptive noise cancellers.
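
As background, a single discrete-time phase-locked loop FM demodulator, the building block that the CCPLL topology cross-couples, can be sketched as follows (the complex-baseband framing and loop gains are illustrative assumptions, not values from the thesis):

    import numpy as np

    def pll_fm_demod(x, fs, kp=0.2, ki=0.05):
        """Minimal type-II PLL at complex baseband: a phase detector, a PI
        loop filter, and an NCO. The loop's frequency-control signal is the
        recovered FM message."""
        phase, freq = 0.0, 0.0
        demod = np.empty(len(x))
        for n, sample in enumerate(x):
            err = np.angle(sample * np.exp(-1j * phase))  # phase detector
            freq += ki * err                              # loop integrator
            phase += freq + kp * err                      # NCO update
            demod[n] = freq * fs / (2 * np.pi)            # inst. frequency (Hz)
        return demod

    # Example: demodulate a 5 Hz tone FM-modulated with 200 Hz deviation.
    fs = 48_000
    t = np.arange(fs) / fs
    msg = np.sin(2 * np.pi * 5 * t)
    phi = 2 * np.pi * np.cumsum(200 * msg) / fs
    recovered = pll_fm_demod(np.exp(1j * phi), fs)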

FM interference-cancelling receivers are considered for increasing the frequency reuse in a cellular telephone system. Interference mitigation in the cellular environment is seen to require tracking of the desired signal during time intervals when it is not the strongest signal present. Use of interference cancelling in conjunction with dynamic frequency-allocation algorithms is viewed as a way of improving spectrum efficiency. Performance of interference cancellers indicates possibilities for greatly increased frequency reuse. The economics of receiver improvements in the cellular system is considered, including both the mobile subscriber equipment and the provider's tower (base station) equipment.

The thesis is divided into four major parts and a summary: the introduction, motivations for the use of interference cancellation, examination of the CCPLL interference canceller, and applications to the cellular channel. The parts are dependent on each other and are meant to be read as a whole.

Relevance:

20.00%

Publisher:

Abstract:

A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.

The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure; this is an optimization problem.
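
A minimal sketch of soft preference functions and one possible combination rule (the function shapes, limits, and the weighted geometric mean below are assumptions for illustration; the thesis defines its own criteria and combination rule):

    import numpy as np

    # Each preference function maps a performance parameter to a degree of
    # satisfaction in [0, 1]; the limits here are hypothetical.
    def pref_cost(cost, best=1e5, worst=5e5):
        return float(np.clip((worst - cost) / (worst - best), 0.0, 1.0))

    def pref_drift(drift_ratio, limit=0.02):
        return float(np.clip(1.0 - drift_ratio / limit, 0.0, 1.0))

    def overall_measure(prefs, weights):
        """One candidate combination rule: a weighted geometric mean, so a
        design that badly fails any single criterion scores near zero."""
        prefs, weights = np.asarray(prefs), np.asarray(weights)
        return float(np.prod(prefs ** (weights / weights.sum())))

    score = overall_measure([pref_cost(2.3e5), pref_drift(0.011)],
                            weights=[1.0, 2.0])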

Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploratory power needed to search high-dimensional design spaces for optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
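
For concreteness, a generic real-coded genetic algorithm is sketched below (tournament selection, uniform crossover, Gaussian mutation); it illustrates the family of methods only and is not the thesis's hGA or vGA.

    import numpy as np

    def genetic_optimize(fitness, bounds, pop=60, gens=120, pm=0.1, seed=1):
        """Generic real-coded GA over box-bounded design variables."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        X = rng.uniform(lo, hi, size=(pop, len(lo)))
        for _ in range(gens):
            f = np.array([fitness(x) for x in X])
            a = rng.integers(pop, size=pop)
            b = rng.integers(pop, size=pop)
            parents = X[np.where(f[a] > f[b], a, b)]   # binary tournaments
            mates = np.roll(parents, 1, axis=0)
            mask = rng.random(X.shape) < 0.5           # uniform crossover
            children = np.where(mask, parents, mates)
            mut = rng.random(X.shape) < pm             # Gaussian mutation
            X = np.clip(children + mut * rng.normal(0.0, 0.1, X.shape), lo, hi)
        return X[np.argmax([fitness(x) for x in X])]

    # Toy stand-in for a sizing problem: the optimum sits near (2, 1).
    best = genetic_optimize(lambda x: -(x[0] - 2) ** 2 - (x[1] - 1) ** 2,
                            bounds=[(0, 5), (0, 5)])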

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.

Relevance:

20.00%

Publisher:

Abstract:

Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity with humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially in the form of temporal logic formulas. This approach allows for the use of formal verification techniques for controller synthesis that can give guarantees for safety and performance. Robots operating in unstructured environments also face limited sensing capability, and correctly inferring a robot's progress toward a high-level goal can be challenging.

This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to admit a finite abstraction as a Partially Observable Markov Decision Process (POMDP), a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem which has received only limited attention.

This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). The use of FSCs to control finite POMDPs allows the closed system to be analyzed as a finite global Markov chain. The thesis explicitly shows how transient and steady-state behavior of the global Markov chains can be related to two different criteria for the satisfaction of LTL formulas. First, maximization of the probability of LTL satisfaction is related to an optimization problem over a parametrization of the FSC. Analytic computation of the gradients is derived, which allows the use of first-order optimization techniques.
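
The closed-loop construction can be made concrete in a few lines (the array names and layout are assumptions for illustration): running an FSC with action distribution psi and node update omega on a POMDP with transition kernel T and observation kernel O induces a finite Markov chain on joint states (s, g).

    import numpy as np

    def global_chain(T, O, psi, omega):
        """Global Markov chain of a POMDP controlled by an FSC.
        T[a, s, s2]: state transitions; O[s2, o]: observation kernel;
        psi[g, a]: FSC action choice; omega[g, o, g2]: FSC node update.
        Returns P over joint states indexed by s * nG + g."""
        nA, nS, _ = T.shape
        nG, nO = omega.shape[0], omega.shape[1]
        P = np.zeros((nS * nG, nS * nG))
        for s in range(nS):
            for g in range(nG):
                for a in range(nA):
                    for s2 in range(nS):
                        for o in range(nO):
                            for g2 in range(nG):
                                P[s * nG + g, s2 * nG + g2] += (
                                    psi[g, a] * T[a, s, s2]
                                    * O[s2, o] * omega[g, o, g2])
        return P  # rows sum to 1; transient/steady-state analysis applies here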

The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective by novel use of a fundamental equation for Markov chains: the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.
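
For reference, the Poisson equation for an ergodic finite Markov chain with transition matrix P, reward vector r, stationary distribution π, and average reward η = πᵀr reads (standard form; the thesis works with a constrained, discounted variant):

    \[
      (I - P)\,h = r - \eta\,\mathbf{1}, \qquad \pi^{\top} h = 0,
    \]

where h is the bias (relative value) vector familiar from average-reward dynamic programming.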

The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation Software (ARM-S) challenge.

Relevance:

20.00%

Publisher:

Abstract:

We present a theoretical study of electronic states in topological insulators with impurities. Chiral edge states in 2d topological insulators and helical surface states in 3d topological insulators show robust transport against nonmagnetic impurities. This nontrivial character has inspired physicists to propose applications such as spintronic devices [1], thermoelectric materials [2], photovoltaics [3], and quantum computation [4]. Not only has it provided new opportunities from a practical point of view, but its theoretical study has deepened the understanding of the topological nature of condensed matter systems.

However, experimental realizations of topological insulators have been challenging. For example, a 2d topological insulator fabricated in a HgTe quantum well structure by Konig et al. [5] shows a longitudinal conductance that is not well quantized and varies with temperature. 3d topological insulators such as Bi2Se3 and Bi2Te3 exhibit not only a signature of surface states but also bulk conduction [6]. This series of experiments motivated us to study the effects of impurities and a coexisting bulk Fermi surface in topological insulators.

We first address a single-impurity problem in a topological insulator using a semiclassical approach. Then we study the conductance behavior of a disordered topological-metal strip in which bulk modes are associated with the transport of edge modes via impurity scattering. We verify that conduction through a chiral edge channel retains its topological signature, and we find that the transmission can be succinctly expressed in closed form as a ratio of determinants of the bulk Green's function and impurity potentials. We further study the transport of 1d systems that can be decomposed in terms of chiral modes. Lastly, the effect of surface impurities on the local density of surface states, layer by layer into the bulk, is studied between the weak and strong disorder-strength limits.

Relevance:

20.00%

Publisher:

Abstract:

Protein structure prediction has remained a major challenge in structural biology for more than half a century. Accelerated and cost-efficient sequencing technologies have allowed researchers to sequence new organisms and discover new protein sequences. Novel protein structure prediction technologies will allow researchers to study the structure of proteins, determine their roles in the underlying biological processes, and develop novel therapeutics.

The difficulty of the problem is twofold: (a) describing the energy landscape that corresponds to the protein structure, commonly referred to as the force field problem; and (b) sampling the energy landscape to find the lowest-energy configuration, which is hypothesized to be the native state of the structure in solution. The two problems are intertwined and have to be solved simultaneously. This thesis is composed of three major contributions. In the first chapter we describe a novel high-resolution protein structure refinement algorithm called GRID. In the second chapter we present REMCGRID, an algorithm for the generation of low-energy decoy sets. In the third chapter, we present a machine learning approach to ranking decoys by incorporating coarse-grain features of protein structures.

Relevance:

20.00%

Publisher:

Abstract:

Morphogenesis is a phenomenon of intricate balance and dynamic interplay between processes occurring at a wide range of scales (spatial, temporal, and energetic). During development, a variety of physical mechanisms are employed by tissues to simultaneously pattern, move, and differentiate based on information exchange between constituent cells, perhaps more than at any other time during an organism's life. To fully understand such events, a combined theoretical and experimental framework is required to assist in deciphering the correlations at both structural and functional levels, at scales spanning the intracellular and tissue levels as well as organs and organ systems. Microscopy, especially diffraction-limited light microscopy, has emerged as a central tool to capture the spatio-temporal context of life processes. Imaging has the unique advantage of watching biological events as they unfold over time at single-cell resolution in the intact animal.

In this work I present a range of problems in morphogenesis, each unique in its requirements for novel quantitative imaging, both in terms of technique and analysis. Understanding the molecular basis for a developmental process involves investigating how genes and their products (mRNA and proteins) function in the context of a cell. Structural information holds the key to insights into mechanisms, and imaging fixed specimens is the first step toward deciphering gene function. The work presented in this thesis starts with the demonstration that the fluorescent signal from the challenging environment of whole-mount imaging, obtained by in situ hybridization chain reaction (HCR), scales linearly with the number of copies of target mRNA, providing quantitative sub-cellular mapping of mRNA expression within intact vertebrate embryos.

The work then progresses to address aspects of imaging live embryonic development in a number of species. While processes such as avian cartilage growth require high spatial resolution and lower time resolution, dynamic events during zebrafish somitogenesis require higher time resolution to capture protein localization as the somites mature. The requirements on imaging are even more stringent in the case of the embryonic zebrafish heart, which beats with a frequency of ~2-2.5 Hz and thereby requires very fast imaging techniques based on a two-photon light-sheet microscope to capture its dynamics. In each of these cases, ranging from the level of molecules to organs, an imaging framework is developed, in terms of both technique and analysis, to allow quantitative assessment of the process in vivo. Overall, the work presented in this thesis combines new quantitative tools with novel microscopy for the precise understanding of processes in embryonic development.

Relevance:

20.00%

Publisher:

Abstract:

Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of more and more renewable energy resources and the emergence of electric vehicles. In particular, large-scale adoption of residential and commercial solar photovoltaic (PV) plants is completely changing the traditional slowly-varying, unidirectional power flow nature of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring with them millions of new DC-AC inverter controllers each year. These fast power electronic devices can provide an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for the power distribution networks of the future with high penetration of distributed inverter-based renewable generators.

Proposed solutions to power flow control problems in the literature range from fully centralized to fully local ones. In this thesis, we focus on the two ends of this spectrum. In the first half of this thesis (chapters 2 and 3), we seek optimal solutions to voltage control problems given a centralized architecture with complete information. These solutions are particularly important for better understanding the overall system behavior and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach to solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven to be both reliable and computationally efficient in dealing with the non-convexity of power flow equations in radial and weakly-meshed distribution networks. We then apply the results to the fast-timescale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison’s service territory.
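
The core branch flow (DistFlow) relations for a line from bus i to bus j can be sketched as follows (standard form of the model, with r_ij + i x_ij the line impedance, v the squared voltage magnitude, ℓ the squared current magnitude, and P, Q the sending-end real and reactive flows):

    \[
      v_j = v_i - 2\,(r_{ij} P_{ij} + x_{ij} Q_{ij})
            + (r_{ij}^2 + x_{ij}^2)\,\ell_{ij},
      \qquad
      \ell_{ij} = \frac{P_{ij}^2 + Q_{ij}^2}{v_i}.
    \]

The conic step of the relaxation replaces the nonconvex equality with the second-order cone constraint ℓ_ij ≥ (P_ij² + Q_ij²)/v_i, which is known to be exact on radial networks under mild conditions.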

The second half (chapters 4 and 5), however, is dedicated to local control approaches, as they are the only options available for immediate implementation on today’s distribution networks, which lack sufficient monitoring and communication infrastructure. In particular, we follow a reverse- and forward-engineering approach to study the recently proposed piecewise linear volt/var control curves. It is the aim of this dissertation to tackle some key problems in these two areas and to contribute a rigorous theoretical basis for future work.
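
An illustrative piecewise linear volt/var curve of the kind these chapters study (the breakpoints and var limit below are assumed per-unit values, not the thesis's):

    import numpy as np

    def volt_var(v, v1=0.95, v2=0.98, v3=1.02, v4=1.05, q_max=0.44):
        """Piecewise-linear volt/var droop: full var injection below v1, a
        ramp to zero on [v1, v2], a deadband on [v2, v3], and a symmetric
        absorbing ramp on [v3, v4]; np.interp clamps beyond the endpoints."""
        return q_max * np.interp(v, [v1, v2, v3, v4], [1.0, 0.0, 0.0, -1.0])

    # Each inverter measures its local voltage and sets reactive power locally:
    q_setpoint = volt_var(1.03)  # absorbs vars when the local voltage runs high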

Relevance:

20.00%

Publisher:

Abstract:

FRAME3D, a program for the nonlinear seismic analysis of steel structures, has previously been used to study the collapse mechanisms of steel buildings up to 20 stories tall. The present thesis is inspired by the need to conduct similar analysis for much taller structures. It improves FRAME3D in two primary ways.

First, FRAME3D is revised to address specific nonlinear situations involving large displacement/rotation increments, the backup-subdivide algorithm, element failure, and extremely narrow joint hysteresis. The revisions result in superior convergence capabilities when modeling earthquake-induced collapse. The material model of a steel fiber is also modified to allow for post-rupture compressive strength.

Second, a parallel FRAME3D (PFRAME3D) is developed. The serial code is optimized and then parallelized. A distributed-memory divide-and-conquer approach is used for both the global direct solver and element-state updates. The result is an implicit finite-element hybrid-parallel program that takes advantage of the narrow-band nature of very tall buildings and uses nearest-neighbor-only communication patterns.

Using three structures of varied size, PFRAME3D is shown to compute reproducible results that agree with those of the optimized 1-core version (displacement time-history response root-mean-squared errors are ~10^-5 m) with much less wall time (e.g., a dynamic time-history collapse simulation of a 60-story building is computed in 5.69 hrs with 128 cores, a speedup of 14.7 vs. the optimized 1-core version). The maximum speedups attained are shown to increase with building height (as the total number of cores used also increases), and the parallel framework can be expected to be suitable for buildings taller than the ones presented here.

PFRAME3D is used to analyze a hypothetical 60-story steel moment-frame tube building (fundamental period of 6.16 sec) designed according to the 1994 Uniform Building Code. Dynamic pushover and time-history analyses are conducted. Multi-story shear-band collapse mechanisms are observed around mid-height of the building. The use of closely-spaced columns and deep beams is found to contribute to the building's “somewhat brittle” behavior (ductility ratio ~2.0). Overall building strength is observed to be sensitive to whether a model is fracture-capable.

Relevance:

20.00%

Publisher:

Abstract:

We have sought to determine the nature of the free-radical precursors to ring-opened hydrocarbon 5 and ring-closed hydrocarbon 6. Reasonable alternative formulations involve the postulation of hydrogen abstraction (a) by a pair of rapidly equilibrating classical radicals (the ring-opened allylcarbinyl-type radical 3 and the ring-closed cyclopropylcarbinyl-type 4), or (b) by a nonclassical radical such as homoallylic radical 7.

[Figure not reproduced.]

Entry to the radical system is gained via degassed thermal decomposition of peresters having the ring-opened and the ring-closed structures. The ratio of 6:5 is essentially independent of the hydrogen-donor concentration for decomposition of the former at 125° in the presence of triethyltin hydride. A deuterium labeling study showed that the α and β methylene groups in 3 (or the equivalent) are rapidly interchanged under these conditions.

Existence of two (or more) product-forming intermediates is indicated (a) by dependence of the ratio 6:5 on the tin hydride concentration for decomposition of the ring-closed perester at 10 and 35°, and (b) by formation of cage products having largely or wholly the structure (ring-opened or ring-closed) of the starting perester.

Relative rates of hydrogen abstraction by 3 could be inferred by comparison of ratios of rate constants for hydrogen abstraction and ortho-ring cyclization:

[Figure not reproduced.]

At 100° values of ka/kr are 0.14 for hydrogen abstraction from 1,4-cyclohexadiene and 7 for abstraction from triethyltin hydride. The ratio 6:5 at the same temperature is ~0.0035 for hydrogen abstraction from 1,4-cyclohexadiene, ~0.078 for abstraction from the tin hydride, and ≥ 5 for abstraction from cyclohexadienyl radicals. These data indicate that abstraction of hydrogen from triethyltin hydride is more rapid than from 1,4-cyclohexadiene by a factor of ~1000 for 4, but only ~50 for 3.

Measurements of product ratios at several temperatures allowed the construction of an approximate energy-level scheme. A major inference is that isomerization of 3 to 4 is exothermic by 8 ± 3 kcal/mole, in good agreement with expectations based on bond dissociation energies. Absolute rate-constant estimates are also given.

The results are nicely compatible with a classical-radical mechanism, but attempted interpretation of the product ratios in terms of a nonclassical radical precursor, even for ratios formed from equilibrated radical intermediates, leads, it is argued, to serious difficulties.

The roles played by hydrogen abstraction from 1,4-cyclohexadiene and from the derived cyclohexadienyl radicals were probed by fitting observed ratios of 6:5 and 5:10 in a least-squares sense to expressions derived for a complex mechanistic scheme. Some 30 to 40 measurements on each product ratio, obtained under a variety of experimental conditions, could be fit with an average deviation of ~6%. Significant systematic deviations were found, but these could largely be redressed by assuming (a) that the rate constant for reaction of 4 with cyclohexadienyl radical is inversely proportional to the viscosity of the medium (i.e., is diffusion-controlled), and (b) that ka/kr for hydrogen abstraction from 1,4-cyclohexadiene depends slightly on the composition of the medium. An average deviation of 4.4% was thereby attained.

Degassed thermal decomposition of the ring-opened perester in the presence of triethyltin hydride occurs primarily by attack of triethyltin radicals on the perester, presumably at the O-O bond, even at 0.01 M tin hydride at 100 and 125°. Tin ester and tin ether are apparently formed in closely similar amounts under these conditions, but the tin ester predominates at room temperature in the companion air-induced decomposition, indicating that attack on the perester to give the tin ether requires an activation energy approximately 5 kcal/mole in excess of that for formation of the tin ester.

Relevance:

20.00%

Publisher:

Abstract:

I. The influence of N,N,N’,N’-tetramethylethylenediamine on the Schlenk equilibrium

The equilibrium between ethylmagnesium bromide, diethylmagnesium, and magnesium bromide has been studied by nuclear magnetic resonance spectroscopy. The interconversion of the species is very fast on the nmr time scale, and only an averaged spectrum is observed for the ethyl species. When N,N,N’,N’-tetramethylethylenediamine is added to solutions of these reagents in tetrahydrofuran, the rate of interconversion is reduced. At temperatures near -50°, two ethylmagnesium species have been observed. These are attributed to the different ethyl groups in ethylmagnesium bromide and diethylmagnesium, two of the species involved in the Schlenk equilibrium of Grignard reagents.
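
For reference, the Schlenk equilibrium for this system is

    \[
      2\,\mathrm{EtMgBr} \;\rightleftharpoons\; \mathrm{Et_2Mg} + \mathrm{MgBr_2}.
    \]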

II. The nature of di-Grignard reagents

Di-Grignard reagents have been examined by nuclear magnetic resonance spectroscopy in an attempt to prove that dialkylmagnesium reagents are in equilibrium with alkylmagnesium halides. The di-Grignard reagents of compounds such as 1,4-dibromobutane have been investigated. The dialkylmagnesium form of this di-Grignard reagent can exist as an intramolecular cyclic species, tetramethylenemagnesium. This cyclic form would give an nmr spectrum different from that of the classical alkylmagnesium halide di-Grignard reagent. In dimethyl ether-tetrahydrofuran solutions of di-Grignard reagents containing N,N,N’,N’-tetramethylethylenediamine, evidence has been found for the existence of an intramolecular dialkylmagnesium species. This species rapidly equilibrates with other forms, but at low temperatures the rates of interconversion are reduced. Two species can be seen in the nmr spectrum at -50°. One is the cyclic species; the other is an open form.

Inversion of configuration at the carbon of the carbon-magnesium bond in di-Grignard reagents has also been studied. This process is much faster than in the corresponding monofunctional Grignard reagents.