11 results for sets of words
in CaltechTHESIS
Abstract:
Various families of exact solutions to the Einstein and Einstein-Maxwell field equations of General Relativity are treated for situations of sufficient symmetry that only two independent variables arise. The mathematical problem then reduces to consideration of sets of two coupled nonlinear differential equations.
The physical situations in which such equations arise include: a) the external gravitational field of an axisymmetric, uncharged steadily rotating body, b) cylindrical gravitational waves with two degrees of freedom, c) colliding plane gravitational waves, d) the external gravitational and electromagnetic fields of a static, charged axisymmetric body, and e) colliding plane electromagnetic and gravitational waves. Through the introduction of suitable potentials and coordinate transformations, a formalism is presented which treats all these problems simultaneously. These transformations and potentials may be used to generate new solutions to the Einstein-Maxwell equations from solutions to the vacuum Einstein equations, and vice-versa.
The calculus of differential forms is used as a tool for generation of similarity solutions and generalized similarity solutions. It is further used to find the invariance group of the equations; this in turn leads to various finite transformations that give new, physically distinct solutions from old. Some of the above results are then generalized to the case of three independent variables.
Abstract:
Methodology for the preparation of allenes from propargylic hydrazine precursors under mild conditions is described. Oxidation of the propargylic hydrazines, which can be readily prepared from propargylic alcohols, with either of two azo oxidants, diethyl azodicarboxylate (DEAD) or 4-methyl-1,2,4-triazoline-3,5-dione (MTAD), effects conversion to the allenes, presumably via sigmatropic rearrangement of a monoalkyl diazene intermediate. This rearrangement is demonstrated to proceed with essentially complete stereospecificity. The application of this methodology to the preparation of other allenes, including two that are notable for their reactivity and thermal instability, is also described.
The structural and mechanistic study of a monoalkyl diazene intermediate in the oxidative transformation of propargylic hydrazines to allenes is described. The use of long-range heteronuclear NMR coupling constants for assigning monoalkyl diazene stereochemistry (E vs Z) is also discussed. Evidence is presented that all known monoalkyl diazenes are the E isomers, and the erroneous assignment of stereochemistry in the previous report of the preparation of (Z)-phenyldiazene is discussed.
The synthesis, characterization, and reactivity of 1,6-didehydro[10]annulene are described. This molecule has been recognized as an interesting synthetic target for over 40 years and represents the intersection of two sets of extensively studied molecules: nonbenzenoid aromatic compounds and molecules containing sterically compressed π-systems. The formation of 1,5-dehydronaphthalene from 1,6-didehydro[10]annulene is believed to be the prototype for cycloaromatizations that produce 1,4-dehydroaromatic species with the radical centers disposed anti about the newly formed single bond. The aromaticity of this annulene and the facility of its cycloaromatization are also analyzed.
Abstract:
In the past, many different methodologies have been devised to support software development, and different sets of methodologies have been developed to support the analysis of software artefacts. We have identified this mismatch as one of the causes of the poor reliability of embedded systems software. The issue with software development styles is that they are "analysis-agnostic." They do not try to structure the code in a way that lends itself to analysis. The analysis is usually applied post mortem, after the software has been developed, and it requires a large amount of effort. The issue with software analysis methodologies is that they do not exploit available information about the system being analyzed.
In this thesis we address the above issues by developing a new methodology, called "analysis-aware" design, that links software development styles with the capabilities of analysis tools. This methodology forms the basis of a framework for interactive software development. The framework consists of an executable specification language and a set of analysis tools based on static analysis, testing, and model checking. The language enforces an analysis-friendly code structure and offers primitives that allow users to implement their own testers and model checkers directly in the language. We introduce a new approach to static analysis that takes advantage of the capabilities of a rule-based engine. We have applied the analysis-aware methodology to the development of a smart home application.
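As a purely illustrative sketch of the rule-based flavor of static analysis referred to above (the rules, names, and test input below are invented for illustration and use only Python's standard ast module; this is not the thesis's framework or specification language), a checker can be written as a list of rules applied uniformly to every node of a program's syntax tree:

    import ast

    # Toy rule-based static analysis: each rule is a predicate over one AST node
    # plus a diagnostic message. Illustrative only.
    RULES = [
        ("bare-except",
         lambda n: isinstance(n, ast.ExceptHandler) and n.type is None,
         "bare 'except:' swallows all errors"),
        ("mutable-default",
         lambda n: isinstance(n, ast.FunctionDef)
                   and any(isinstance(d, (ast.List, ast.Dict, ast.Set)) for d in n.args.defaults),
         "mutable default argument"),
    ]

    def analyze(source: str):
        findings = []
        for node in ast.walk(ast.parse(source)):
            for name, predicate, message in RULES:
                if predicate(node):
                    findings.append((name, getattr(node, "lineno", 0), message))
        return findings

    print(analyze("def f(x=[]):\n    try:\n        return x\n    except:\n        pass\n"))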
Abstract:
The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their inferences on core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.
A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.
In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.
Abstract:
This thesis has two major parts. The first part of the thesis will describe a high energy cosmic ray detector -- the High Energy Isotope Spectrometer Telescope (HEIST). HEIST is a large area (0.25 m² sr) balloon-borne isotope spectrometer designed to make high-resolution measurements of isotopes in the element range from neon to nickel (10 ≤ Z ≤ 28) at energies of about 2 GeV/nucleon. The instrument consists of a stack of 12 NaI(Tl) scintillators, two Cerenkov counters, and two plastic scintillators. Each of the 2-cm thick NaI disks is viewed by six 1.5-inch photomultipliers whose combined outputs measure the energy deposition in that layer. In addition, the six outputs from each disk are compared to determine the position at which incident nuclei traverse each layer to an accuracy of ~2 mm. The Cerenkov counters, which measure particle velocity, are each viewed by twelve 5-inch photomultipliers using light integration boxes.
HEIST-2 determines the mass of individual nuclei by measuring both the change in the Lorentz factor (Δγ) that results from traversing the NaI stack, and the energy loss (ΔE) in the stack. Since the total energy of an isotope is given by E = γM, the mass M can be determined by M = ΔE/Δγ. The instrument is designed to achieve a typical mass resolution of 0.2 amu.
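As an illustrative numerical check of this relation (the numbers below are made up for illustration and are not HEIST data), one can construct ΔE from an assumed mass and recover that mass as ΔE/Δγ:

    # E = γM  =>  ΔE = M·Δγ, so M = ΔE/Δγ (energies in GeV, c = 1). Illustrative only.
    AMU_GEV = 0.9315                      # 1 amu expressed in GeV/c^2

    def gamma(kinetic_energy_per_nucleon_gev):
        # Lorentz factor, taking the mass per nucleon as roughly 1 amu
        return 1.0 + kinetic_energy_per_nucleon_gev / AMU_GEV

    m_true    = 55.935 * AMU_GEV          # e.g. a 56Fe nucleus, in GeV/c^2
    gamma_in  = gamma(2.0)                # ~2 GeV/nucleon entering the stack
    gamma_out = gamma(1.6)                # slower after depositing energy in the NaI
    delta_E     = m_true * (gamma_in - gamma_out)   # energy deposited in the stack
    delta_gamma = gamma_in - gamma_out              # from the two velocity measurements
    print(delta_E / delta_gamma / AMU_GEV)          # recovers ≈ 55.9 amu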
The second part of this thesis presents an experimental measurement of the isotopic composition of the fragments from the breakup of high energy 40Ar and 56Fe nuclei. Cosmic ray composition studies rely heavily on semi-empirical estimates of the cross-sections for the nuclear fragmentation reactions which alter the composition during propagation through the interstellar medium. Experimentally measured yields of isotopes from the fragmentation of 40Ar and 56Fe are compared with calculated yields based on semi-empirical cross-section formulae. There are two sets of measurements. The first set of measurements, made at the Lawrence Berkeley Laboratory Bevalac using a beam of 287 MeV/nucleon 40Ar incident on a CH2 target, achieves excellent mass resolution (σm ≤ 0.2 amu) for isotopes of Mg through K using a Si(Li) detector telescope. The second set of measurements, also made at the Lawrence Berkeley Laboratory Bevalac, using a beam of 583 MeV/nucleon 56Fe incident on a CH2 target, resolved Cr, Mn, and Fe fragments with a typical mass resolution of ~ 0.25 amu, through the use of the Heavy Isotope Spectrometer Telescope (HIST) which was later carried into space on ISEE-3 in 1978. The general agreement between calculation and experiment is good, but some significant differences are reported here.
Abstract:
The intent of this study is to provide formal apparatus which facilitates the investigation of problems in the methodology of science. The introduction contains several examples of such problems and motivates the subsequent formalism.
A general definition of a formal language is presented, and this definition is used to characterize an individual’s view of the world around him. A notion of empirical observation is developed which is independent of language. The interplay of formal language and observation is taken as the central theme. The process of science is conceived as the finding of that formal language that best expresses the available experimental evidence.
To characterize the manner in which a formal language imposes structure on its universe of discourse, the fundamental concepts of elements and states of a formal language are introduced. Using these, the notion of a basis for a formal language is developed as a collection of minimal states distinguishable within the language. The relation of these concepts to those of model theory is discussed.
An a priori probability defined on sets of observations is postulated as a reflection of an individual’s ontology. This probability, in conjunction with a formal language and a basis for that language, induces a subjective probability describing an individual’s conceptual view of admissible configurations of the universe. As a function of this subjective probability, and consequently of language, a measure of the informativeness of empirical observations is introduced and is shown to be intuitively plausible – particularly in the case of scientific experimentation.
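As a rough, purely illustrative analogue of such an informativeness measure (this is the standard surprisal of an observation under a probability assignment, not the definition developed in the thesis), the less probable an observation is under the induced subjective probability, the more informative it is:

    import math

    # Surprisal -log2(p): a stand-in illustration for "informativeness"
    # relative to a subjective probability p. Not the thesis's measure.
    def surprisal(p):
        return -math.log2(p)

    print(surprisal(0.5))    # an even-odds observation carries 1 bit
    print(surprisal(0.01))   # a surprising observation carries ~6.6 bits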
The developed formalism is then systematically applied to the general problems presented in the introduction. The relationship of scientific theories to empirical observations is discussed, and certain tacit, unstatable knowledge is shown to be necessary to fully comprehend the meaning of realistic theories. The idea that many common concepts can be specified only by drawing on knowledge obtained from an infinite number of observations is presented, and the problems of reductionism are examined in this context.
A definition of when one formal language can be considered to be more expressive than another is presented, and the change in the informativeness of an observation as language changes is investigated. In this regard it is shown that the information inherent in an observation may decrease for a more expressive language.
The general problem of induction and its relation to the scientific method are discussed. Two hypotheses concerning an individual’s selection of an optimal language for a particular domain of discourse are presented and specific examples from the introduction are examined.
Abstract:
Experimental Joule-Thomson measurements were made on gaseous propane at temperatures from 100 to 280˚F and at pressures from 8 to 66 psia. Joule-Thomson measurements were also made on gaseous n-butane at temperatures from 100 to 280˚F and at pressures from 8 to 42 psia. For propane, the values of these measurements ranged from 0.07986˚F/psi at 280˚F and 8.01 psia to 0.19685˚F/psi at 100˚F and 66.15 psia. For n-butane, the values ranged from 0.11031˚F/psi at 280˚F and 9.36 psia to 0.30141˚F/psi at 100˚F and 41.02 psia. The experimental values have a maximum error of 1.5 percent.
For n-butane, the measurements of this study did not agree with previous Joule-Thomson measurements made in the Laboratory in 1935. The application of a thermal-transfer correction to the previous experimental measurements would cause the two sets of data to agree. Calculated values of the Joule-Thomson coefficient from other types of p-v-t data did agree with the present measurements for n-butane.
The apparatus used to measure the experimental Joule-Thomson coefficients had a radial-flow porous thimble and was operated at pressure changes between 2.3 and 8.6 psi. The major difference between this and other Joule-Thomson apparatus was its larger weight rates of flow (up to 6 pounds per hour) at atmospheric pressure. The flow rate was shown to have an appreciable effect on non-isenthalpic Joule-Thomson measurements.
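As a rough arithmetic illustration using the figures quoted above (this is not an analysis from the thesis), the temperature change across the porous thimble follows directly from the coefficient and the imposed pressure change, ΔT ≈ μ·Δp:

    # Illustrative arithmetic only: ΔT ≈ μ_JT · Δp, using coefficients and the
    # range of pressure changes quoted in the abstract.
    mu_propane_100F = 0.19685   # ˚F/psi, propane at 100˚F and 66.15 psia
    mu_butane_100F  = 0.30141   # ˚F/psi, n-butane at 100˚F and 41.02 psia

    for label, mu in [("propane", mu_propane_100F), ("n-butane", mu_butane_100F)]:
        for dp in (2.3, 8.6):   # psi, the pressure changes used in the apparatus
            print(f"{label}: Δp = {dp} psi -> ΔT ≈ {mu * dp:.2f} ˚F")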
Photographic materials on pages 79-81 are essential and will not reproduce clearly on Xerox copies. Photographic copies should be ordered.
Abstract:
Yields were measured for 235U sputtered from UF4 by 16O, 19F, and 35Cl over the energy range ~0.12 to 1.5 MeV/amu using a charge-equilibrated beam, in the stripped-beam arrangement for all the incident ions and in the transmission arrangement for 19F and 35Cl. In addition, yields were measured for 19F incident in a wide range of discrete charge states. The angular dependence of all the measured yields was consistent with cos θ. The stripped-beam and transmission data were well fit by the form (A z_eq² ln(BƐ)/Ɛ)⁴, where Ɛ is the ion energy in MeV/amu and z_eq(Ɛ) is the equilibrium charge taken from Ziegler(80). The fitted values of B for the various sets of data were consistent with a constant B0, equal to 36.3 ± 2.7, independent of incident ion. The fitted values of A show no consistent variation with incident ion, although a difference can be noted between the stripped-beam and transmission values, the transmission values being higher.
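A minimal sketch of the fitted yield form, treating A and the equilibrium charge z_eq(Ɛ) as inputs (the placeholder values below are illustrative, not fitted results from the thesis; only B0 = 36.3 is quoted above):

    import math

    # Sketch of the fitted form Y = (A · z_eq² · ln(B·Ɛ)/Ɛ)⁴, with Ɛ in MeV/amu.
    def sputter_yield(eps_mev_per_amu, z_eq, A, B=36.3):
        return (A * z_eq**2 * math.log(B * eps_mev_per_amu) / eps_mev_per_amu) ** 4

    # e.g. an ion at 1.0 MeV/amu with an assumed equilibrium charge of ~6
    print(sputter_yield(1.0, z_eq=6.0, A=1e-3))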
The incident charge data were well fit by the assumptions that the sputtering yield depended locally on a power of the incident ion charge and that the sputtering from the surface is exponentially correlated to conditions in the bulk. The equilibrated sputtering yields derived from these data are in agreement with the stripped beam yields.
In addition, to aid in the understanding of these data, the data of Hakansson(80,81a,81b) were examined and contrasted with the UF4 results. The thermal models of Seiberling(80) and Watson(81) were discussed and compared to the data.
Abstract:
Let E be a compact subset of the n-dimensional unit cube, Iⁿ, and let C be a collection of convex bodies, all of positive n-dimensional Lebesgue measure, such that C contains bodies with arbitrarily small measure. The dimension of E with respect to the covering class C is defined to be the number
dC(E) = sup {β : Hβ,C(E) > 0},
where Hβ,C is the outer measure
inf {Σ m(Ci)^β : ∪ Ci ⊃ E, Ci ∈ C}.
Only the one- and two-dimensional cases are studied. Moreover, the covering classes considered are those consisting of intervals and rectangles, parallel to the coordinate axes, and those closed under translations. A covering class is identified with a set of points in the left-open portion, I′ⁿ, of Iⁿ, whose closure intersects Iⁿ − I′ⁿ. For n = 2, the outer measure Hβ,C is adopted in place of the usual:
inf {Σ (diam Ci)^β : ∪ Ci ⊃ E, Ci ∈ C},
for the purpose of studying the influence of the shape of the covering sets on the dimension dC(E).
If E is a closed set in I¹, let M(E) be the class of all non-decreasing functions μ(x), supported on E with μ(x) = 0 for x ≤ 0 and μ(x) = 1 for x ≥ 1. Define for each μ ∈ M(E)
dC(μ) = lim inf (as c → 0, c ∈ C) log Δμ(c)/log c,
where Δμ(c) = sup over x of (μ(x+c) − μ(x)). It is shown that
dC(E) = sup {dC(μ) : μ ∈ M(E)}.
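As a purely illustrative check of the one-dimensional formula (this example is standard and is not taken from the thesis), take C to be the class of intervals and μ the Cantor (devil's-staircase) function; the quotient log Δμ(c)/log c then approaches log 2/log 3 ≈ 0.631 as c → 0 through the interval widths c = 3^(−k):

    import math

    def cantor(x, depth=45):
        # Devil's staircase: map the ternary digits of x to binary digits
        if x >= 1.0:
            return 1.0
        y, bit = 0.0, 0.5
        for _ in range(depth):
            x *= 3
            d = int(x)
            x -= d
            if d == 1:           # first ternary digit equal to 1: the value is fixed here
                return y + bit
            y += bit * (d // 2)  # digit 0 contributes 0, digit 2 contributes this bit
            bit /= 2
        return y

    def delta_mu(c, grid=2000):
        # sup over x of mu(x + c) - mu(x), approximated on a grid
        return max(cantor(min(x / grid + c, 1.0)) - cantor(x / grid) for x in range(grid))

    for k in (3, 5, 7):
        c = 3.0 ** (-k)
        print(k, math.log(delta_mu(c)) / math.log(c))   # tends to log 2 / log 3 ≈ 0.6309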
This notion of dimension is extended to a certain class F of sub-additive functions, and the problem of studying the behavior of dC(E) as a function of the covering class C is reduced to the study of dC(f) where f ∈ F. Specifically, the set of points in I²,
(*) {(dB(f), dC(f)) : f ∈ F},
is characterized by a comparison of the relative positions of the points of B and C. A region of the form (*) is always closed and doubly-starred with respect to the points (0, 0) and (1, 1). Conversely, given any closed region in I², doubly-starred with respect to (0, 0) and (1, 1), there are covering classes B and C such that (*) is exactly that region. All of the results are shown to apply to the dimension of closed sets E. Similar results can be obtained when a finite number of covering classes are considered.
In two dimensions, the notion of dimension is extended to the class M of functions f(x, y), non-decreasing in x and y, supported on I² with f(x, y) = 0 for x·y = 0 and f(1, 1) = 1, by the formula
dC(f) = lim inf (as s·t → 0, (s, t) ∈ C) log Δf(s, t)/log(s·t),
where
Δf(s, t) = sup over x, y of (f(x+s, y+t) − f(x+s, y) − f(x, y+t) + f(x, y)).
A characterization of the equivalence dC1(f) = dC2(f) for all f ∈ M is given by comparison of the gaps in the sets of products s·t and quotients s/t, (s, t) ∈ Ci (i = 1, 2).
Abstract:
Multi-finger caging offers a rigorous and robust approach to robot grasping. This thesis provides several novel algorithms for caging polygons and polyhedra in two and three dimensions. Caging refers to a robotic grasp that does not necessarily immobilize an object, but prevents it from escaping to infinity. The first algorithm considers caging a polygon in two dimensions using two point fingers. The second algorithm extends the first to three dimensions. The third algorithm considers caging a convex polygon in two dimensions using three point fingers, and considers robustness of this cage to variations in the relative positions of the fingers.
This thesis describes an algorithm for finding all two-finger cage formations of planar polygonal objects based on a contact-space formulation. It shows that two-finger cages have several useful properties in contact space. First, the critical points of the cage representation in the hand’s configuration space appear as critical points of the inter-finger distance function in contact space. Second, these critical points can be graphically characterized directly on the object’s boundary. Third, contact space admits a natural rectangular decomposition such that all critical points lie on the rectangle boundaries, and the sublevel sets of contact space and free space are topologically equivalent. These properties lead to a caging graph that can be readily constructed in contact space. Starting from a desired immobilizing grasp of a polygonal object, the caging graph is searched for the minimal, intermediate, and maximal caging regions surrounding the immobilizing grasp. An example constructed from real-world data illustrates and validates the method.
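As a toy illustration of the contact-space idea (the object, a unit square, and the uniform boundary sampling below are assumptions for illustration; this is not the thesis's algorithm or data), the inter-finger distance can be tabulated as a function of the two fingers' arc-length positions along the object's boundary, and its maximum, one of the critical points the method classifies, located on the grid:

    import numpy as np

    # Toy contact space for a unit square: parameterize the boundary by arc length
    # and tabulate the inter-finger distance d(s1, s2) on an n x n grid.
    def boundary_point(s):
        s = s % 4.0                         # perimeter of the unit square is 4
        if s < 1: return np.array([s, 0.0])
        if s < 2: return np.array([1.0, s - 1])
        if s < 3: return np.array([3 - s, 1.0])
        return np.array([0.0, 4 - s])

    n = 200
    s_values = np.linspace(0.0, 4.0, n, endpoint=False)
    points = np.array([boundary_point(s) for s in s_values])
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

    # Global maximum of d over the discretized contact space (the square's diagonal, ≈ 1.414)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    print(dist[i, j], s_values[i], s_values[j])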
A second algorithm is developed for finding caging formations of a 3D polyhedron for two point fingers using a lower dimensional contact-space formulation. Results from the two-dimensional algorithm are extended to three dimensions. Critical points of the inter-finger distance function are shown to be identical to the critical points of the cage. A decomposition of contact space into 4D regions having useful properties is demonstrated. A geometric analysis of the critical points of the inter-finger distance function results in a catalog of grasps in which the cages change topology, leading to a simple test to classify critical points. With these properties established, the search algorithm from the two-dimensional case may be applied to the three-dimensional problem. An implemented example demonstrates the method.
This thesis also presents a study of cages of convex polygonal objects using three point fingers. It considers a three-parameter model of the relative position of the fingers, which gives complete generality for three point fingers in the plane. It analyzes robustness of caging grasps to variations in the relative position of the fingers without breaking the cage. Using a simple decomposition of free space around the polygon, we present an algorithm which gives all caging placements of the fingers and a characterization of the robustness of these cages.
Abstract:
Combinatorial configurations known as t-designs are studied. These are pairs ⟨B, Π⟩, where each element of B is a k-subset of Π, and each t-subset of Π occurs in exactly λ elements of B, for some fixed integers k and λ. A theory of internal structure of t-designs is developed, and it is shown that any t-design can be decomposed in a natural fashion into a sequence of “simple” subdesigns. The theory is quite similar to the analysis of a group with respect to its normal subgroups, quotient groups, and homomorphisms. The analogous concepts of normal subdesigns, quotient designs, and design homomorphisms are all defined and used.
This structure theory is then applied to the class of t-designs whose automorphism groups are transitive on sets of t points. It is shown that if G is a permutation group transitive on sets of t letters and Φ is any set of letters, then the images of Φ under G form a t-design whose parameters may be calculated from the group G. Such groups are discussed, especially for the case t = 2, and the normal structure of such designs is considered. Theorem 2.2.12 gives necessary and sufficient conditions for a t-design to be simple, purely in terms of the automorphism group of the design. Some constructions are given.
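As a small standalone illustration of this orbit construction (a standard example, not taken from the thesis): the images of the block {0, 1, 3} under the 2-transitive affine group AGL(1,7) form a 2-design with k = 3 and λ = 2, which the sketch below verifies directly:

    from itertools import combinations

    # Orbit of the block {0, 1, 3} under AGL(1,7), the maps x -> a·x + b (mod 7), a ≠ 0.
    base = (0, 1, 3)
    blocks = {tuple(sorted((a * x + b) % 7 for x in base))
              for a in range(1, 7) for b in range(7)}

    # Verify the 2-design property: every pair of points lies in the same number of blocks.
    counts = {pair: 0 for pair in combinations(range(7), 2)}
    for block in blocks:
        for pair in combinations(block, 2):
            counts[pair] += 1

    print(len(blocks), sorted(set(counts.values())))   # 14 blocks; every pair in exactly 2 of them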
Finally, 2-designs with k = 3 and λ = 2 are considered in detail. These designs are first considered in general, with examples illustrating some of the configurations which can arise. Then an attempt is made to classify all such designs with an automorphism group transitive on pairs of points. Many cases are eliminated or reduced to combinations of Steiner triple systems. In the remaining cases, the simple designs are determined to consist of one infinite class and one exceptional case.