991 results for Fan-Complete Space


Relevance: 20.00%

Abstract:

We present a novel account of the theory of commutative spectral triples and their two closest noncommutative generalisations, almost-commutative spectral triples and toric noncommutative manifolds, with a focus on reconstruction theorems, viz., abstract, functional-analytic characterisations of global-analytically defined classes of spectral triples. We begin by reinterpreting Connes's reconstruction theorem for commutative spectral triples as a complete noncommutative-geometric characterisation of Dirac-type operators on compact oriented Riemannian manifolds, and in the process clarify folklore concerning the stability of properties of spectral triples under suitable perturbation of the Dirac operator. Next, we apply this reinterpretation of the commutative reconstruction theorem to obtain a reconstruction theorem for almost-commutative spectral triples. In particular, we propose a revised, manifestly global-analytic definition of almost-commutative spectral triple, and, as an application of this global-analytic perspective, obtain a general result relating the spectral action on the total space of a finite normal compact oriented Riemannian cover to that on the base space. Throughout, we discuss the relevant refinements of these definitions and results to the case of real commutative and almost-commutative spectral triples. Finally, we outline progress towards a reconstruction theorem for toric noncommutative manifolds.
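
For orientation, the commutative case that the reconstruction theorem characterises is conventionally packaged as a spectral triple of the following standard form (a textbook sketch in our own notation, not a statement taken from the thesis):

\[
  (\mathcal{A}, \mathcal{H}, D) \;=\; \bigl(\, C^\infty(M),\; L^2(M, S),\; D \,\bigr),
\]

where \(M\) is a compact oriented Riemannian manifold, \(S \to M\) a bundle on which the Dirac-type operator \(D\) acts, each commutator \([D, f]\) with \(f \in C^\infty(M)\) extends to a bounded operator, and \(D\) has compact resolvent.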

Relevance: 20.00%

Abstract:

The concept of seismogenic asperities and aseismic barriers has become a useful paradigm within which to understand the seismogenic behavior of major faults. Since asperities and barriers can be thought of as defining the potential rupture area of large megathrust earthquakes, it is important to identify their respective spatial extents, constrain their temporal longevity, and develop a physical understanding of their behavior. Space geodesy is making critical contributions to the identification of slip asperities and barriers, but progress in many geographical regions depends on improving the accuracy and precision of the basic measurements. This thesis begins with technical developments aimed at improving satellite radar interferometric measurements of ground deformation, in which we introduce an empirical correction algorithm for unwanted interferometric path delays caused by spatially and temporally variable radar wave propagation speeds in the atmosphere. In Chapter 2, I combine geodetic datasets with complementary spatio-temporal resolutions to improve our understanding of the spatial distribution of crustal deformation sources and their associated temporal evolution, using observations from Long Valley Caldera (California) as a test bed. In the third chapter, I apply the tools developed in the first two chapters to analyze postseismic deformation associated with the 2010 Mw=8.8 Maule (Chile) earthquake. The result delimits patches where afterslip occurs, explores their relationship to coseismic rupture, quantifies frictional properties associated with inferred patches of afterslip, and discusses the relationship of asperities and barriers to long-term topography. The final chapter investigates interseismic deformation of the eastern Makran subduction zone using satellite radar interferometry alone, and demonstrates that with state-of-the-art techniques it is possible to quantify tectonic signals of small amplitude and long wavelength. Portions of the eastern Makran for which we estimate low fault coupling correspond to areas where bathymetric features on the downgoing plate are presently subducting, whereas the region of the 1945 M=8.1 earthquake appears to be more highly coupled.
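
As an illustration of the kind of empirical path-delay correction described above: one common approach regresses unwrapped interferometric phase against topographic height and removes the fitted trend. The sketch below follows that idea and is an assumption on our part, not the thesis's specific algorithm.

```python
import numpy as np

def remove_elevation_correlated_delay(phase, elevation, mask=None):
    """Fit and subtract a linear phase-versus-elevation trend from an
    unwrapped interferogram, a simple empirical proxy for a vertically
    stratified tropospheric delay. `phase` and `elevation` are 2-D arrays
    of equal shape; `mask` optionally excludes pixels (e.g. actively
    deforming areas) from the fit."""
    valid = np.isfinite(phase) & np.isfinite(elevation)
    if mask is not None:
        valid &= mask
    # Least-squares fit of phase ~ a * elevation + b over the valid pixels.
    a, b = np.polyfit(elevation[valid], phase[valid], 1)
    return phase - (a * elevation + b)
```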

Relevance: 20.00%

Abstract:

Lipid bilayer membranes are models for cell membranes--the structure that helps regulate cell function. Cell membranes are heterogeneous, and the coupling between composition and shape gives rise to complex behaviors that are important to regulation. This thesis seeks to systematically build and analyze complete models to understand the behavior of multi-component membranes.

We propose a model and use it to derive the equilibrium and stability conditions for a general class of closed multi-component biological membranes. Our analysis shows that the critical modes of these membranes have high frequencies, unlike those of single-component vesicles, and that their stability depends on system size, unlike in systems undergoing spinodal decomposition in flat space. An important implication is that small perturbations may nucleate localized but very large deformations. We compare these results with experimental observations.
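
Models of this kind are typically built on a Canham-Helfrich bending energy with composition-dependent moduli; the functional below is a generic form of that type, written in our own notation as an illustration rather than the specific model proposed in the thesis:

\[
  E[\Gamma, \phi] \;=\; \int_{\Gamma}
    \Bigl[ \tfrac{\kappa(\phi)}{2}\bigl(2H - c_0(\phi)\bigr)^{2}
      + \sigma + f(\phi)
      + \tfrac{\epsilon}{2}\,\lvert \nabla_{\Gamma}\phi \rvert^{2} \Bigr]\, dA,
\]

where \(\Gamma\) is the membrane surface, \(H\) its mean curvature, \(\phi\) the local composition, \(\kappa(\phi)\) and \(c_0(\phi)\) the composition-dependent bending modulus and spontaneous curvature, \(\sigma\) a surface tension, \(f(\phi)\) a mixing free energy, and \(\epsilon\) a gradient-penalty coefficient.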

We also study open membranes to gain insight into long tubular membranes such as those that arise in nerve cells. We derive a complete system of equations for open membranes by using the principle of virtual work. Our linear stability analysis predicts that tubular membranes tend to adopt coiling shapes if the tension is small, cylindrical shapes if the tension is moderate, and beading shapes if the tension is large. This is consistent with experimental observations of nerve fibers reported in the literature. Further, we provide numerical solutions to the fully nonlinear equilibrium equations for some problems, and show that the observed mode shapes are consistent with those suggested by the linear stability analysis. Our work also proves that beading of nerve fibers can appear purely as a mechanical response of the membrane.

Relevance: 20.00%

Abstract:

The termite hindgut microbial ecosystem functions like a miniature lignocellulose-metabolizing natural bioreactor, has significant implications for nutrient cycling in the terrestrial environment, and represents an array of microbial metabolic diversity. Deciphering the intricacies of this microbial community, to obtain as complete a picture as possible of how it functions as a whole, requires a combination of traditional and cutting-edge bioinformatic, molecular, physiological, and culturing approaches. Isolates from this ecosystem, including Treponema primitia str. ZAS-1 and ZAS-2 as well as T. azotonutricium str. ZAS-9, have been significant resources for better understanding the termite system. While not all functions predicted by the genomes of these three isolates have been demonstrated in vitro, these isolates do have the capacity for several metabolisms unique to spirochetes and critical to the termite system’s reliance upon lignocellulose. In this thesis, work on culturing, enriching for, and isolating diverse microorganisms from the termite hindgut is discussed. Additionally, strategies by which members of the termite hindgut microbial community defend against O2 stress and generate acetate, the “biofuel” of the termite system, are proposed. In particular, catechol 2,3-dioxygenase and other meta-cleavage catabolic pathway genes are described in the “anaerobic” termite hindgut spirochetes T. primitia str. ZAS-1 and ZAS-2, and the first evidence for aromatic ring cleavage in the phylum (division) Spirochetes is also presented. These results suggest that the potential for O2-dependent, yet nonrespiratory, metabolisms of plant-derived aromatics should be re-evaluated in termite hindgut communities. Potential future work is also outlined.

Relevance: 20.00%

Abstract:

This thesis presents a concept for ultra-lightweight deformable mirrors based on a thin substrate of optical surface quality coated with continuous active piezopolymer layers that provide modes of actuation and shape correction. This concept eliminates any kind of stiff backing structure for the mirror surface and exploits micro-fabrication technologies to provide a tight integration of the active materials into the mirror structure, avoiding actuator print-through effects. Proof-of-concept, 10-cm-diameter mirrors with a low areal density of about 0.5 kg/m² have been designed, built, and tested to measure their shape-correction performance and verify the models used for design. The low-cost manufacturing scheme uses replication techniques and strives to minimize residual stresses that would cause the optical figure to deviate from that of the master mandrel. It does not require precision tolerancing, is lightweight, and is therefore potentially scalable to larger diameters for use in large, modular space telescopes. Other potential applications for such a laminate include ground-based mirrors for solar energy collection, adaptive optics for atmospheric turbulence correction, laser communications, and other shape-control applications.
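
For scale, the quoted diameter and areal density imply a mirror mass of only a few grams; a quick check using just the numbers stated above:

```python
import math

diameter = 0.10        # m, proof-of-concept mirror diameter quoted above
areal_density = 0.5    # kg/m^2, quoted areal density

area = math.pi * (diameter / 2) ** 2   # ~7.9e-3 m^2
mass = areal_density * area            # ~3.9e-3 kg, i.e. roughly 4 grams
print(f"area ~ {area * 1e4:.1f} cm^2, mass ~ {mass * 1e3:.1f} g")
```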

The immediate application for these mirrors is for the Autonomous Assembly and Reconfiguration of a Space Telescope (AAReST) mission, which is a university mission under development by Caltech, the University of Surrey, and JPL. The design concept, fabrication methodology, material behaviors and measurements, mirror modeling, mounting and control electronics design, shape control experiments, predictive performance analysis, and remaining challenges are presented herein. The experiments have validated numerical models of the mirror, and the mirror models have been used within a model of the telescope in order to predict the optical performance. A demonstration of this mirror concept, along with other new telescope technologies, is planned to take place during the AAReST mission.

Relevance: 20.00%

Abstract:

The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function P_M which carries every element into the closest element of a given subspace M) is set forth and examined.

If dim M = dim H - 1, then P_M is linear. If P_N is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then P_M is linear.

The projective bound Q, defined to be the supremum of the operator norm of P_M for all subspaces, is in the range 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, P_M is always linear, and a characterization of those norms is given.
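
Restating the two definitions above in symbols (our notation; where the nearest point is not unique, a fixed selection is understood):

\[
  P_M(x) \;=\; \operatorname*{arg\,min}_{m \in M} \, \lVert x - m \rVert, \qquad x \in H,
\]
\[
  Q \;=\; \sup_{M \subseteq H} \, \lVert P_M \rVert_{\mathrm{op}}
      \;=\; \sup_{M} \, \sup_{\lVert x \rVert \le 1} \lVert P_M(x) \rVert,
  \qquad 1 \le Q < 2 .
\]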

If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when P_M is linear its adjoint P_M^H is the projection on (kernel P_M) by the dual norm. The projective bounds of a norm and its dual are equal.

The notion of a pseudo-inverse F^+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F - G∥) is c/∥F^+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both domain and range spaces have Q = 1 if and only if (F^+)^+ = F for every F. This condition is also sufficient to prove that we have (F^+)^H = (F^H)^+, where the latter pseudo-inverse is taken using dual norms.

In all results, the real and complex cases are handled in a completely parallel fashion.

Relevance: 20.00%

Abstract:

We examine voting situations in which individuals have incomplete information about each other's true preferences. In many respects, this work is motivated by a desire to provide a more complete understanding of so-called probabilistic voting.

Chapter 2 examines the similarities and differences between the incentives faced by politicians who seek to maximize expected vote share, expected plurality, or probability of victory in single-member, single-vote, simple plurality electoral systems. We find that, in general, the candidates' optimal policies in such an electoral system vary greatly depending on their objective function. We provide several examples, as well as a genericity result stating that almost all such electoral systems (with respect to the distributions of voter behavior) exhibit different incentives for candidates who seek to maximize expected vote share and for those who seek to maximize probability of victory.

In Chapter 3, we adopt a random utility maximizing framework in which individuals' preferences are subject to action-specific exogenous shocks. We show that Nash equilibria exist in voting games possessing such an information structure and in which voters and candidates are each aware that every voter's preferences are subject to such shocks. A special case of our framework is that in which voters play a Quantal Response Equilibrium (McKelvey and Palfrey, 1995, 1998). We then examine candidate competition in such games and show that, for sufficiently large electorates, regardless of the dimensionality of the policy space or the number of candidates, there exists a strict equilibrium at the social welfare optimum (i.e., the point that maximizes the sum of voters' utility functions). In two-candidate contests we find that this equilibrium is unique.
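
For reference, the logit version of Quantal Response Equilibrium introduced by McKelvey and Palfrey assigns choice probabilities of the following standard form (stated here for orientation, not as the particular specification used in the thesis):

\[
  \sigma_i(a) \;=\;
    \frac{\exp\bigl(\lambda\, \bar u_i(a, \sigma_{-i})\bigr)}
         {\sum_{a' \in A_i} \exp\bigl(\lambda\, \bar u_i(a', \sigma_{-i})\bigr)},
\]

where \(\bar u_i(a, \sigma_{-i})\) is player i's expected utility from action a given the other players' mixed strategies, \(A_i\) is i's action set, and \(\lambda \ge 0\) indexes responsiveness; an equilibrium is a profile \(\sigma\) at which these probabilities are mutually consistent.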

Finally, in Chapter 4, we attempt the first steps towards a theory of equilibrium in games possessing both continuous action spaces and action-specific preference shocks. Our notion of equilibrium, Variational Response Equilibrium, is shown to exist in all games with continuous payoff functions. We discuss the similarities and differences between this notion of equilibrium and the notion of Quantal Response Equilibrium and offer possible extensions of our framework.

Relevance: 20.00%

Abstract:

Motivated by recent Mars Science Laboratory (MSL) results in which the ablation rate of the Phenolic Impregnated Carbon Ablator (PICA) heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing entry, descent, and landing (EDL) technologies for future space missions.

Due to the difficulties of performing flight tests in the hypervelocity regime, a new ground testing facility, the vertical expansion tunnel (VET), is proposed. The adverse effects of secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure to the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be used for hypervelocity ground testing without the problems associated with secondary diaphragm rupture.

The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground testing facilities and re-entry conditions. The effectiveness of different constraints is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits of the RCCE method were identified: 1) the reduction in the number of equations that must be solved to model a reacting flow; 2) the reduction in the stiffness of the system of equations to be solved; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, and to reuse the same table for multiple simulations.
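
In outline, RCCE replaces the full set of species equations by a constrained equilibrium: the composition is obtained by minimizing the Gibbs free energy subject to a small set of constraints, and only those constraints are integrated in time. A generic statement of the method (our notation; the particular constraints examined in the thesis are not reproduced here) is:

\[
  \min_{n_j \ge 0}\; G(T, p, n_1, \dots, n_{N_s})
  \quad \text{subject to} \quad
  \sum_{j=1}^{N_s} a_{ij}\, n_j = C_i, \qquad i = 1, \dots, N_c,
\]
\[
  \frac{dC_i}{dt} \;=\; \sum_{j=1}^{N_s} a_{ij}\, \dot\omega_j,
\]

where \(n_j\) are species mole numbers, \(a_{ij}\) the constraint matrix, \(C_i\) the constraint values, and \(\dot\omega_j\) the net chemical production rates; since \(N_c \ll N_s\), far fewer (and less stiff) equations are integrated, and the constrained-equilibrium composition can be tabulated against the \(C_i\) in advance.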

Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at high temperatures within a heatshield is investigated. A necessary link between the composition of the solid resin and the composition of the pyrolysis gases created is provided. This link, combined with a detailed investigation of a reacting pyrolysis gas mixture, allows a much-needed, consistent, and thorough description of many of the physical phenomena occurring in a PICA heatshield, and of their implications, to be presented.

Through the use of computational fluid mechanics and computational chemistry methods, significant contributions have been made to advancing ground testing facilities, computational methods for reacting flows, and ablation modeling.

Relevance: 20.00%

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
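
Schematically, the model described above is the mathematical program (our symbols, summarizing the verbal statement):

\[
  \min_{E \ge 0}\; C(E)
  \quad \text{subject to} \quad
  A_k(E) \;\le\; S_k, \qquad k = 1, \dots, K,
\]

where \(E\) is the vector of primary contaminant emission levels, \(C(E)\) the minimum cost of attaining those levels, \(A_k(E)\) the resulting value of the k-th air quality measure, and \(S_k\) the level required of that measure.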

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on" devices, are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels (1300 tons/day RHC and 1000 tons/day NOx in 1969; 670 tons/day RHC and 790 tons/day NOx at the base 1975 level) can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance: 20.00%

Abstract:

Conduction through TiO2 films of thickness 100 to 450 Å has been investigated. The samples were prepared either by anodization of Ti or by evaporation of TiO2, with Au or Al evaporated for contacts. The anodized samples exhibited considerable hysteresis due to electrical forming; however, it was possible to avoid this problem with the evaporated samples, from which complete sets of experimental results were obtained and used in the analysis. Electrical measurements included: the dependence of current and capacitance on dc voltage and temperature; the dependence of capacitance and conductance on frequency and temperature; and transient measurements of current and capacitance. A thick (3000 Å) evaporated TiO2 film was used for measuring the dielectric constant (27.5) and the optical dispersion, the latter being similar to that of rutile. An electron transmission diffraction pattern of an evaporated film indicated an essentially amorphous structure with a short-range order that could be related to rutile. Photoresponse measurements indicated the same band gap of about 3 eV for anodized and evaporated films and for reduced rutile crystals, and gave the barrier energies at the contacts.
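
For context, the quoted dielectric constant implies parallel-plate capacitances per unit area of roughly 1 to 25 nF/mm² across the thicknesses mentioned; a quick estimate using only the numbers given above (and ignoring the space-charge effects discussed below):

```python
EPS_R = 27.5         # measured dielectric constant of the evaporated TiO2
EPS_0 = 8.854e-12    # F/m, vacuum permittivity

for thickness_angstrom in (100, 450, 3000):
    d = thickness_angstrom * 1e-10          # film thickness in metres
    c_per_area = EPS_R * EPS_0 / d          # F/m^2, parallel-plate formula
    # The factor 1e3 converts F/m^2 to nF/mm^2.
    print(f"{thickness_angstrom:5d} Å : {c_per_area * 1e3:.2f} nF/mm^2")
```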

The results are interpreted in a self-consistent manner by considering the effect of a large impurity concentration in the films and a correspondingly large ionic space charge. The resulting potential profile in the oxide film leads to a thermally assisted tunneling process between the contacts and the interior of the oxide. A general relation is derived for the steady-state current through structures of this kind. This in turn is expressed quantitatively for each of two possible limiting types of impurity distributions, one of which gives barriers of an exponential shape and leads to quantitative predictions in close agreement with the experimental results. For films somewhat thicker than 100 Å, the theory is formulated essentially in terms of only the independently measured barrier energies and a characteristic parameter of the oxide that depends primarily on the maximum impurity concentration at the contacts. A single value of this parameter gives consistent agreement with the experimentally observed dependence of both current and capacitance on dc voltage and temperature, with the maximum impurity concentration found to be approximately the saturation concentration quoted for rutile. This explains the relative insensitivity of the electrical properties of the films to the exact conditions of formation.