16 results for Pseudo-Lipschitzness
in CaltechTHESIS
Abstract:
The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function PM which carries every element into the closest element of a given subspace M) is set forth and examined.
If dim M = dim H - 1, then PM is linear. If PN is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then PM is linear.
The projective bound Q, defined to be the supremum of the operator norm of PM for all subspaces, is in the range 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, PM is always linear, and a characterization of those norms is given.
If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when PM is linear its adjoint PM^H is the projection on (kernel PM)⊥ with respect to the dual norm. The projective bounds of a norm and its dual are equal.
The notion of a pseudo-inverse F+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F - G∥) is c/∥F+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both domain and range spaces have Q = 1 if and only if (F+)+ = F for every F. This condition is also sufficient to prove that we have (F+)H = (FH)+, where the latter pseudo-inverse is taken using dual norms.
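For the Euclidean special case (Q = 1), the distance formula reduces to the classical Eckart-Young fact that, in the spectral norm, the distance from F to the nearest transformation of lower rank is the smallest nonzero singular value, i.e. 1/∥F+∥. A minimal numeric sketch of that special case (a hypothetical diagonal example, not taken from the thesis):

```python
# Eckart-Young illustration for a diagonal map F = diag(3, 1, 0) on R^3.
# For Euclidean norms (Q = 1), dist(F, {rank < rank F}) = 1/||F+||,
# where ||.|| is the spectral (operator) norm.

singular_values = [3.0, 1.0, 0.0]        # spectrum of F
rank = sum(1 for s in singular_values if s > 0)

# The pseudo-inverse of a diagonal map inverts the nonzero singular values.
pinv_singular_values = [1.0 / s if s > 0 else 0.0 for s in singular_values]

op_norm_pinv = max(pinv_singular_values)          # ||F+|| = 1/s_min(F)
smallest_nonzero_sv = min(s for s in singular_values if s > 0)

# The nearest lower-rank map zeroes the smallest nonzero singular value,
# so the distance to it is exactly that singular value.
distance_to_lower_rank = smallest_nonzero_sv

assert abs(distance_to_lower_rank - 1.0 / op_norm_pinv) < 1e-12
```

For non-Euclidean norms the thesis's result replaces the equality by the bound c/∥F+∥ with 1 ≤ c < Q.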
In all results, the real and complex cases are handled in a completely parallel fashion.
Abstract:
This dissertation is concerned with the problem of determining the dynamic characteristics of complicated engineering systems and structures from the measurements made during dynamic tests or natural excitations. Particular attention is given to the identification and modeling of the behavior of structural dynamic systems in the nonlinear hysteretic response regime. Once a model for the system has been identified, it is intended to use this model to assess the condition of the system and to predict the response to future excitations.
A new identification methodology based upon a generalization of the method of modal identification for multi-degree-of-freedom dynamical systems subjected to base motion is developed. The situation considered herein is that in which only the base input and the response of a small number of degrees-of-freedom of the system are measured. In this method, called the generalized modal identification method, the response is separated into "modes" which are analogous to those of a linear system. Both parametric and nonparametric models can be employed to extract the unknown nature, hysteretic or nonhysteretic, of the generalized restoring force for each mode.
In this study, a simple four-term nonparametric model is used first to provide a nonhysteretic estimate of the nonlinear stiffness and energy dissipation behavior. To extract the hysteretic nature of nonlinear systems, a two-parameter distributed element model is then employed. This model exploits the results of the nonparametric identification as an initial estimate for the model parameters. This approach greatly improves the convergence of the subsequent optimization process.
The capability of the new method is verified using simulated response data from a three-degree-of-freedom system. The new method is also applied to the analysis of response data obtained from the U.S.-Japan cooperative pseudo-dynamic test of a full-scale six-story steel-frame structure.
The new system identification method described has been found to be both accurate and computationally efficient. It is believed that it will provide a useful tool for the analysis of structural response data.
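The abstract does not state the four-term model explicitly; as a hedged illustration of the nonparametric idea, the sketch below fits a two-term restoring force r(x, v) = k·x + c·v to simulated single-mode response data by least squares (all signals and parameter values are hypothetical):

```python
import math

# Hypothetical single-mode "measured" response: x(t) = sin(t), v(t) = cos(t),
# with a restoring force generated by true stiffness k = 2.0 and damping c = 0.5.
k_true, c_true = 2.0, 0.5
ts = [0.01 * i for i in range(1000)]
xs = [math.sin(t) for t in ts]
vs = [math.cos(t) for t in ts]
rs = [k_true * x + c_true * v for x, v in zip(xs, vs)]

# Least-squares fit of r = k*x + c*v via the 2x2 normal equations.
sxx = sum(x * x for x in xs)
sxv = sum(x * v for x, v in zip(xs, vs))
svv = sum(v * v for v in vs)
sxr = sum(x * r for x, r in zip(xs, rs))
svr = sum(v * r for v, r in zip(vs, rs))

det = sxx * svv - sxv * sxv
k_est = (sxr * svv - sxv * svr) / det   # estimated stiffness
c_est = (sxx * svr - sxv * sxr) / det   # estimated damping
```

A nonparametric estimate of this kind can then seed the parameters of a hysteretic (distributed element) model, which is the role it plays in the identification procedure described above.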
Abstract:
I. Existence and Structure of Bifurcation Branches
The problem of bifurcation is formulated as an operator equation in a Banach space, depending on relevant control parameters, say of the form G(u,λ) = 0. If dim N(G_u(u_0,λ_0)) = m, the method of Lyapunov-Schmidt reduces the problem to the solution of m algebraic equations. The possible structure of these equations and the various types of solution behaviour are discussed. The equations are normally derived under the assumption that G^0_λ ∈ R(G^0_u). It is shown, however, that if G^0_λ ∉ R(G^0_u) then bifurcation still may occur, and the local structure of such branches is determined. A new and compact proof of the existence of multiple bifurcation is derived. The linearized stability near simple bifurcation and "normal" limit points is then indicated.
II. Constructive Techniques for the Generation of Solution Branches
A method is described in which the dependence of the solution arc on a naturally occurring parameter is replaced by the dependence on a form of pseudo-arclength. This results in continuation procedures through regular and "normal" limit points. In the neighborhood of bifurcation points, however, the associated linear operator is nearly singular causing difficulty in the convergence of continuation methods. A study of the approach to singularity of this operator yields convergence proofs for an iterative method for determining the solution arc in the neighborhood of a simple bifurcation point. As a result of these considerations, a new constructive proof of bifurcation is determined.
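A minimal sketch of pseudo-arclength continuation on a scalar model problem with a fold, in the spirit of the method described (the model G(u, λ) = u² + λ and all numerical settings below are illustrative, not from the thesis):

```python
# Pseudo-arclength continuation on the scalar model problem
# G(u, lam) = u**2 + lam = 0, whose solution branch lam = -u**2 has a
# "normal" limit (fold) point at (0, 0).  Natural continuation in lam
# stalls at the fold; arclength continuation passes through it.

def G(u, lam):
    return u * u + lam

def newton_correct(u, lam, u0, lam0, du, dlam, ds, iters=20):
    # Solve G(u, lam) = 0 together with the arclength normalization
    # du*(u - u0) + dlam*(lam - lam0) - ds = 0 (the bordered system).
    for _ in range(iters):
        r1 = G(u, lam)
        r2 = du * (u - u0) + dlam * (lam - lam0) - ds
        # Jacobian of the bordered system: [[2u, 1], [du, dlam]]
        a, b = 2.0 * u, 1.0
        det = a * dlam - b * du
        u -= (r1 * dlam - b * r2) / det
        lam -= (a * r2 - r1 * du) / det
    return u, lam

ds = 0.1
u, lam = 1.0, -1.0          # start on the branch (G = 0)
du, dlam = -1.0, 0.0        # initial continuation direction
branch = [(u, lam)]
for _ in range(30):
    # predictor step along the tangent, then Newton corrector
    u_pred, lam_pred = u + ds * du, lam + ds * dlam
    u_new, lam_new = newton_correct(u_pred, lam_pred, u, lam, du, dlam, ds)
    # update the tangent as the normalized secant direction
    tu, tl = u_new - u, lam_new - lam
    norm = (tu * tu + tl * tl) ** 0.5
    du, dlam = tu / norm, tl / norm
    u, lam = u_new, lam_new
    branch.append((u, lam))
```

Every computed point stays on the solution branch, and the continuation traverses the fold at (0, 0) into u < 0, which is exactly what parameter-natural continuation cannot do.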
Abstract:
This document contains three papers examining the microstructure of financial interaction in development and market settings. I first examine the industrial organization of financial exchanges, specifically limit order markets. In this section, I perform a case study of Google stock surrounding a surprising earnings announcement in the 3rd quarter of 2009, uncovering parameters that describe information flows and liquidity provision. I then explore the disbursement process for community-driven development projects. This section is game theoretic in nature, using a novel three-player ultimatum structure. I finally develop econometric tools to simulate equilibrium and identify equilibrium models in limit order markets.
In chapter two, I estimate an equilibrium model using limit order data, finding parameters that describe information and liquidity preferences for trading. As a case study, I estimate the model for Google stock surrounding an unexpected good-news earnings announcement in the 3rd quarter of 2009. I find a substantial decrease in asymmetric information prior to the earnings announcement. I also simulate counterfactual dealer markets and find empirical evidence that limit order markets perform more efficiently than do their dealer market counterparts.
In chapter three, I examine Community-Driven Development. Community-Driven Development is considered a tool empowering communities to develop their own aid projects. While evidence has been mixed as to the effectiveness of CDD in achieving disbursement to intended beneficiaries, the literature maintains that local elites generally take control of most programs. I present a three-player ultimatum game which describes a potential decentralized aid procurement process. Players successively split a dollar in aid money, and the final player--the targeted community member--decides whether or not to blow the whistle. Despite the elite capture present in my model, I find conditions under which money reaches targeted recipients. My results describe a perverse possibility in the decentralized aid process which could make detection of elite capture more difficult than previously considered. These processes may reconcile recent empirical work claiming effectiveness of the decentralized aid process with case studies which claim otherwise.
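The abstract does not give the exact game form; the sketch below is a simplified backward-induction illustration in which the first two players' splits are collapsed into a single allocation choice and the final player compares the offered share with a hypothetical whistle-blowing value w (all payoffs hypothetical):

```python
# Backward-induction sketch of a successive-split ultimatum game with a
# final whistle-blowing option.  The grid, payoffs, and whistle value w
# are hypothetical, and the first two players' moves are collapsed into
# one allocation choice for brevity.

GRID = 100  # split the dollar in cents

def solve(whistle_value):
    # The targeted community member (player 3) blows the whistle iff the
    # offered share falls below the whistle value; whistle-blowing
    # destroys all payoffs.
    best = None
    for keep1 in range(GRID + 1):               # elite's share
        for keep2 in range(GRID + 1 - keep1):   # intermediary's share
            share3 = GRID - keep1 - keep2
            payoff1 = keep1 / GRID if share3 / GRID >= whistle_value else 0.0
            if best is None or payoff1 > best[0]:
                best = (payoff1, keep1 / GRID, keep2 / GRID, share3 / GRID)
    return best

_, s1, s2, s3 = solve(0.0)    # no whistle threat: full elite capture
_, t1, t2, t3 = solve(0.25)   # credible threat: community member receives w
```

Even with elite capture, a credible whistle-blowing threat forces a positive share through to the targeted recipient, which is the qualitative possibility the chapter formalizes.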
In chapter four, I develop in more depth the empirical and computational means to estimate model parameters in the case study in chapter two. I describe the liquidity supplier problem and equilibrium among those suppliers. I then outline the analytical forms for computing certainty-equivalent utilities for the informed trader. Following this, I describe a recursive algorithm which facilitates computing equilibrium in supply curves. Finally, I outline implementation of the Method of Simulated Moments in this context, focusing on Indirect Inference and formulating the pseudo model.
Abstract:
There is a growing amount of experimental evidence that suggests people often deviate from the predictions of game theory. Some scholars attempt to explain the observations by introducing errors into behavioral models. However, most of these modifications are situation dependent and do not generalize. A new theory, called the rational novice model, is introduced as an attempt to provide a general theory that accounts for erroneous behavior. The rational novice model is based on two central principles. The first is that people systematically make inaccurate guesses when they are evaluating their options in a game-like situation. The second is that people treat their decisions as a portfolio problem. As a result, non-optimal actions in a game theoretic sense may be included in the rational novice strategy profile with positive weights.
The rational novice model can be divided into two parts: the behavioral model and the equilibrium concept. In a theoretical chapter, the mathematics of the behavioral model and the equilibrium concept are introduced. The existence of the equilibrium is established. In addition, the Nash equilibrium is shown to be a special case of the rational novice equilibrium. In another chapter, the rational novice model is applied to a voluntary contribution game. Numerical methods were used to obtain the solution. The model is estimated with data obtained from the Palfrey and Prisbrey experimental study of the voluntary contribution game. It is found that the rational novice model explains the data better than the Nash model. Although a formal statistical test was not used, pseudo R^2 analysis indicates that the rational novice model is better than a Probit model similar to the one used in the Palfrey and Prisbrey study.
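As an illustration of the pseudo-R² comparison mentioned, McFadden's pseudo-R² for a binary-choice model is 1 − lnL(model)/lnL(null), where the null model predicts the sample frequency for every observation. The data and fitted probabilities below are hypothetical, not from the Palfrey and Prisbrey study:

```python
import math

# McFadden's pseudo-R^2 for a binary-choice model:
#   rho^2 = 1 - lnL(model) / lnL(null).
# Outcomes (contribute / not) and fitted probabilities are hypothetical.

outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
p_model  = [0.9, 0.8, 0.2, 0.7, 0.3, 0.9, 0.8, 0.1, 0.9, 0.7]

def log_lik(ps, ys):
    return sum(math.log(p) if y else math.log(1 - p) for p, y in zip(ps, ys))

p_null = sum(outcomes) / len(outcomes)       # null model: sample frequency
ll_model = log_lik(p_model, outcomes)
ll_null = log_lik([p_null] * len(outcomes), outcomes)
pseudo_r2 = 1 - ll_model / ll_null           # higher = better fit
```

Comparing pseudo-R² values across models (rational novice versus Probit, say) is an informal goodness-of-fit check rather than a formal statistical test, as the abstract notes.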
The rational novice model is also applied to a first price sealed bid auction. Again, computing techniques were used to obtain a numerical solution. The data obtained from the Chen and Plott study were used to estimate the model. The rational novice model outperforms the CRRAM, the primary Nash model studied in the Chen and Plott study. However, the rational novice model is not the best amongst all models. A sophisticated rule-of-thumb, called the SOPAM, offers the best explanation of the data.
Abstract:
Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. There are a very limited number of moving mesh methods designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this thesis we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations and the mesh equations are then coupled in a way typical of the existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. The proposed methods are readily applicable to (weakly) non-degenerate field theories---numerical results for the Sine-Gordon equation are presented.
In an attempt to extend our approach to degenerate field theories, in the last part of this thesis we construct higher-order variational integrators for a class of degenerate systems described by Lagrangians that are linear in velocities. We analyze the geometry underlying such systems and develop the appropriate theory for variational integration. Our main observation is that the evolution takes place on the primary constraint and the 'Hamiltonian' equations of motion can be formulated as an index 1 differential-algebraic system. We then proceed to construct variational Runge-Kutta methods and analyze their properties. The general properties of Runge-Kutta methods depend on the 'velocity' part of the Lagrangian. If the 'velocity' part is also linear in the position coordinate, then we show that non-partitioned variational Runge-Kutta methods are equivalent to integration of the corresponding first-order Euler-Lagrange equations, which have the form of a Poisson system with a constant structure matrix, and the classical properties of the Runge-Kutta method are retained. If the 'velocity' part is nonlinear in the position coordinate, we observe a reduction of the order of convergence, which is typical of numerical integration of DAEs. We also apply our methods to several models and present the results of our numerical experiments.
Abstract:
This thesis consists of three parts. Chapter 2 deals with the dynamic buckling behavior of steel braces under cyclic axial end displacement. Braces under such a loading condition belong to a class of "acceleration magnifying" structural components, in which a small motion at the loading points can cause large internal acceleration and inertia. This member-level inertia is frequently ignored in current studies of braces and braced structures. This chapter shows that, under certain conditions, the inclusion of the member-level inertia can lead to brace behavior fundamentally different from that predicted by the quasi-static method. This result has significance for the correct use of the quasi-static, pseudo-dynamic, and static condensation methods in the simulation of braces or braced structures under dynamic loading. The strain magnitude and distribution in the braces are also studied in this chapter.
Chapter 3 examines the effect of column uplift on the earthquake response of braced steel frames and explores the feasibility of flexible column-base anchoring. It is found that fully anchored braced-bay columns can induce extremely large internal forces in the braced-bay members and their connections, thus increasing the risk of failures observed in recent earthquakes. Flexible braced-bay column anchoring can significantly reduce the braced bay member force, but at the same time also introduces large story drift and column uplift. The pounding of an uplifting column with its support can result in very high compressive axial force.
Chapter 4 conducts a comparative study on the effectiveness of a proposed non-buckling bracing system and several conventional bracing systems. The non-buckling bracing system eliminates buckling and thus can be composed of small individual braces distributed widely in a structure to reduce bracing force concentration and increase redundancy. The elimination of buckling results in a significantly more effective bracing system compared with the conventional bracing systems. Among the conventional bracing systems, bracing configurations and end conditions for the bracing members affect the effectiveness.
The studies in Chapter 3 and Chapter 4 also indicate that code-designed conventionally braced steel frames can experience unacceptably severe response under the strong ground motions recorded during the recent Northridge and Kobe earthquakes.
Abstract:
We simulate incompressible, MHD turbulence using a pseudo-spectral code. Our major conclusions are as follows.
1) MHD turbulence is most conveniently described in terms of counter propagating shear Alfvén and slow waves. Shear Alfvén waves control the cascade dynamics. Slow waves play a passive role and adopt the spectrum set by the shear Alfvén waves. Cascades composed entirely of shear Alfvén waves do not generate a significant measure of slow waves.
2) MHD turbulence is anisotropic, with energy cascading more rapidly along k⊥ than along k∥, where k⊥ and k∥ refer to wavevector components perpendicular and parallel to the local magnetic field. Anisotropy increases with increasing k⊥ such that excited modes are confined inside a cone bounded by k∥ ∝ k⊥^γ, where γ < 1. The opening angle of the cone, θ(k⊥) ∝ k⊥^(-(1-γ)), defines the scale-dependent anisotropy.
3) MHD turbulence is generically strong in the sense that the waves which comprise it suffer order-unity distortions on timescales comparable to their periods. Nevertheless, turbulent fluctuations are small deep inside the inertial range. Their energy density is less than that of the background field by a factor θ^2(k⊥) ≪ 1.
4) MHD cascades are best understood geometrically. Wave packets suffer distortions as they move along magnetic field lines perturbed by counter propagating waves. Field lines perturbed by unidirectional waves map planes perpendicular to the local field into each other. Shear Alfvén waves are responsible for the mapping's shear and slow waves for its dilatation. The amplitude of the former exceeds that of the latter by 1/θ(k⊥) which accounts for dominance of the shear Alfvén waves in controlling the cascade dynamics.
5) Passive scalars mixed by MHD turbulence adopt the same power spectrum as the velocity and magnetic field perturbations.
6) Decaying MHD turbulence is unstable to an increase of the imbalance between the flux of waves propagating in opposite directions along the magnetic field. Forced MHD turbulence displays order unity fluctuations with respect to the balanced state if excited at low k by δ(t) correlated forcing. It appears to be statistically stable to the unlimited growth of imbalance.
7) Gradients of the dynamic variables are focused into sheets aligned with the magnetic field whose thickness is comparable to the dissipation scale. Sheets formed by oppositely directed waves are uncorrelated. We suspect that these are vortex sheets which the mean magnetic field prevents from rolling up.
8) Items (1)-(5) lend support to the model of strong MHD turbulence put forth by Goldreich and Sridhar (1995, 1997). Results from our simulations are also consistent with the GS prediction γ = 2/3. The sole notable discrepancy is that the 1D power-law spectra, E(k⊥) ∝ k⊥^(-α), determined from our simulations exhibit α ≈ 3/2, whereas the GS model predicts α = 5/3.
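The pseudo-spectral approach underlying such simulations can be illustrated in miniature: differentiate a periodic field by transforming to Fourier space, multiplying by ik, and transforming back. A naive O(N²) DFT is used below for self-containment; actual MHD codes use FFTs and are far more elaborate:

```python
import math, cmath

# Spectral differentiation of a periodic field on N grid points:
# transform, multiply mode k by i*k, inverse transform.

N = 16
xs = [2 * math.pi * j / N for j in range(N)]
f = [math.sin(x) for x in xs]                 # field; exact derivative is cos

def dft(a):
    n = len(a)
    return [sum(a[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(A):
    n = len(A)
    return [sum(A[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

F = dft(f)
# Signed wavenumbers 0..N/2-1, (Nyquist zeroed), -N/2+1..-1
wavenumbers = list(range(N // 2)) + [0] + list(range(-N // 2 + 1, 0))
dF = [1j * k * Fk for k, Fk in zip(wavenumbers, F)]
df = [c.real for c in idft(dF)]               # spectral derivative of f

max_err = max(abs(d - math.cos(x)) for d, x in zip(df, xs))
```

For a band-limited field such as sin(x) the spectral derivative is exact to roundoff, which is the accuracy property that makes pseudo-spectral methods attractive for turbulence simulations.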
Abstract:
This study proposes a wastewater electrolysis cell (WEC) for on-site treatment of human waste coupled with decentralized molecular H2 production. The core of the WEC includes mixed metal oxide anodes functionalized with bismuth-doped TiO2 (BiOx/TiO2). The BiOx/TiO2 anode shows reliable electro-catalytic activity to oxidize Cl- to reactive chlorine species (RCS), which degrade environmental pollutants including chemical oxygen demand (COD), protein, NH4+, urea, and total coliforms. The WEC experiments for treatment of various kinds of synthetic and real wastewater demonstrate sufficient water quality of effluent for reuse for toilet flushing and environmental purposes. Cathodic reduction of water and protons on stainless steel cathodes produced molecular H2 with moderate levels of current and energy efficiency. This thesis presents a comprehensive environmental analysis together with kinetic models to provide an in-depth understanding of reaction pathways mediated by the RCS and the effects of key operating parameters. The latter part of this thesis is dedicated to bilayer hetero-junction anodes which show enhanced generation efficiency of RCS and long-term stability.
Chapter 2 describes the reaction pathway and kinetics of urea degradation mediated by electrochemically generated RCS. The urea oxidation involves chloramines and chlorinated urea as reaction intermediates, for which the mass/charge balance analysis reveals that N2 and CO2 are the primary products. Chapter 3 investigates direct-current and photovoltaic powered WEC for domestic wastewater treatment, while Chapter 4 demonstrates the feasibility of the WEC to treat model septic tank effluents. The results in Chapters 2 and 3 corroborate the active roles of chlorine radicals (Cl•/Cl2-•) based on iR-compensated anodic potential (thermodynamic basis) and enhanced pseudo-first-order rate constants (kinetic basis). The effects of operating parameters (anodic potential and [Cl-] in Chapter 3; influent dilution and anaerobic pretreatment in Chapter 4) on the rate and current/energy efficiency of pollutant degradation and H2 production are thoroughly discussed based on robust kinetic models. Chapter 5 reports the generation of RCS on Ir0.7Ta0.3Oy/BixTi1-xOz hetero-junction anodes with enhanced rate, current efficiency, and long-term stability compared to the Ir0.7Ta0.3Oy anode. The effects of surficial Bi concentration are interrogated, focusing on relative distributions between surface-bound hydroxyl radical and higher oxide.
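The pseudo-first-order kinetics invoked here mean C(t) = C0·exp(−k_obs·t), so k_obs is the slope of −ln(C/C0) versus t. A sketch with synthetic data (the rate constant below is hypothetical, not one of the thesis's measurements):

```python
import math

# Pseudo-first-order decay: C(t) = C0 * exp(-k_obs * t), so ln(C/C0)
# is linear in t with slope -k_obs.  Synthetic data with a hypothetical
# k_obs = 0.12 min^-1 illustrate the extraction of the rate constant.

k_true, C0 = 0.12, 50.0
ts = [0.0, 5.0, 10.0, 20.0, 30.0, 45.0, 60.0]
Cs = [C0 * math.exp(-k_true * t) for t in ts]

# Least-squares slope of ln(C/C0) versus t (line through the origin).
ys = [math.log(c / C0) for c in Cs]
k_obs = -sum(t * y for t, y in zip(ts, ys)) / sum(t * t for t in ts)
```

Comparing fitted k_obs values across operating conditions (anodic potential, [Cl-]) is what provides the kinetic basis mentioned above.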
Abstract:
The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.
Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.
This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.
Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed, but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm Algorithm 2 is based on model-predictive control: Algorithm 2 uses updated predictions on renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes 0 power at the current time step, and its total energy consumption equals the expectation of future deferrable load total energy request.
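The flattening objective behind such scheduling can be sketched with the classical valley-filling solution: spread the deferrable energy so net demand is raised to a common water level. This simplified single-load, network-free example is illustrative only; Algorithms 1 and 2 themselves are distributed and more general, and the base-demand numbers below are hypothetical:

```python
# Valley filling: allocate a fixed deferrable energy budget over T slots
# so that net demand (base demand net of renewables, plus the scheduled
# load) is as flat as possible.  The optimum charges every slot up to a
# common water level L, found here by bisection.

def valley_fill(base, energy, lo=0.0, hi=None, iters=60):
    hi = hi if hi is not None else max(base) + energy
    for _ in range(iters):                # bisection on the water level L
        mid = (lo + hi) / 2
        used = sum(max(0.0, mid - b) for b in base)
        if used < energy:
            lo = mid
        else:
            hi = mid
    L = (lo + hi) / 2
    return [max(0.0, L - b) for b in base]

base_demand = [3.0, 1.0, 0.5, 2.0, 4.0]   # hypothetical, per slot
schedule = valley_fill(base_demand, energy=4.0)
net = [b + s for b, s in zip(base_demand, schedule)]
```

Every slot that receives deferrable energy ends up at the same net level (2.5 here), while slots already above the water level are left alone.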
Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge to the load control problem since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and the other seeking a locally optimal load schedule.
To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but numerically is much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived with the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more than 70x speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
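The lossless, balanced linear approximation referenced is in the spirit of the linearized DistFlow equations, in which the squared voltage magnitude drops linearly along each line. A single-phase sketch on a hypothetical 3-bus feeder (all line and load data invented for illustration):

```python
# Linearized (lossless) DistFlow on a hypothetical single-phase radial
# feeder 0-1-2: with losses neglected, the squared voltage magnitude
# drops linearly with the power carried by each line,
#   v_j = v_i - 2 * (r_ij * P_ij + x_ij * Q_ij)   (per unit),
# and the line flow is just the sum of downstream demands.

lines = [(0.01, 0.02), (0.015, 0.03)]    # (r, x) of lines 0-1 and 1-2, p.u.
loads = [(0.5, 0.2), (0.3, 0.1)]         # (P, Q) demand at buses 1 and 2

v = [1.0]                                 # squared voltage at the substation
for i, (r, x) in enumerate(lines):
    P = sum(p for p, _ in loads[i:])      # lossless flow on line i
    Q = sum(q for _, q in loads[i:])
    v.append(v[-1] - 2 * (r * P + x * Q))

voltages = [vi ** 0.5 for vi in v]        # per-unit voltage magnitudes
```

A model of this form is linear in the load injections, which is what makes the gradients in such an algorithm cheap to estimate.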
Abstract:
Electronic structures and dynamics are the key to linking the material composition and structure to functionality and performance.
An essential issue in developing semiconductor devices for photovoltaics is to design materials with optimal band gaps and relative positioning of band levels. Approximate DFT methods have been justified for predicting band gaps from KS/GKS eigenvalues, but the accuracy depends decisively on the choice of XC functional. We show here that for CuInSe2 and CuGaSe2, the parent compounds of the promising CIGS solar cells, conventional LDA and GGA obtain gaps of 0.0-0.01 and 0.02-0.24 eV (versus experimental values of 1.04 and 1.67 eV), while the historically first global hybrid functional, B3PW91, is surprisingly the best, with band gaps of 1.07 and 1.58 eV. Furthermore, we show that for 27 related binary and ternary semiconductors, B3PW91 predicts gaps with a MAD of only 0.09 eV, substantially better than all modern hybrid functionals, including B3LYP (MAD of 0.19 eV) and the screened hybrid functional HSE06 (MAD of 0.18 eV).
The laboratory performance of CIGS solar cells (> 20% efficiency) makes them promising candidate photovoltaic devices. However, there remains little understanding of how defects at the CIGS/CdS interface affect the band offsets and interfacial energies, and hence the performance of manufactured devices. To determine these relationships, we use the B3PW91 hybrid functional of DFT with the AEP method that we validate to provide very accurate descriptions of both band gaps and band offsets. This confirms the weak dependence of band offsets on surface orientation observed experimentally. We predict that the CBO of perfect CuInSe2/CdS interface is large, 0.79 eV, which would dramatically degrade performance. Moreover we show that band gap widening induced by Ga adjusts only the VBO, and we find that Cd impurities do not significantly affect the CBO. Thus we show that Cu vacancies at the interface play the key role in enabling the tunability of CBO. We predict that Na further improves the CBO through electrostatically elevating the valence levels to decrease the CBO, explaining the observed essential role of Na for high performance. Moreover we find that K leads to a dramatic decrease in the CBO to 0.05 eV, much better than Na. We suggest that the efficiency of CIGS devices might be improved substantially by tuning the ratio of Na to K, with the improved phase stability of Na balancing phase instability from K. All these defects reduce interfacial stability slightly, but not significantly.
A number of exotic structures have been formed through high pressure chemistry, but applications have been hindered by difficulties in recovering the high pressure phase to ambient conditions (i.e., one atmosphere and room temperature). Here we use dispersion-corrected DFT (PBE-ulg flavor) to predict that above 60 GPa the most stable form of N2O (the laughing gas in its molecular form) is a 1D polymer with an all-nitrogen backbone analogous to cis-polyacetylene in which alternate N are bonded (ionic covalent) to O. The analogous trans-polymer is only 0.03-0.10 eV/molecular unit less stable. Upon relaxation to ambient conditions both polymers relax below 14 GPa to the same stable non-planar trans-polymer, accompanied by possible electronic structure transitions. The predicted phonon spectrum and dissociation kinetics validate the stability of this trans-poly-NNO at ambient conditions, which has potential applications as a new type of conducting polymer with all-nitrogen chains and as a high-energy oxidizer for rocket propulsion. This work illustrates in silico materials discovery particularly in the realm of extreme conditions.
Modeling non-adiabatic electron dynamics has been a long-standing challenge for computational chemistry and materials science, and the eFF method presents a cost-efficient alternative. However, due to the deficiency of FSG representation, eFF is limited to low-Z elements with electrons of predominant s-character. To overcome this, we introduce a formal set of ECP extensions that enable accurate description of p-block elements. The extensions consist of a model representing the core electrons with the nucleus as a single pseudo particle represented by FSG, interacting with valence electrons through ECPs. We demonstrate and validate the ECP extensions for complex bonding structures, geometries, and energetics of systems with p-block character (C, O, Al, Si) and apply them to study materials under extreme mechanical loading conditions.
Despite its success, the eFF framework has some limitations, originating from both the design of the Pauli potentials and the FSG representation. To overcome these, we develop a new framework of two-level hierarchy that is a more rigorous and accurate successor to the eFF method. The fundamental level, GHA-QM, is based on a new set of Pauli potentials that renders exact QM level of accuracy for any FSG-represented electron system. To achieve this, we start with exactly derived energy expressions for the same-spin electron pair, and fit a simple functional form, inspired by DFT, against open singlet electron pair curves (H2 systems). Symmetric and asymmetric scaling factors are then introduced at this level to recover the QM total energies of multiple electron pair systems from the sum of local interactions. To complement the imperfect FSG representation, the AMPERE extension is implemented, aiming to embed the interactions associated with both the cusp condition and explicit nodal structures. The whole GHA-QM+AMPERE framework is tested on the H element, and the preliminary results are promising.
Abstract:
The buckling of axially compressed cylindrical shells and externally pressurized spherical shells is extremely sensitive to even very small geometric imperfections. In practice this issue is addressed by either using overly conservative knockdown factors, while keeping perfect axial or spherical symmetry, or adding closely and equally spaced stiffeners on shell surface. The influence of imperfection-sensitivity is mitigated, but the shells designed from these approaches are either too heavy or very expensive and are still sensitive to imperfections. Despite their drawbacks, these approaches have been used for more than half a century.
This thesis proposes a novel method to design imperfection-insensitive cylindrical shells subject to axial compression. Instead of following the classical paths, which focus on axially symmetric or high-order rotationally symmetric cross-sections, the method in this thesis adopts optimal symmetry-breaking wavy cross-sections (wavy shells). The avoidance of imperfection sensitivity is achieved by searching with an evolutionary algorithm for smooth cross-sectional shapes that maximize the minimum among the buckling loads of geometrically perfect and imperfect wavy shells. It is found that the shells designed through this approach can achieve higher critical stresses and knockdown factors than any previously known monocoque cylindrical shells. It is also found that these shells have superior mass efficiency to almost all previously reported stiffened shells.
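The worst-case search just described can be sketched as a small evolutionary loop that maximizes the minimum buckling load over perfect and perturbed variants of each design. Everything here is an illustrative assumption standing in for the thesis's finite-element analyses: the surrogate objective, the imperfection amplitudes, and the population settings.

```python
import random

def min_buckling_load(shape, surrogate):
    """Worst case over the geometrically perfect shell and imperfect variants.

    `surrogate` stands in for a real buckling analysis; the imperfection
    amplitudes (0.05, 0.10) are arbitrary toy values."""
    perfect = surrogate(shape, imperfection=0.0)
    imperfect = min(surrogate(shape, imperfection=d) for d in (0.05, 0.10))
    return min(perfect, imperfect)

def evolve(surrogate, n_params=4, pop_size=20, generations=50, seed=0):
    """Truncation-selection evolutionary loop maximizing the minimum
    buckling load over a population of cross-section parameter vectors."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 1.0) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by worst-case load, keep the best half as parents.
        ranked = sorted(pop, key=lambda s: min_buckling_load(s, surrogate),
                        reverse=True)
        parents = ranked[: pop_size // 2]
        # Refill the population with Gaussian mutations of random parents.
        children = [[g + rng.gauss(0.0, 0.05) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda s: min_buckling_load(s, surrogate))
```

Maximizing the *minimum* over perfect and imperfect cases is what drives the search toward imperfection-insensitive shapes rather than shapes with a high but fragile perfect-shell load.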
Experimental studies on a design of composite wavy shell obtained through the proposed method are presented in this thesis. A method of making composite wavy shells and a photogrammetry technique for measuring full-field geometric imperfections have been developed. Numerical predictions based on the measured geometric imperfections match remarkably well with the experiments. Experimental results confirm that the wavy shells are not sensitive to imperfections and can carry axial compression with superior mass efficiency.
An efficient computational method for the buckling analysis of corrugated and stiffened cylindrical shells subject to axial compression has also been developed in this thesis. The method modifies the traditional Bloch wave method on the basis of the stiffness matrix method for rotationally periodic structures. A highly efficient algorithm has been developed to implement the modified Bloch wave method. It is applied to buckling analyses of a series of corrugated composite cylindrical shells and a large-scale orthogonally stiffened aluminum cylindrical shell. Numerical examples show that the modified Bloch wave method achieves very high accuracy and requires much less computational time than linear and nonlinear analyses of detailed full finite element models.
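The efficiency of stiffness-matrix methods for rotationally periodic structures comes from the fact that the global matrix is block-circulant, so a discrete Fourier transform decouples it into one small Hermitian block per circumferential harmonic. A minimal numpy sketch, assuming nearest-sector coupling only; the 2x2 blocks are toy stand-ins for real sector stiffness matrices:

```python
import numpy as np

def assemble_full(A, B, n_sectors):
    """Global matrix of a rotationally periodic structure: each sector
    carries diagonal block A and couples to the next sector through B."""
    k = A.shape[0]
    K = np.zeros((n_sectors * k, n_sectors * k))
    for s in range(n_sectors):
        t = (s + 1) % n_sectors
        K[s*k:(s+1)*k, s*k:(s+1)*k] = A
        K[s*k:(s+1)*k, t*k:(t+1)*k] = B
        K[t*k:(t+1)*k, s*k:(s+1)*k] = B.T
    return K

def min_eig_by_harmonics(A, B, n_sectors):
    """Same smallest eigenvalue, harmonic by harmonic: the DFT
    block-diagonalizes the block-circulant matrix into the Hermitian
    blocks K_m = A + B e^{i 2 pi m/N} + B^T e^{-i 2 pi m/N}."""
    eigs = []
    for m in range(n_sectors):
        phase = np.exp(2j * np.pi * m / n_sectors)
        K_m = A + B * phase + B.T * np.conj(phase)
        eigs.append(np.linalg.eigvalsh(K_m).min())
    return min(eigs)
```

The harmonic sweep reproduces the minimum eigenvalue of the full assembly while only ever factoring sector-sized blocks; in a buckling context the same decoupling is applied to the tangent stiffness so the critical load and circumferential wavenumber can be swept cheaply.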
This thesis presents parametric studies on a series of externally pressurized pseudo-spherical shells, i.e., polyhedral shells, including icosahedra, geodesic shells, and triambic icosahedra. Several optimization methods have been developed to further improve the performance of pseudo-spherical shells under external pressure. It has been shown that the buckling pressures of the shell designs obtained from the optimizations are much higher than those of spherical shells and are not sensitive to imperfections.
Abstract:
In Part I, we construct a symmetric stress-energy-momentum pseudo-tensor for the gravitational fields of Brans-Dicke theory, and use it to establish rigorously conserved integral expressions for the energy-momentum P^i and angular momentum J^ik. Application of the two-dimensional surface integrals to Brans's exact static spherical vacuum solution leads to an identification of our conserved mass with the active gravitational mass. Application to the distant fields of an arbitrary stationary source reveals that P^i and J^ik have the same physical interpretation as in general relativity. For gravitational waves whose wavelength is small on the scale of the background radius of curvature, averaging over several wavelengths in the Brill-Hartle-Isaacson manner produces a stress-energy-momentum tensor for gravitational radiation, which may be used to calculate the changes in the P^i and J^ik of the source.
In Part II, we develop strong evidence in favor of a conjecture by Penrose: that, in the Brans-Dicke theory, relativistic gravitational collapse in three dimensions produces black holes identical to those of general relativity. After pointing out that any black hole solution of general relativity also satisfies Brans-Dicke theory, we establish the Schwarzschild and Kerr geometries as the only possible spherically and axially symmetric black hole exteriors, respectively. Also, we show that a Schwarzschild geometry is necessarily formed in the collapse of an uncharged sphere.
Appendices discuss relationships among relativistic gravity theories and an example of a theory in which black holes do not exist.
Abstract:
I. Introductory Remarks
A brief discussion of the overall organization of the thesis is presented along with a discussion of the relationship between this thesis and previous work on the spectroscopic properties of benzene.
II. Radiationless Transitions and Line Broadening
Radiationless rates have been calculated for the 3B1u→1A1g transitions of benzene and perdeuterobenzene, as well as for the 1B2u→1A1g transition of benzene. The rates were calculated using a model that treats the radiationless transition as a tunneling process between two multi-dimensional potential surfaces, assuming both harmonic and anharmonic vibrational potentials. Whenever possible, experimental parameters were used in the calculation. To this end we have obtained experimental values for the anharmonicities of the carbon-carbon and carbon-hydrogen vibrations and for the size of the lowest triplet state of benzene. The use of the breakdown of the Born-Oppenheimer approximation in describing radiationless transitions is critically examined, and it is concluded that Herzberg-Teller vibronic coupling is 100 times more efficient at inducing radiationless transitions than the breakdown of the Born-Oppenheimer approximation.
The results of the radiationless transition rate calculation are used to calculate line broadening in several of the excited electronic states of benzene. The calculated line broadening in all cases is in qualitative agreement with experimental line widths.
III. 3B1u←1A1g Absorption Spectra
The 3B1u←1A1g absorption spectra of C6H6 and C6D6 at 4.2 K have been obtained at high resolution using the phosphorescence photoexcitation method. The spectrum exhibits very clear evidence of a pseudo-Jahn-Teller distortion of the normally hexagonal benzene molecule upon excitation to the triplet state. Factor-group splittings of the 0–0 and 0–0 + ν exciton bands have also been observed. The position of the mean of the 0–0 exciton band of C6H6, when compared to the phosphorescence origin of a C6H6 guest in a C6D6 host crystal, indicates that the “static” intermolecular interactions between guest and host are different for C6H6 and C6D6. Further investigation of this difference using the currently accepted theory of isotopic mixed crystals indicates that there is a 2 cm^-1 shift of the ideal mixed-crystal level per deuterium atom. This shift is observed for both the singlet and triplet states of benzene.
IV. 3E1u←1A1g Absorption Spectra
The 3E1u←1A1g absorption spectra of C6H6 and C6D6 at 4.2 K have been obtained using the phosphorescence photoexcitation technique. In both cases the spectrum is broad and structureless, as would be expected from the line broadening calculations.
Abstract:
Acetyltransferases and deacetylases catalyze, respectively, the addition of acetyl groups to and their removal from the epsilon-amino group of protein lysine residues. This modification can affect the function of a protein through several means, including the recruitment of specific binding partners called acetyl-lysine readers. Acetyltransferases, deacetylases, and acetyl-lysine readers have emerged as crucial regulators of biological processes and prominent targets for the treatment of human disease. This work describes a combination of structural, biochemical, biophysical, cell-biological, and organismal studies undertaken on a set of proteins that cumulatively cover all steps of the acetylation process: the acetyltransferase MEC-17, the deacetylase SIRT1, and the acetyl-lysine reader DPF2. Tubulin acetylation by MEC-17 is associated with stable, long-lived microtubule structures. We determined the crystal structure of the catalytic domain of human MEC-17 in complex with the cofactor acetyl-CoA. The structure, in combination with an extensive enzymatic analysis of MEC-17 mutants, identified residues for cofactor and substrate recognition and activity. A large, evolutionarily conserved hydrophobic surface patch distal to the active site was shown to be necessary for catalysis, suggesting that specificity is achieved by interactions with the alpha-tubulin substrate that extend outside of the modified surface loop. Experiments in C. elegans showed that while MEC-17 is required for touch sensitivity, MEC-17 enzymatic activity is dispensable for this behavior. SIRT1 deacetylates a wide range of substrates, including p53, NF-kappaB, FOXO transcription factors, and PGC-1-alpha, with roles in cellular processes ranging from energy metabolism to cell survival. SIRT1 activity is uniquely controlled by a C-terminal regulatory segment (CTR).
Here we present crystal structures of the catalytic domain of human SIRT1 in complex with the CTR, in an apo form and in complex with a cofactor and a pseudo-substrate peptide. The catalytic domain adopts the canonical sirtuin fold. The CTR forms a beta-hairpin structure that complements the beta-sheet of the NAD^+-binding domain, covering an essentially invariant, hydrophobic surface. A comparison of the apo and cofactor-bound structures revealed conformational changes through catalysis, including a rotation of a smaller subdomain with respect to the larger NAD^+-binding subdomain. A biochemical analysis identified key residues in the active site, an inhibitory role for the CTR, and distinct structural features of the CTR that mediate binding and inhibition of the SIRT1 catalytic domain. DPF2 represses myeloid differentiation in acute myelogenous leukemia. Finally, we solved the crystal structure of the tandem PHD domain of human DPF2. We showed that DPF2 preferentially binds H3 tail peptides acetylated at Lys14, and binds H4 tail peptides with no preference for acetylation state. Through a structural and mutational analysis we identified the molecular basis of histone recognition. We propose a model for the role of DPF2 in AML and identify the DPF2 tandem PHD finger domain as a promising novel target for anti-leukemia therapeutics.