12 results for Expected Cost
in CaltechTHESIS
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; well-known examples include the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs), by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any specific local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function nor the restriction of budget-balance that limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization—all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose—they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can tradeoff budget-balance with computational tractability in deciding which rule to implement.
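As a toy illustration (not from the thesis), the Shapley value distributes the welfare generated at a resource by averaging each agent's marginal contribution over every order in which the agents could join; a minimal sketch for a hypothetical symmetric local welfare function:

```python
from itertools import permutations

def shapley(agents, welfare):
    """Shapley value: average each agent's marginal contribution
    to the welfare over all join orders of the agents."""
    values = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        coalition = set()
        for a in order:
            before = welfare(frozenset(coalition))
            coalition.add(a)
            values[a] += welfare(frozenset(coalition)) - before
    return {a: v / len(orders) for a, v in values.items()}

# Hypothetical local welfare with diminishing returns in the
# number of agents sharing the resource (illustrative numbers).
w = {0: 0.0, 1: 6.0, 2: 10.0, 3: 12.0}
welfare = lambda S: w[len(S)]
print(shapley({'a', 'b', 'c'}, welfare))
```

Budget balance holds by construction: the shares sum to the welfare of the full agent set (here 12, split as 4 each by symmetry). GWSV rules generalize this average by biasing it with agent-specific weights.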
Abstract:
Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.
When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
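For context, the classical Hoeffding bound that these relaxations are benchmarked against has a simple closed form; a minimal sketch, with illustrative parameter values:

```python
import math

def hoeffding_bound(t, intervals):
    """Hoeffding's inequality for independent X_i bounded in [a_i, b_i]:
    P(sum X_i - E[sum X_i] >= t) <= exp(-2 t^2 / sum (b_i - a_i)^2)."""
    denom = sum((b - a) ** 2 for a, b in intervals)
    return math.exp(-2.0 * t ** 2 / denom)

# Illustrative: 10 variables each bounded in [0, 1], deviation t = 2.
print(hoeffding_bound(2.0, [(0.0, 1.0)] * 10))  # exp(-0.8) ~ 0.449
```

The OUQ relaxations described above aim to beat this generic bound by exploiting the extra distributional information that Hoeffding's inequality ignores.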
Abstract:
This thesis describes engineering applications that come from extending seismic networks into building structures. The proposed applications will benefit from the data of newly developed crowd-sourced seismic networks, which are composed of low-cost accelerometers. An overview of the Community Seismic Network and its earthquake detection method is presented. In the structural array components of crowd-sourced seismic networks, there may be instances in which a single seismometer is the only data source available from a building. A simple prismatic Timoshenko beam model with soil-structure interaction (SSI) is developed to approximate mode shapes of buildings using natural frequency ratios. A closed form solution with complete vibration modes is derived. In addition, a new method to rapidly estimate the total displacement response of a building based on limited observational data, in some cases from a single seismometer, is presented. The total response of a building is modeled by the combination of the initial vibrating motion due to an upward traveling wave and the subsequent motion as the low-frequency resonant mode response. Furthermore, the expected shaking intensities in tall buildings will be significantly different from those on the ground during earthquakes. Examples are included to estimate the characteristics of shaking that can be expected in mid-rise to high-rise buildings. Development of engineering applications (e.g., human comfort prediction and automated elevator control) for earthquake early warning systems using a probabilistic framework and statistical learning techniques is addressed.
Abstract:
The negative impacts of ambient aerosol particles, or particulate matter (PM), on human health and climate are well recognized. However, owing to the complexity of aerosol particle formation and chemical evolution, emissions control strategies remain difficult to develop in a cost effective manner. In this work, three studies are presented to address several key issues currently stymieing California's efforts to continue improving its air quality.
Gas-phase organic mass (GPOM) and CO emission factors are used in conjunction with measured enhancements in oxygenated organic aerosol (OOA) relative to CO to quantify the significant lack of closure between expected and observed organic aerosol concentrations attributable to fossil-fuel emissions. Two possible conclusions emerge from the analysis to yield consistency with the ambient organic data: (1) vehicular emissions are not a dominant source of anthropogenic fossil SOA in the Los Angeles Basin, or (2) the ambient SOA mass yields used to determine the SOA formation potential of vehicular emissions are substantially higher than those derived from laboratory chamber studies. Additional laboratory chamber studies confirm that, owing to vapor-phase wall loss, the SOA mass yields currently used in virtually all 3D chemical transport models are biased low by as much as a factor of 4. Furthermore, predictions from the Statistical Oxidation Model suggest that this bias could be as high as a factor of 8 if the influence of the chamber walls could be removed entirely.
Once vapor-phase wall loss has been accounted for in a new suite of laboratory chamber experiments, the SOA parameterizations within atmospheric chemical transport models should also be updated. To address the numerical challenges of implementing the next generation of SOA models in atmospheric chemical transport models, a novel mathematical framework, termed the Moment Method, is designed and presented. Assessment of the Moment Method strengths and weaknesses provide valuable insight that can guide future development of SOA modules for atmospheric CTMs.
Finally, regional inorganic aerosol formation and evolution is investigated via detailed comparison of predictions from the Community Multiscale Air Quality (CMAQ version 4.7.1) model against a suite of airborne and ground-based meteorological measurements, gas- and aerosol-phase inorganic measurements, and black carbon (BC) measurements over Southern California during the CalNex field campaign in May/June 2010. Results suggest that continuing to target sulfur emissions in the hope of reducing ambient PM concentrations may not be the most effective strategy for Southern California. Instead, targeting dairy emissions is likely to be an effective strategy for substantially reducing ammonium nitrate concentrations in the eastern part of the Los Angeles Basin.
Abstract:
This work concerns itself with the possibility of solutions, both cooperative and market based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.
Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.
However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of the implementation problems and in part exacerbated others. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment with the potential for inefficiency arising from the transactions costs enumerated above and the illiquidity induced by the myriad assets and relatively few participants in this market.
It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.
With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits declines toward the level of historical emissions, prices increase. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to mis-report emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.
Abstract:
Earthquake early warning (EEW) systems have been rapidly developing over the past decade. The Japan Meteorological Agency (JMA) had an EEW system operating during the 2011 M9 Tohoku earthquake in Japan, and this increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before it is practical, the availability of shorter-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system utilizes the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention in activating mitigation actions, and this must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach along with machine learning techniques and decision theories from economics to improve different aspects of EEW operation, including extending it to engineering applications.
Existing EEW systems are often based on a deterministic approach. Often, they assume that only a single event occurs within a short period of time, which led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm based on an existing deterministic model to extend the EEW system to the case of concurrent events, which are often observed during the aftershock sequence after a large earthquake.
To overcome the challenge of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in EEW information and in the decision process is used. This approach is called Performance-Based Earthquake Early Warning, and is based on the PEER Performance-Based Earthquake Engineering method. Use of surrogate models is suggested to improve computational efficiency. Also, new models are proposed to add the influence of lead time into the cost-benefit analysis. For example, a value of information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction of the uncertainty of EEW information in the next update. Two practical examples, evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as the case of multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.
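As a schematic of the cost-benefit trade-off such a framework formalizes (the numbers and the simple linear loss model are illustrative, not from the thesis), a mitigation action is worth triggering when its expected cost undercuts inaction:

```python
def should_activate(p_damage, loss_if_damage, action_cost, residual_factor):
    """Trigger a mitigation action iff its expected cost is lower than
    doing nothing. residual_factor is the fraction of the loss that
    remains even after the action is taken (hypothetical loss model)."""
    cost_no_action = p_damage * loss_if_damage
    cost_action = action_cost + p_damage * residual_factor * loss_if_damage
    return cost_action < cost_no_action

# Illustrative numbers: 30% chance of damaging shaking, $100k loss,
# $5k to halt elevators, and the action averts 80% of the loss.
print(should_activate(0.30, 100_000, 5_000, 0.20))  # True: act
print(should_activate(0.01, 100_000, 5_000, 0.20))  # False: too unlikely
```

In the ePAD setting the probability and loss terms would themselves carry EEW uncertainty, and the lead-time and value-of-information models described above modify this basic comparison.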
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise, or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD, which leads to orders of magnitude speedup over other methods.
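The sequential belief update at the heart of such adaptive designs can be sketched in a few lines (the two theories and their noisy response probabilities below are hypothetical, not taken from the experiments):

```python
def update_posterior(prior, likelihoods, observed_choice):
    """Bayes' rule over competing theories given one observed choice."""
    post = {t: prior[t] * likelihoods[t][observed_choice] for t in prior}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

# Two hypothetical theories; each assigns a probability to the subject
# choosing lottery 'A' over 'B' (a noisy-response model, so predictions
# are probabilities rather than deterministic 0/1 choices).
prior = {'expected_value': 0.5, 'prospect_theory': 0.5}
likelihoods = {'expected_value': {'A': 0.9, 'B': 0.1},
               'prospect_theory': {'A': 0.2, 'B': 0.8}}
print(update_posterior(prior, likelihoods, 'B'))
```

An adaptive design like BROAD would then score each candidate lottery pair by how much its expected response separates the surviving theories, and present the highest-scoring test next.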
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
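The time inconsistency that separates hyperbolic from exponential discounting can be seen in a small sketch (the amounts, delays, and parameter values are illustrative):

```python
def disc_exp(d, delta=0.9):
    """Exponential discounting: value falls by a constant factor per period."""
    return delta ** d

def disc_hyp(d, k=1.0):
    """Hyperbolic discounting: 1 / (1 + k * delay)."""
    return 1.0 / (1.0 + k * d)

def prefers_later(disc, t):
    """Does $100 at time t+2 beat $60 at time t under discount fn `disc`?"""
    return 100 * disc(t + 2) > 60 * disc(t)

# Exponential: the preference never flips as both payoffs recede in time.
print([prefers_later(disc_exp, t) for t in (0, 10)])  # [True, True]
# Hyperbolic: prefers the sooner payoff up close, the later one from afar.
print([prefers_later(disc_hyp, t) for t in (0, 10)])  # [False, True]
```

The reversal in the hyperbolic case is exactly the temporal choice inconsistency referred to above: moving both options further into the future flips the ranking, which never happens under exponential discounting.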
We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.
The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969] and [(670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and certain simple physical assumptions (e.g., that emissions are reduced proportionately at all points in space and time). For NO2 (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles (55 in 1969) can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons/day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year of ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively (at the 1969 NOx emission level).
The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).
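The final step, picking the cheapest emission level that meets an air-quality target, can be sketched with a toy lookup (the costs and violation counts below are illustrative, loosely shaped like the abstract's RHC/ozone figures, not the thesis's actual data):

```python
# Hypothetical table: RHC emission level (tons/day) -> (annualized control
# cost in $M/yr, expected ozone violation days/yr at that level).
rhc_options = {700: (20, 75), 450: (60, 30), 300: (120, 10), 150: (230, 0)}

def least_cost(max_violation_days):
    """Cheapest emission level whose air quality meets the target."""
    feasible = [(cost, level) for level, (cost, days) in rhc_options.items()
                if days <= max_violation_days]
    return min(feasible) if feasible else None

print(least_cost(30))  # (60, 450): abate to 450 tons/day for $60M/yr
```

A real version would combine the control cost-emission curve with the air quality-emission curve exactly as the graphical solution above does, but over both pollutants at once.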
Abstract:
Successful management has been defined as the art of spending money wisely and well. Profits may not be the be-all and end-all of business, but they are certainly the test of practicality. Everything worthwhile should pay for itself. One proposal is no better than another, except as in the working-out it yields better results.
Abstract:
This thesis consists of three essays in the areas of political economy and game theory, unified by their focus on the effects of pre-play communication on equilibrium outcomes.
Communication is fundamental to elections. Chapter 2 extends canonical voter turnout models, where citizens, divided into two competing parties, choose between costly voting and abstaining, to include any form of communication, and characterizes the resulting set of Aumann's correlated equilibria. In contrast to previous research, high-turnout equilibria exist in large electorates and uncertain environments. This difference arises because communication can coordinate behavior in such a way that citizens find it incentive compatible to follow their correlated signals to vote more. The equilibria have expected turnout of at least twice the size of the minority for a wide range of positive voting costs.
In Chapter 3 I introduce a new equilibrium concept, called subcorrelated equilibrium, which fills the gap between Nash and correlated equilibrium, extending the latter to multiple mediators. Subcommunication equilibrium similarly extends communication equilibrium for incomplete information games. I explore the properties of these solutions and establish an equivalence between a subset of subcommunication equilibria and Myerson's quasi-principals' equilibria. I characterize an upper bound on expected turnout supported by subcorrelated equilibrium in the turnout game.
Chapter 4, co-authored with Thomas Palfrey, reports a new study of the effect of communication on voter turnout using a laboratory experiment. Before voting occurs, subjects may engage in various kinds of pre-play communication through computers. We study three communication treatments: No Communication, a control; Public Communication, where voters exchange public messages with all other voters; and Party Communication, where messages are exchanged only within one's own party. Our results point to a strong interaction effect between the form of communication and the voting cost. With a low voting cost, party communication increases turnout, while public communication decreases turnout. The data are consistent with correlated equilibrium play. With a high voting cost, public communication increases turnout. With communication, we find essentially no support for the standard Nash equilibrium turnout predictions.
Abstract:
We carried out quantum mechanics (QM) studies aimed at improving the performance of hydrogen fuel cells. This led to predictions of improved materials, some of which were subsequently validated with experiments by our collaborators.
In part I, the challenge was to find a replacement for the Pt cathode that would lead to improved performance for the Oxygen Reduction Reaction (ORR) while remaining stable under operational conditions and decreasing cost. Our design strategy was to find an alloy with composition Pt3M that would lead to surface segregation such that the top layer would be pure Pt, with the second and subsequent layers richer in M. Under operating conditions we expect the surface to have significant O and/or OH chemisorbed on the surface, and hence we searched for M that would remain segregated under these conditions. Using QM we examined surface segregation for 28 Pt3M alloys, where M is a transition metal. We found that only Pt3Os and Pt3Ir showed significant surface segregation when O and OH are chemisorbed on the catalyst surfaces. This result indicates that Pt3Os and Pt3Ir favor formation of a Pt-skin surface layer structure that would resist acidic electrolyte corrosion in fuel cell operating environments. We chose to focus on Os because the phase diagram for Pt-Ir indicated that Pt-Ir could not form a homogeneous alloy at lower temperature. To determine the performance for ORR, we used QM to examine all intermediates, reaction pathways, and reaction barriers involved in the processes by which protons from the anode react with O2 to form H2O. These QM calculations used our Poisson-Boltzmann implicit solvation model to include the effects of the solvent (water with dielectric constant 78, at pH 7 and 298 K). We found that the rate-determining step (RDS) was the Oad hydration reaction (Oad + H2Oad -> OHad + OHad) in both cases, but that the barrier of 0.50 eV for pure Pt is reduced to 0.48 eV for Pt3Os, which at 80 degrees C would increase the rate by 218%.
We collaborated with Pu-Wei Wu's group to carry out experiments, where we found that the dealloying-treated Pt2Os catalyst showed two-fold higher activity at 25 degrees C than pure Pt and that the alloy had 272% improved stability, validating our theoretical predictions.
We also carried out similar QM studies followed by experimental validation for the Os/Pt core-shell catalyst fabricated by the underpotential deposition (UPD) method. The QM results indicated that the RDS for ORR is a compromise between the OOH formation step (0.37 eV for Pt, 0.23 eV for Pt2ML/Os core-shell) and the H2O formation step (0.32 eV for Pt, 0.22 eV for Pt2ML/Os core-shell). We found that Pt2ML/Os has the highest activity (compared to pure Pt and to the Pt3Os alloy) because the 0.37 eV barrier decreases to 0.23 eV. To understand what aspects of the core-shell structure lead to this improved performance, we considered the effect on ORR of compressing the alloy slab to the dimensions of pure Pt. However, this had little effect, with the same RDS barrier of 0.37 eV. This shows that the ligand effect (the electronic structure modification resulting from the Os substrate) plays a more important role than the strain effect, and is responsible for the improved activity of the core-shell catalyst. Experimental materials characterization confirms the core-shell structure of our catalyst. The electrochemical experiment for Pt2ML/Os/C showed 3.5 to 5 times better ORR activity at 0.9 V (vs. NHE) in 0.1 M HClO4 solution at 25 degrees C compared to commercially available Pt/C. The excellent correlation between the experimental half-wave potential and the OH binding energies and RDS barriers validates the feasibility of predicting catalyst activity using QM calculations and a simple Langmuir–Hinshelwood model.
In part II, we used QM calculations to study methane steam reforming on Ni-alloy catalyst surfaces for solid oxide fuel cell (SOFC) applications. SOFCs have wide fuel adaptability, but coking and sulfur poisoning reduce their stability. Experimental results suggested that the Ni4Fe alloy improves both activity and stability compared to pure Ni. To understand the atomistic origin of this, we carried out QM calculations on surface segregation and found that the most stable configuration for Ni4Fe has a Fe atom distribution of (0%, 50%, 25%, 25%, 0%) starting at the bottom layer. We calculated that the binding of C atoms on the Ni4Fe surface is 142.9 kcal/mol, which is about 10 kcal/mol weaker than on the pure Ni surface. This weaker C binding energy is expected to make coke formation less favorable, explaining why Ni4Fe has better coking resistance. This result confirms the experimental observation. The reaction energy barriers for CHx decomposition and C binding on various alloy surfaces, Ni4X (X=Fe, Co, Mn, and Mo), showed that Ni4Fe, Ni4Co, and Ni4Mn all have better coking resistance than pure Ni, but that only Ni4Fe and Ni4Mn have (slightly) improved activity compared to pure Ni.
In part III, we used QM to examine proton transport in doped perovskite ceramics. Here we used a 2x2x2 supercell of perovskite with composition Ba8X7M1(OH)1O23, where X=Ce or Zr and M=Y, Gd, or Dy. Thus in each case a 4+ X is replaced by a 3+ M plus a proton on one O. We predicted the barriers for proton diffusion, allowing both intra-octahedron and inter-octahedra proton transfer. Without any restriction, we observed only inter-octahedra proton transfer, with an energy barrier similar to previous computational work but 0.2 eV higher than the experimental result for Y-doped zirconate. When, as one restriction in our calculations, the Odonor-Oacceptor atoms were kept at fixed distances, we found that the barrier differences between cerates/zirconates with various dopants are only 0.02~0.03 eV. To fully address performance one would need to examine proton transfer at grain boundaries, which will require larger-scale ReaxFF reactive dynamics for systems with millions of atoms. The QM calculations used here will be used to train the ReaxFF force field.
Abstract:
Fast radio bursts (FRBs) are a novel type of radio pulse whose physics is not yet understood. Only a handful of FRBs had been detected when we started this project. Taking the scant observations into account, we put physical constraints on FRBs. We excluded proposed Galactic origins for their extraordinarily high dispersion measures (DMs), in particular stellar coronae and HII regions; our work therefore supports an extragalactic origin for FRBs. We show that the resolved scattering tail of FRB 110220 is unlikely to be due to propagation through the intergalactic plasma. Instead the scattering is probably caused by the interstellar medium in the FRB's host galaxy, and indicates that this burst sits in the central region of that galaxy. Pulse durations of order $\ms$ constrain the source sizes of FRBs, implying enormous brightness temperatures and thus coherent emission. Electric fields near FRBs at cosmological distances would be so strong that they could accelerate free electrons from rest to relativistic energies in a single wave period. When we worked on FRBs, it was unclear whether they were genuine astronomical signals as distinct from `perytons', clearly terrestrial radio bursts that share some properties with FRBs. In April 2015, astronomers discovered that perytons were emitted by microwave ovens: radio chirps similar to FRBs were emitted when their doors were opened while they were still heating. Evidence for the astronomical nature of FRBs has strengthened since our paper was published; some bursts have been found to show linear and circular polarization, and Faraday rotation of the linear polarization has also been detected. After we completed our FRB paper, I decided to pause this project because of the lack of observational constraints, but I hope to resume working on FRBs in the near future.
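As a minimal sketch of the dispersion-measure argument (the numbers below are illustrative, not from the thesis): the cold-plasma dispersion delay between two observing frequencies follows the standard formula, and an FRB-like DM produces a sweep of order a second across an L-band receiver, far exceeding what Galactic gas at high latitude can supply.

```python
# Standard cold-plasma dispersion delay:
#   dt [ms] ~ 4.149 * DM [pc cm^-3] * (nu_lo^-2 - nu_hi^-2),  nu in GHz.
K_DM_MS = 4.149  # dispersion constant in ms GHz^2 cm^3 pc^-1

def dispersion_delay_ms(dm_pc_cm3, nu_lo_ghz, nu_hi_ghz):
    """Arrival-time delay (ms) of the low-frequency band edge relative to the high."""
    return K_DM_MS * dm_pc_cm3 * (nu_lo_ghz**-2 - nu_hi_ghz**-2)

# An FRB-like DM of ~1000 pc cm^-3 swept across a 1.2-1.5 GHz band
# is delayed by about a second.
delay = dispersion_delay_ms(1000.0, 1.2, 1.5)
```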
The pulsar triple system, J0337+1715, has its orbital parameters fitted to high accuracy owing to the precise timing of the central $\ms$ pulsar. The two orbits are highly hierarchical, namely $P_{\mathrm{orb,1}}\ll P_{\mathrm{orb,2}}$, where 1 and 2 label the inner and outer white dwarf (WD) companions respectively. Moreover, their orbital planes almost coincide, providing a unique opportunity to study secular interactions associated purely with eccentricity beyond the solar system. Secular interactions involve only effects averaged over many orbits, so each companion can be represented by an elliptical wire with its mass distributed in inverse proportion to its local orbital speed. Generally there exists a mutual torque, which vanishes only when the apsidal lines are parallel or anti-parallel. To maintain either mode, the eccentricity ratio, $e_1/e_2$, must take the proper value so that both apsidal lines precess together. For J0337+1715, $e_1\ll e_2$ for the parallel mode, while $e_1\gg e_2$ for the anti-parallel one. We show that the former precesses $\sim 10$ times slower than the latter. Currently the system is dominated by the parallel mode. Although only a small amount of the anti-parallel mode survives, both eccentricities, especially $e_1$, oscillate on a $\sim 10^3\yr$ timescale; detectable changes would occur within $\sim 1\yr$. We demonstrate that the anti-parallel mode is damped $\sim 10^4$ times faster than its parallel counterpart by any dissipative process that diminishes $e_1$. If the dissipation is tidal damping in the inner WD, we estimate its tidal quality factor ($Q$) to be $\sim 10^6$, a quantity otherwise poorly constrained by observations. However, tidal damping may also have happened during the preceding low-mass X-ray binary (LMXB) phase or during hydrogen thermonuclear flashes. In both of those cases, however, the inner companion fills its Roche lobe and probably suffers mass/angular momentum loss, which might cause $e_1$ to grow rather than decay.
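The parallel/anti-parallel mode decomposition can be illustrated with textbook Laplace-Lagrange secular theory (the coupling coefficients below are invented for illustration, not fitted to the pulsar triple): the eccentricity vectors obey a linear system whose eigenvectors are the two apsidal modes, with same-sign components corresponding to parallel apsidal lines and opposite-sign components to anti-parallel ones, and eigenvalues giving the modes' precession rates.

```python
# Toy sketch of two-body Laplace-Lagrange secular modes. The eccentricity
# vectors obey d/dt(e_j * exp(i*varpi_j)) = i * sum_k A_jk e_k exp(i*varpi_k),
# so eigenvectors of the real 2x2 matrix A are the apsidal modes.
import math

def secular_modes(a11, a12, a21, a22):
    """Return [(precession rate, e1/e2 ratio)] for each eigenmode of A."""
    mean = 0.5 * (a11 + a22)
    disc = math.sqrt((0.5 * (a11 - a22)) ** 2 + a12 * a21)
    modes = []
    for lam in (mean + disc, mean - disc):
        ratio = a12 / (lam - a11)  # eigenvector component ratio e1/e2
        modes.append((lam, ratio))
    return modes

# Invented coefficients with the standard sign pattern (positive diagonal,
# negative off-diagonal), in arbitrary rate units. The faster mode comes
# out anti-parallel (e1/e2 < 0) and the slower one parallel (e1/e2 > 0),
# qualitatively echoing the behavior described above.
modes = secular_modes(2.0, -0.5, -0.1, 1.0)
```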
Several pairs of solar system satellites occupy mean motion resonances (MMRs). We divide these into two groups according to their proximity to exact resonance, measured by the existence of a separatrix in phase space. The MMRs between Io-Europa, Europa-Ganymede and Enceladus-Dione are too distant from exact resonance for a separatrix to appear. A separatrix is present only in the phase spaces of the Mimas-Tethys and Titan-Hyperion MMRs, and their resonant arguments are the only ones to exhibit substantial librations. When a separatrix is present, tidal damping of eccentricity or inclination excites overstable librations that can lead to passage through resonance on the damping timescale. However, we conclude that the librations in the Mimas-Tethys and Titan-Hyperion MMRs are fossils and do not result from overstability.
Rubble piles are common in the solar system. Monolithic elements touch their neighbors in small localized areas, and voids occupy a significant fraction of the volume. In a fluid-free environment, heat cannot conduct through voids; only radiation can transfer energy across them. We model the effective thermal conductivity of a rubble pile and show that it is proportional to the square root of the pressure, $P$, for $P\leq \epsy^3\mu$, where $\epsy$ is the material's yield strain and $\mu$ its shear modulus. Our model provides an excellent fit to the depth dependence of the thermal conductivity in the top $140\,\mathrm{cm}$ of the lunar regolith. It also offers an explanation for the low thermal inertias of rocky asteroids and icy satellites. Lastly, we discuss how rubble piles slow down the cooling of small bodies such as asteroids.
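A minimal numerical sketch of the quoted scaling (the regolith density and normalization below are illustrative assumptions, not thesis values): under lithostatic pressure $P = \rho g z$, a conductivity proportional to $\sqrt{P}$ grows as the square root of depth.

```python
# k_eff proportional to sqrt(P) for P <= eps_y**3 * mu (the regime quoted
# in the text); under gravity P = rho*g*z, so k_eff grows as sqrt(depth).
import math

def conductivity_ratio(P, P_ref):
    """k(P)/k(P_ref) under the sqrt-pressure scaling."""
    return math.sqrt(P / P_ref)

def lithostatic_pressure(depth_m, rho=1800.0, g=1.62):
    """P = rho*g*z for an assumed lunar regolith density and lunar gravity (SI)."""
    return rho * g * depth_m

# Between 10 cm and 140 cm depth the model predicts k grows by sqrt(14) ~ 3.7.
ratio = conductivity_ratio(lithostatic_pressure(1.4), lithostatic_pressure(0.1))
```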
Electromagnetic (EM) follow-up observations of gravitational wave (GW) events will help shed light on the nature of the sources, and more can be learned if the EM follow-up can start as soon as the GW event becomes observable. In this paper, we propose a computationally efficient time-domain algorithm capable of detecting gravitational waves (GWs) from coalescing binaries of compact objects with nearly zero time delay. When the signal is strong enough, our algorithm also has the flexibility to trigger EM observation {\it before} the merger. The key to the efficiency of our algorithm is the use of chains of so-called Infinite Impulse Response (IIR) filters, which filter time-series data recursively. Computational cost is further reduced by a template interpolation technique that requires filtering to be done only for a much coarser template bank than would otherwise be required to recover nearly optimal signal-to-noise ratio. For future detectors with sensitivity extending to lower frequencies, our algorithm's computational cost is shown to increase only modestly compared to the conventional time-domain correlation method. Moreover, at latencies of less than hundreds to thousands of seconds, this method is expected to be computationally more efficient than the straightforward frequency-domain method.
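The building block of such filter chains can be sketched as a single-pole complex IIR filter (parameters below are invented for illustration): each filter correlates the incoming data stream with a decaying complex sinusoid using O(1) work per sample, which is what makes the recursion cheap enough for low-latency operation.

```python
# Single-pole complex IIR recursion:
#   y[n] = a * y[n-1] + x[n],  with  a = exp((1j*omega - 1/tau) * dt),
# which recursively accumulates the correlation of x with a decaying
# complex sinusoid of angular frequency omega and damping time tau.
import cmath
import math

def iir_response(x, omega, tau, dt):
    """Magnitude of the filter output after processing all samples in x."""
    a = cmath.exp((1j * omega - 1.0 / tau) * dt)
    y = 0.0 + 0.0j
    for sample in x:
        y = a * y + sample
    return abs(y)

# A filter tuned to omega responds far more strongly to an on-frequency
# sinusoid than to an off-frequency one -- the basis of detection.
dt, omega, tau = 1e-3, 2 * math.pi * 50.0, 0.1
data_on = [math.sin(omega * n * dt) for n in range(2000)]
data_off = [math.sin(3 * omega * n * dt) for n in range(2000)]
```

In a full pipeline, chains of such filters with slowly varying frequencies track the chirping waveform piecewise, each segment at negligible per-sample cost.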