20 results for Aggregate shocks

in CaltechTHESIS


Relevance: 20.00%

Abstract:

A person living in an industrialized society has almost no choice but to receive information daily with negative implications for himself or others. His attention will often be drawn to the ups and downs of economic indicators or the alleged misdeeds of leaders and organizations. Reacting to new information is central to economics, but economics typically ignores the affective aspect of the response, such as stress or anger. These essays present the results of considering how this affective aspect of the response can influence economic outcomes.

The first chapter presents an experiment in which individuals were presented with information about various non-profit organizations and allowed to take actions that rewarded or punished those organizations. When social interaction was introduced into this environment, an asymmetry between rewarding and punishing appeared. The net effects of punishment became greater and more variable, whereas the effects of reward were unchanged. Individuals were more strongly influenced by negative social information and used that information to target unpopular organizations. These behaviors contributed to an increase in inequality among the outcomes of the organizations.

The second and third chapters present empirical studies of reactions to negative information about local economic conditions. Economic factors are among the most prevalent stressors, and stress is known to have numerous negative effects on health. These chapters document localized, transient effects of the announcement of information about large-scale job losses. News of mass layoffs and shutdowns of large military bases is found to decrease birth weights and gestational ages among babies born in the affected regions. The effect magnitudes are close to those estimated in similar studies of disasters.

Relevance: 10.00%

Abstract:

The general theory of Whitham for slowly-varying non-linear wavetrains is extended to the case where some of the defining partial differential equations cannot be put into conservation form. Typical examples are considered in plasma dynamics and water waves in which the lack of a conservation form is due to dissipation; an additional non-conservative element, the presence of an external force, is treated for the plasma dynamics example. Certain numerical solutions of the water waves problem (the Korteweg-de Vries equation with dissipation) are considered and compared with perturbation expansions about the linearized solution; it is found that the first correction term in the perturbation expansion is an excellent qualitative indicator of the deviation of the dissipative decay rate from linearity.
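
For concreteness, one standard way to add dissipation to the Korteweg-de Vries equation is a Burgers-type diffusive term; the exact dissipative form analyzed in the thesis is not given here, so the following is only an illustrative sketch:

\[
u_t + 6\,u\,u_x + u_{xxx} = \nu\,u_{xx}, \qquad \nu > 0,
\]

where \(\nu\) sets the dissipation strength and \(\nu = 0\) recovers the conservative KdV equation, whose slowly varying wavetrains are covered by Whitham's original theory.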

A method for deriving necessary and sufficient conditions for the existence of a general uniform wavetrain solution is presented and illustrated in the plasma dynamics problem. Peaking of the plasma wave is demonstrated, and it is shown that the necessary and sufficient existence conditions are essentially equivalent to the statement that no wave may have an amplitude larger than the peaked wave.

A new type of fully non-linear stability criterion is developed for the plasma uniform wavetrain. It is shown explicitly that this wavetrain is stable in the near-linear limit. The nature of this new type of stability is discussed.

Steady shock solutions are also considered. By a quite general method, it is demonstrated that the plasma equations studied here have no steady shock solutions whatsoever. A special type of steady shock is proposed, in which a uniform wavetrain joins across a jump discontinuity to a constant state. Such shocks may indeed exist for the Korteweg-de Vries equation, but are barred from the plasma problem because entropy would decrease across the shock front.

Finally, a way of including the Landau damping mechanism in the plasma equations is given. It involves putting in a dissipation term of convolution integral form, and parallels a similar approach of Whitham in water wave theory. An important application of this would be towards resolving long-standing difficulties about the "collisionless" shock.

Relevance: 10.00%

Abstract:

Part I.

We have developed a technique for measuring the depth-time history of rigid body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10^5 g. The technique includes bar-coded projectile, sabot-projectile separation, detection, and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration, and of 0.3 to 0.7 mm in penetration depth. A series of penetration experiments with 4140 steel projectiles into G-mixture mortar targets has been conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.

We report, for the first time, the whole depth-time history of rigid body penetration into brittle materials (the G-mixture mortar) under 10^5 g deceleration. Based on the experimental results, including the penetration depth-time history, the damage of recovered target and projectile materials, and theoretical analysis, we find:

1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.

2. Final penetration depth, Pmax, scales linearly with the initial projectile energy per unit cross-sectional area, es, when targets remain intact after impact. Based on the experimental data on the mortar targets, the relation is Pmax(mm) = 1.15·es (J/mm²) + 16.39.

3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of the target materials under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.

4. Based on the fact that the penetration duration, tmax, increases slowly with es and is approximately independent of projectile radius, the dependence of tmax on projectile length is suggested to be described by tmax(μs) = 2.08·es (J/mm²) + 349.0·m/(πR²), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths; both this fit and the depth fit in point 2 are evaluated numerically in the short sketch following this list.

5. Deduced penetration velocity time histories suggest that the whole penetration history is divided into three stages: (1) an initial stage in which the projectile velocity changes little because of the very small contact area between the projectile and target materials; (2) a steady penetration stage in which the projectile velocity continues to decrease smoothly; (3) a penetration stop stage in which the projectile deceleration rises sharply once the velocity falls to a critical value of ~35 m/s.

6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a(g) = 192.4·v + 1.89 × 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on the target materials during penetration is estimated to be comparable to the shock wave pressure.

7. A similarity of the penetration process is found to be described by a relation between normalized penetration depth, P/Pmax, and normalized penetration time, t/tmax, as P/Pmax = f(t/tmax), where f is a function of t/tmax. After f(t/tmax) is determined using experimental data for projectiles of 150 mm length, the penetration depth time history for projectiles of 100 mm length predicted by this relation is in good agreement with the experimental data. This similarity also predicts that the average deceleration increases with decreasing projectile length, which is verified by the experimental data.

8. Based on the penetration process analysis and the present data, a first principle model for rigid body penetration is suggested. The model incorporates the models for contact area between projectile and target materials, friction coefficient, penetration stop criterion, and normal stress on the projectile surface. The most important assumptions used in the model are: (1) The penetration process can be treated as a series of impact events, therefore, pressure normal to projectile surface is estimated using the Hugoniot relation of target material; (2) The necessary condition for penetration is that the pressure acting on target materials is not lower than the Hugoniot elastic limit; (3) The friction force on projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined based on independent experimental data. The penetration depth time histories predicted from the model are in good agreement with the experimental data.

9. Based on planar impact and previous quasi-static experimental data, the strain-rate dependence of the mortar compressive strength is described by σf/σ0f = exp(0.0905·(log(ε̇/ε̇0))^1.14) in the strain-rate range of 10^-7/s to 10^3/s (σ0f and ε̇0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
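
As a quick numerical illustration of the empirical fits quoted in points 2 and 4 above (the functions and the example projectile values below are ours, not from the thesis):

```python
import math

def p_max_mm(es):
    """Final penetration depth fit from point 2: Pmax(mm) = 1.15*es + 16.39, es in J/mm^2."""
    return 1.15 * es + 16.39

def t_max_us(es, mass_g, radius_mm):
    """Penetration duration fit from point 4: tmax(us) = 2.08*es + 349.0*m/(pi*R^2)."""
    return 2.08 * es + 349.0 * mass_g / (math.pi * radius_mm ** 2)

# Hypothetical projectile: es = 10 J/mm^2, m = 1500 g, R = 20 mm
print(p_max_mm(10.0))                # ~27.9 mm
print(t_max_us(10.0, 1500.0, 20.0))  # ~437 microseconds
```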

Part II.

Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) A ramp elastic precursor has a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s, with the wave velocity decreasing from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) A ramp wave with an amplitude of 2.11 GPa follows the precursor when the peak loading pressure is 8.4 GPa, and the wave velocity drops below the bulk wave velocity in this stage; (3) A shock wave achieving the final shock state forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], when the shock wave pressure is between 6 and 40 GPa for ρ0 = 3.655 g/cm³. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge4+ with O2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with that for vitreous GeO2 demonstrates that the transformations to the rutile structure in both media are similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the shared structure, provides the basis for considering vitreous GeO2 as an analogous material to fused SiO2 under shock loading. Experimental results from the spherical projectile impact demonstrate: (1) The supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa, whereas the supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) In vitreous GeO2, the peak pressure of unsupported shock waves in the phase-transition range (4-15 GPa) decays with propagation distance, x, as ∝ x^(-3.35), close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.
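
As an illustrative check (not stated in the abstract), combining the quoted linear D-u fit with the Rankine-Hugoniot momentum relation P = ρ0·D·u gives the pressure along the Hugoniot:

\[
P = \rho_0 D u = 3.655\,(0.917 + 1.711\,u)\,u \ \ \text{GPa} \quad (u\ \text{in km/s}),
\]

so, for example, a particle velocity of u = 1 km/s corresponds to D ≈ 2.63 km/s and P ≈ 9.6 GPa, which falls within the 6-40 GPa range over which the fit is quoted.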

Relevance: 10.00%

Abstract:

The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their actions or other agents' actions. The optimal response to these shocks will necessarily depend on agents' understanding of how these shocks arise. The economic environment in the first two chapters is analogous to the classic chain store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers the competitive equilibrium price dynamics in an overlapping generations environment when there are supply and demand shocks.

The first chapter is a game-theoretic investigation of a reputation-building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subject to an error process. Inclusion of this belief into the equilibrium calculation provides for a richer class of reputation-building possibilities than when perfect implementation is assumed.

In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model and other models with data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternative models considered are essentially modifications of the standard sequential equilibrium. While some models perform well, in that the nature of the modification seems to explain deviations from the sequential equilibrium, the degree to which these modifications must be applied shows no consistency across different experimental designs.

The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for the interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect foresight assumption is an important place to start when examining these environments because it will produce the ex post socially efficient allocation of goods. This characteristic makes this a natural baseline to which other models of price dynamics could be compared.

Relevance: 10.00%

Abstract:

Microbial sulfur-cycling communities were investigated in two methane-rich ecosystems, terrestrial mud volcanoes (TMVs) and marine methane seeps, in order to characterize niches and processes that are likely central to the functioning of these crucial ecosystems. Terrestrial mud volcanoes represent geochemically diverse habitats with varying sulfur sources, and yet sulfur cycling in these environments remains largely unexplored. Here we characterized the sulfur-metabolizing microorganisms and their activity in 4 TMVs in Azerbaijan, supporting the presence of active sulfur-oxidizing and sulfate-reducing guilds in all 4 TMVs across a range of physicochemical conditions, with the diversity of these guilds being unique to each TMV. We also found evidence for the anaerobic oxidation of methane coupled to sulfate reduction, a process which we explored further in the more tractable marine methane seeps. Diverse associations between methanotrophic archaea (ANME) and sulfate-reducing bacterial groups (SRB) often co-occur in marine methane seeps; however, the ecophysiology of these different symbiotic associations has not been examined. Using a combination of molecular, geochemical, and fluorescence in situ hybridization coupled to nano-scale secondary ion mass spectrometry (FISH-NanoSIMS) analyses of in situ seep sediments and methane-amended sediment incubations from diverse locations, we show that the unexplained diversity in SRB associated with ANME cells can be at least partially explained by preferential nitrate utilization by one particular partner, the seepDBB. This discovery reveals that nitrate is likely an important factor in community structuring and diversity in marine methane seep ecosystems. The thesis concludes with a study of the dynamics between ANME and their associated SRB partners. We inhibited sulfate reduction and followed the metabolic processes of the community, as well as the effects on ANME/SRB aggregate composition and growth at the cellular level, by tracking 15N substrate incorporation into biomass using FISH-NanoSIMS. We revealed that while sulfate-reducing bacteria gradually disappeared over time in incubations with an SRB inhibitor, the ANME archaea persisted in the form of ANME-only aggregates, which are capable of little to no growth when sulfate reduction is inhibited. These data suggest that ANME are not able to synthesize new proteins when sulfate reduction is inhibited.

Relevance: 10.00%

Abstract:

Galaxy clusters are the largest gravitationally bound objects in the observable universe, and they are formed from the largest perturbations of the primordial matter power spectrum. During initial cluster collapse, matter is accelerated to supersonic velocities, and the baryonic component is heated as it passes through accretion shocks. This process stabilizes when the pressure of the bound matter prevents further gravitational collapse. Galaxy clusters are useful cosmological probes because their formation progressively freezes out at the epoch when dark energy begins to dominate the expansion and energy density of the universe. A diverse set of observables, from radio through X-ray wavelengths, are sourced from galaxy clusters, and this is useful for self-calibration. The distributions of these observables trace a cluster's dark matter halo, which represents more than 80% of the cluster's gravitational potential. One such observable is the Sunyaev-Zel'dovich effect (SZE), which results when the ionized intracluster medium blueshifts the cosmic microwave background via inverse Compton scattering. Great technical advances in the last several decades have made regular observation of the SZE possible. Resolved SZE science, such as is explored in this analysis, has benefitted from the construction of large-format camera arrays consisting of highly sensitive millimeter-wave detectors, such as Bolocam. Bolocam is a submillimeter camera, sensitive to 140 GHz and 268 GHz radiation, located at one of the best observing sites in the world: the Caltech Submillimeter Observatory on Mauna Kea in Hawaii. Bolocam fielded 144 of the original spider-web NTD bolometers used in an entire generation of ground-based, balloon-borne, and satellite-borne millimeter-wave instrumentation. Over approximately six years, our group at Caltech has developed a mature galaxy cluster observational program with Bolocam. This thesis describes the construction of the instrument's full cluster catalog: BOXSZ. Using this catalog, I have scaled the Bolocam SZE measurements with X-ray mass approximations in an effort to characterize the SZE signal as a viable mass probe for cosmology. This work has confirmed the SZE to be a low-scatter tracer of cluster mass. The analysis has also revealed how sensitive the SZE-mass scaling is to small biases in the adopted mass approximation. Future Bolocam analysis efforts are set on resolving these discrepancies by approximating cluster mass jointly with different observational probes.
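
The SZE-mass scaling referred to above is typically characterized with a power-law fit with log-normal intrinsic scatter; the generic form below is illustrative only and is not the specific parameterization adopted in the thesis:

\[
\frac{Y_{\mathrm{SZ}}}{Y_0} = A \left(\frac{M}{M_0}\right)^{\alpha}, \qquad \sigma_{\log Y \mid M} \approx \text{const},
\]

where \(Y_{\mathrm{SZ}}\) is the integrated SZE signal, \(M\) is the (X-ray-derived) cluster mass, \(M_0\) and \(Y_0\) are pivot values, and the fitted scatter \(\sigma_{\log Y \mid M}\) quantifies how tightly the SZE traces mass.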

Relevance: 10.00%

Abstract:

A unique chloroplast Signal Recognition Particle (SRP) in green plants is primarily dedicated to the post-translational targeting of light-harvesting chlorophyll-a/b binding (LHC) proteins. Our study of the thermodynamics and kinetics of the GTPases of the system demonstrates that GTPase complex assembly and activation are highly coupled in the chloroplast GTPases, suggesting that they may forgo the GTPase activation step as a key regulatory point. This reflects adaptations of the chloroplast SRP to the delivery of its unique substrate protein. Devotion to one highly hydrophobic family of proteins may also have allowed the chloroplast SRP system to evolve an efficient chaperone in the cpSRP43 subunit. To understand the mechanism of disaggregation, we showed that LHC proteins form micellar, disc-shaped aggregates that present a recognition motif (L18) on the aggregate surface. Further molecular genetic and structure-activity analyses reveal that the action of cpSRP43 can be dissected into two steps: (i) initial recognition of L18 on the aggregate surface; and (ii) aggregate remodeling, during which highly adaptable binding interactions of cpSRP43 with the hydrophobic transmembrane domains of the substrate protein compete with the packing interactions within the aggregate. We also tested the adaptability of cpSRP43 for alternative substrates, specifically in attempts to improve membrane protein expression and to inhibit amyloid beta fibrillization. These preliminary results attest to cpSRP43's potential as a molecular chaperone and provide the impetus for further engineering endeavors to address problems that stem from protein aggregation.

Relevance: 10.00%

Abstract:

We examine voting situations in which individuals have incomplete information about each other's true preferences. In many respects, this work is motivated by a desire to provide a more complete understanding of so-called probabilistic voting.

Chapter 2 examines the similarities and differences between the incentives faced by politicians who seek to maximize expected vote share, expected plurality, or probability of victory in single-member, single-vote, simple plurality electoral systems. We find that, in general, the candidates' optimal policies in such an electoral system vary greatly depending on their objective function. We provide several examples, as well as a genericity result which states that almost all such electoral systems (with respect to the distributions of voter behavior) will exhibit different incentives for candidates who seek to maximize expected vote share and those who seek to maximize probability of victory.

In Chapter 3, we adopt a random utility maximizing framework in which individuals' preferences are subject to action-specific exogenous shocks. We show that Nash equilibria exist in voting games possessing such an information structure and in which voters and candidates are each aware that every voter's preferences are subject to such shocks. A special case of our framework is that in which voters are playing a Quantal Response Equilibrium (McKelvey and Palfrey, 1995, 1998). We then examine candidate competition in such games and show that, for sufficiently large electorates, regardless of the dimensionality of the policy space or the number of candidates, there exists a strict equilibrium at the social welfare optimum (i.e., the point which maximizes the sum of voters' utility functions). In two-candidate contests we find that this equilibrium is unique.
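
For reference, the logit specification of Quantal Response Equilibrium cited above (McKelvey and Palfrey) gives, in our notation, choice probabilities

\[
P_i(a) = \frac{\exp\!\big(\lambda\,\bar u_i(a)\big)}{\sum_{a'} \exp\!\big(\lambda\,\bar u_i(a')\big)},
\]

where \(\bar u_i(a)\) is voter \(i\)'s expected utility from action \(a\) given the other players' strategies and \(\lambda \ge 0\) indexes how sharply behavior responds to payoff differences; the action-specific preference shocks in the chapter play the role of the disturbances that generate this logit form.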

Finally, in Chapter 4, we attempt the first steps towards a theory of equilibrium in games possessing both continuous action spaces and action-specific preference shocks. Our notion of equilibrium, Variational Response Equilibrium, is shown to exist in all games with continuous payoff functions. We discuss the similarities and differences between this notion of equilibrium and the notion of Quantal Response Equilibrium and offer possible extensions of our framework.

Relevance: 10.00%

Abstract:

For some time now, the Latino voice has been gradually gaining strength in American politics, particularly in such states as California, Florida, Illinois, New York, and Texas, where large numbers of Latino immigrants have settled and large numbers of electoral votes are at stake. Yet the issues public officials in these states espouse and the laws they enact often do not coincide with the interests and preferences of Latinos. The fact that Latinos in California and elsewhere have not been able to influence the political agenda in a way that is commensurate with their numbers may reflect their failure to participate fully in the political process by first registering to vote and then consistently turning out on election day to cast their ballots.

To understand Latino voting behavior, I first examine Latino political participation in California during the ten general elections of the 1980s and 1990s, seeking to understand what percentage of the eligible Latino population registers to vote, with what political party they register, how many registered Latinos go to the polls on election day, and what factors might increase their participation in politics. To ensure that my findings are not unique to California, I also consider Latino voter registration and turnout in Texas for the five general elections of the 1990s and compare these results with my California findings.

I offer a new approach to studying Latino political participation in which I rely on county-level aggregate data, rather than on individual survey data, and employ the ecological inference method of generalized bounds. I calculate and compare Latino and white voting-age populations, registration rates, turnout rates, and party affiliation rates for California's fifty-eight counties. Then, in a secondary grouped logit analysis, I consider the factors that influence these Latino and white registration, turnout, and party affiliation rates.

I find that California Latinos register and turn out at substantially lower rates than do whites and that these rates are more volatile than those of whites. I find that Latino registration is motivated predominantly by age and education, with older and more educated Latinos being more likely to register. Motor voter legislation, which was passed to ease and simplify the registration process, has not encouraged Latino registration. I find that turnout among California's Latino voters is influenced primarily by issues, income, educational attainment, and the size of the Spanish-speaking communities in which they reside. Although language skills may be an obstacle to political participation for an individual, the number of Spanish-speaking households in a community does not encourage or discourage registration but may encourage turnout, suggesting that cultural and linguistic assimilation may not be the entire answer.

With regard to party identification, I find that Democrats can expect a steady Latino political identification rate between 50 and 60 percent, while Republicans attract 20 to 30 percent of Latino registrants. I find that education and income are the dominant factors in determining Latino political party identification, which appears to be no more volatile than that of the larger electorate.

Next, when I consider registration and turnout in Texas, I find that Latino registration rates are nearly equal to those of whites but that Texas Latino turnout rates are volatile and substantially lower than those of whites.

Low turnout rates among Latinos and the volatility of these rates may explain why Latinos in California and Texas have had little influence on the political agenda even though their numbers are large and increasing. Simply put, the voices of Latinos are little heard in the halls of government because they do not turn out consistently to cast their votes on election day.

While these findings suggest that there may not be any short-term or quick fixes to Latino participation, they also suggest that Latinos should be encouraged to participate more fully in the political process and that additional education may be one means of achieving this goal. Candidates should speak more directly to the issues that concern Latinos. Political parties should view Latinos as crossover voters rather than as potential converts. In other words, if Latinos were "a sleeping giant," they may now be a still-drowsy leviathan waiting to be wooed by either party's persuasive political messages and relevant issues.

Relevance: 10.00%

Abstract:

A comprehensive study was made of the flocculation of dispersed E. coli bacterial cells by the cationic polymer polyethyleneimine (PEI). The three objectives of this study were to determine the primary mechanism involved in the flocculation of a colloid with an oppositely charged polymer, to determine quantitative correlations between four commonly-used measurements of the extent of flocculation, and to record the effect of varying selected system parameters on the degree of flocculation. The quantitative relationships derived for the four measurements of the extent of flocculation should be of direct assistance to the sanitary engineer in evaluating the effectiveness of specific coagulation processes.

A review of prior statistical mechanical treatments of adsorbed polymer configuration revealed that at low degrees of surface site coverage, an oppositely-charged polymer molecule is strongly adsorbed to the colloidal surface, with only short loops or end sequences extending into the solution phase. Even for high-molecular-weight PEI species, these extensions from the surface are theorized to be less than 50 Å in length. Although the radii of gyration of the five PEI species investigated were found to be large enough to form interparticle bridges, the low surface site coverage at optimum flocculation doses indicates that the predominant mechanism of flocculation is adsorption coagulation.

The effectiveness of the high-molecular-weight PEI species in producing rapid flocculation at small doses is attributed to the formation of a charge mosaic on the oppositely-charged E. coli surfaces. The large adsorbed PEI molecules not only neutralize the surface charge at the adsorption sites, but also cause charge reversal with excess cationic segments. The alignment of these positive surface patches with negative patches on approaching cells results in strong electrostatic attraction in addition to a reduction of the double-layer interaction energies. The comparative ineffectiveness of low-molecular-weight PEI species in producing E. coli flocculation is caused by the size of the individual molecules, which is insufficient to both neutralize and reverse the negative E. coli surface charge. Consequently, coagulation produced by low-molecular-weight species is attributed solely to the reduction of double-layer interaction energies via adsorption.

Electrophoretic mobility experiments supported the above conclusions, since only the high-molecular-weight species were able to reverse the mobility of the E. coli cells. In addition, electron microscope examination of the seam of agglutination between E. coli cells flocculated by PEI revealed tightly-bound cells, with intercellular separation distances of less than 100-200 Å in most instances. This intercellular separation is partially due to cell shrinkage during preparation of the electron micrographs.

The extent of flocculation was measured as a function of PEI molecular weight, PEI dose, and the intensity of reactor chamber mixing. Neither the intensity of mixing, within the common treatment practice limits, nor the time of mixing for up to four hours appeared to play any significant role in either the size or number of E. coli aggregates formed. The extent of flocculation was highly molecular-weight dependent: the high-molecular-weight PEI species produced the larger aggregates, the greater turbidity reductions, and the higher filtration flow rates. The PEI dose required for optimum flocculation decreased as the species molecular weight increased. At large doses of high-molecular-weight species, redispersion of the macroflocs occurred, caused by excess adsorption of cationic molecules. The excess adsorption reversed the surface charge on the E. coli cells, as recorded by electrophoretic mobility measurements.

Successful quantitative comparisons were made between changes in suspension turbidity with flocculation and corresponding changes in aggregate size distribution. E. coli aggregates were treated as coalesced spheres, with Mie scattering coefficients determined for spheres in the anomalous diffraction regime. Good quantitative comparisons were also found to exist between the reduction in refiltration time and the reduction of the total colloid surface area caused by flocculation. As with the turbidity measurements, a coalesced sphere model was used, since the equivalent spherical volume is the only information available from the Coulter particle counter. However, the coalesced sphere model was not applicable to the electrophoretic mobility measurements. The aggregates produced at each PEI dose moved at approximately the same velocity, almost independently of particle size.

PEI was found to be an effective flocculant of E. coli cells at weight ratios of 1 mg PEI : 100 mg E. coli. While PEI itself is toxic to E. coli at these levels, similar cationic polymers could be effectively applied to water and wastewater treatment facilities to enhance sedimentation and filtration characteristics.

Relevance: 10.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which inform the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders of magnitude speedup over other methods.
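
As a rough illustration of the adaptive loop described above, the sketch below maintains a posterior over candidate theories and greedily picks the next test; for simplicity it uses expected information gain as the selection criterion rather than the EC2 objective, and all names and structures are ours, not BROAD itself.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_info_gain(prior, likelihood_t):
    """likelihood_t[h, r]: probability theory h assigns to response r on this test."""
    gain = entropy(prior)
    for r in range(likelihood_t.shape[1]):
        p_r = np.dot(prior, likelihood_t[:, r])        # marginal probability of response r
        if p_r > 0:
            posterior = prior * likelihood_t[:, r] / p_r
            gain -= p_r * entropy(posterior)           # subtract expected residual entropy
    return gain

def run_adaptive_design(prior, likelihoods, ask_subject, n_rounds):
    """likelihoods[t][h, r]: response model for test t; ask_subject(t) returns the observed response."""
    belief = prior.copy()
    for _ in range(n_rounds):
        # greedily choose the test expected to be most informative under current beliefs
        t = max(range(len(likelihoods)), key=lambda k: expected_info_gain(belief, likelihoods[k]))
        r = ask_subject(t)
        belief = belief * likelihoods[t][:, r]          # Bayesian update on the observed response
        belief /= belief.sum()
    return belief
```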

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
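
One generic way to embed loss aversion over prices in a discrete choice (logit) demand model, shown here only as an illustration and not as the authors' exact specification:

\[
v_j = \beta' x_j +
\begin{cases}
\eta\,(r_j - p_j), & p_j \le r_j \quad \text{(gain relative to the reference price)}\\
\lambda\,\eta\,(r_j - p_j), & p_j > r_j \quad \text{(loss, weighted by } \lambda > 1\text{)}
\end{cases}
\]

with choice probabilities \(\Pr(j) = e^{v_j}/\sum_k e^{v_k}\). A discount pushes \(p_j\) below the reference \(r_j\) and boosts demand beyond the pure price-elasticity effect, while removing the discount turns the same comparison into a loss and shifts demand toward close substitutes, matching the pattern described above.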

In future work, BROAD can be widely applied to test different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 10.00%

Abstract:

Real-time demand response is essential for handling the uncertainties of renewable generation. Traditionally, demand response has focused on large industrial and commercial loads; however, it is expected that a large number of small residential loads, such as air conditioners, dishwashers, and electric vehicles, will also participate in the coming years. The electricity consumption of these smaller loads, which we call deferrable loads, can be shifted over time and thus be used (in aggregate) to compensate for the random fluctuations in renewable generation.

In this thesis, we propose a real-time distributed deferrable load control algorithm to reduce the variance of the aggregate load (load minus renewable generation) by shifting the power consumption of deferrable loads to periods with high renewable generation. The algorithm is model predictive in nature, i.e., at every time step it minimizes the expected variance-to-go given updated predictions. We prove that the suboptimality of this model predictive algorithm vanishes as the time horizon expands, in an average-case analysis. Further, we prove strong concentration results on the distribution of the load variance obtained by model predictive deferrable load control. These concentration results highlight that the typical performance of model predictive deferrable load control is tightly concentrated around the average-case performance. Finally, we evaluate the algorithm via trace-based simulations.
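
A minimal sketch of one step of such a model predictive controller, under simplifying assumptions (a single pooled deferrable energy budget and no per-load rate limits); the variable names and the water-filling shortcut are ours, not the distributed algorithm from the thesis:

```python
import numpy as np

def mpc_step(base_forecast, energy_remaining):
    """
    One model predictive step (sketch): schedule the remaining deferrable energy
    over the remaining horizon so that the predicted aggregate load b + d is as
    flat as possible, i.e. water-fill the valleys of the base-load forecast.
    Only the first slot's decision is executed; the rest is re-planned next step.
    """
    b = np.asarray(base_forecast, dtype=float)
    lo, hi = b.min(), b.max() + energy_remaining
    for _ in range(60):  # bisection on the fill level
        level = 0.5 * (lo + hi)
        if np.maximum(level - b, 0.0).sum() > energy_remaining:
            hi = level
        else:
            lo = level
    d = np.maximum(lo - b, 0.0)
    return d[0], d  # consumption to execute now, full tentative schedule

# Hypothetical example: net-load forecast for the next 6 periods, 3 units of deferrable energy left
now, plan = mpc_step([5.0, 3.0, 2.0, 4.0, 6.0, 3.5], energy_remaining=3.0)
```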

Relevance: 10.00%

Abstract:

The Advanced LIGO and Virgo experiments are poised to detect gravitational waves (GWs) directly for the first time this decade. The ultimate prize will be joint observation of a compact binary merger in both the gravitational and electromagnetic channels. However, GW sky locations that are uncertain by hundreds of square degrees will pose a challenge. I describe a real-time detection pipeline and a rapid Bayesian parameter estimation code that will make it possible to search promptly for optical counterparts in Advanced LIGO. Having analyzed a comprehensive population of simulated GW sources, we describe the sky localization accuracy that the GW detector network will achieve as each detector comes online and progresses toward design sensitivity. Next, in preparation for the optical search with the intermediate Palomar Transient Factory (iPTF), we have developed a unique capability to detect optical afterglows of gamma-ray bursts (GRBs) detected by the Fermi Gamma-ray Burst Monitor (GBM). Its comparable error regions offer a close parallel to the Advanced LIGO problem, but Fermi's unique access to MeV-GeV photons and its near all-sky coverage may allow us to look at optical afterglows in a relatively unexplored part of the GRB parameter space. We present the discovery and broadband follow-up observations (X-ray, UV, optical, millimeter, and radio) of eight GBM-iPTF afterglows. Two of the bursts (GRB 130702A / iPTF13bxl and GRB 140606B / iPTF14bfu) are at low redshift (z = 0.145 and z = 0.384, respectively), are sub-luminous with respect to "standard" cosmological bursts, and have spectroscopically confirmed broad-line Type Ic supernovae. These two bursts are possibly consistent with mildly relativistic shocks breaking out from the progenitor envelopes rather than the standard mechanism of internal shocks within an ultra-relativistic jet. On a technical level, the GBM-iPTF effort is a prototype for locating and observing optical counterparts of GW events in Advanced LIGO with the Zwicky Transient Facility.

Relevance: 10.00%

Abstract:

Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique to solve boundary value problems, and leads to an iterative solution, starting with the known expression for the point source in a half space as first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-depth ratio on the spectra of the displacements.

Part II: A high-speed, large-capacity hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it. Among them are a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for the local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are confronted with actual traverses to test their validity.

It is shown that several crustal phases provide enough control to obtain good solutions in depth for nuclear explosions, even though not all the recording stations are in the region where crustal corrections are considered. The use of the European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved to be more adequate than previous work.

A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of the tectonic mechanism of the White Wolf fault is obtained.

Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.