7 results for Consumer class
in CaltechTHESIS
Abstract:
In this study we investigate the existence, uniqueness, and asymptotic stability of solutions of a class of nonlinear integral equations which are representations for some time-dependent nonlinear partial differential equations. Sufficient conditions are established which allow one to infer the stability of the nonlinear equations from the stability of the linearized equations. Improved estimates of the domain of stability are obtained using a Liapunov functional approach. These results are applied to some nonlinear partial differential equations governing the behavior of nonlinear continuous dynamical systems.
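For orientation, a hedged sketch of the classical pattern behind such results, in conventional notation rather than the thesis's own equations: the evolution equation is recast as an integral equation whose linear part carries the decay.

```latex
% Illustrative only: conventional notation, not the thesis's equations.
% A semilinear evolution equation $\dot{u} = A u + N(u)$ is represented by
u(t) = e^{tA} u_0 + \int_0^t e^{(t-s)A}\, N\bigl(u(s)\bigr)\, ds .
% If the linearized problem is stable, $\|e^{tA}\| \le M e^{-\omega t}$ with
% $\omega > 0$, and the nonlinearity is higher order, $\|N(u)\| \le c\,\|u\|^2$,
% a Gronwall-type estimate gives asymptotic stability of $u = 0$ for all
% sufficiently small $\|u_0\|$; a Liapunov functional can sharpen the estimate
% of how large $\|u_0\|$ may be (the domain of stability).
```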
Abstract:
In three essays we examine user-generated product ratings with aggregation. While recommendation systems have been studied extensively, this simple type of recommendation system has been neglected, despite its prevalence in the field. We develop a novel theoretical model of user-generated ratings. This model improves upon previous work in three ways: it considers rational agents and allows them to abstain from rating when rating is costly; it incorporates rating aggregation (such as averaging ratings); and it considers the effect of multiple simultaneous raters on rating strategies. In the first essay we provide a partial characterization of equilibrium behavior. In the second essay we test this theoretical model in the laboratory, and in the third we apply established behavioral models to the data generated in the lab. This study provides clues to the prevalence of extreme-valued ratings in field implementations. We show theoretically that in equilibrium, ratings distributions do not represent the value distributions of sincere ratings. Indeed, we show that if rating strategies follow a set of regularity conditions, then in equilibrium the rate at which players participate is increasing in the extremity of agents' valuations of the product. This theoretical prediction is realized in the lab. We also find that human subjects show a disproportionate predilection for sincere rating, and that when they do send insincere ratings, they are almost always in the direction of exaggeration. Both sincere and exaggerated ratings occur with great frequency despite the fact that such rating strategies are not in subjects' best interest. We therefore apply the behavioral concepts of quantal response equilibrium (QRE) and cursed equilibrium (CE) to the experimental data. Together, these theories explain the data significantly better than does a theory of rational, Bayesian behavior, accurately predicting key comparative statics. However, the theories fail to predict the high rates of sincerity, and it is clear that a better theory is needed.
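The equilibrium logic described above (costly rating plus aggregation pushing participation toward extreme valuations) can be sketched in a toy simulation. This is a hedged illustration, not the thesis's model: the uniform valuations, the fixed rating cost, and the participation rule are all invented for the example.

```python
import random

def simulate_ratings(n_agents=10_000, cost=0.25, seed=0):
    """Toy model: agents with private valuations in [0, 1] submit a rating
    only when their valuation is far enough from the expected average that
    rating feels worth its fixed cost. All parameters are illustrative."""
    rng = random.Random(seed)
    prior_mean = 0.5  # agents' belief about the aggregate (average) rating
    submitted = []
    for _ in range(n_agents):
        v = rng.random()  # private valuation of the product
        # An agent's influence on the aggregate grows with distance from the
        # mean, so only sufficiently extreme valuations justify the cost.
        if abs(v - prior_mean) > cost:
            submitted.append(v)
    return submitted

ratings = simulate_ratings()
# Participation concentrates at the extremes: no submitted rating lies
# within `cost` of the believed mean, so the ratings distribution does not
# mirror the (uniform) valuation distribution.
assert all(abs(r - 0.5) > 0.25 for r in ratings)
```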
Abstract:
The proliferation of smartphones and other internet-enabled, sensor-equipped consumer devices enables us to sense and act upon the physical environment in unprecedented ways. This thesis considers Community Sense-and-Response (CSR) systems, a new class of web application for acting on sensory data gathered from participants' personal smart devices. The thesis describes how rare events can be reliably detected using a decentralized anomaly detection architecture that performs client-side anomaly detection and server-side event detection. After analyzing this decentralized anomaly detection approach, the thesis describes how weak but spatially structured events can be detected, despite significant noise, when the events have a sparse representation in an alternative basis. Finally, the thesis describes how the statistical models needed for client-side anomaly detection may be learned efficiently, using limited space, via coresets.
The Caltech Community Seismic Network (CSN) is a prototypical example of a CSR system, harnessing accelerometers in volunteers' smartphones and consumer electronics. Using CSN, this thesis presents the systems and algorithmic techniques needed to design, build, and evaluate a scalable network for real-time awareness of spatial phenomena such as dangerous earthquakes.
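The split between client-side anomaly detection and server-side event detection can be sketched as below. This is a hedged toy, not CSN's actual algorithms: the z-score baseline, the thresholds, and the quorum rule are all assumptions made for the example.

```python
import statistics

def client_is_anomalous(readings, new_sample, z_thresh=4.0):
    """Client-side pick: flag a sample that deviates strongly from this
    device's own baseline. Threshold is illustrative."""
    mu = statistics.fmean(readings)
    sigma = statistics.pstdev(readings) or 1e-9  # guard against zero spread
    return abs(new_sample - mu) / sigma > z_thresh

def server_detects_event(picks, n_clients, min_fraction=0.1):
    """Server-side decision: declare an event only when enough independent
    clients report picks in the same time window, suppressing per-device
    false alarms."""
    return len(picks) / n_clients >= min_fraction

baseline = [0.01, -0.02, 0.00, 0.03, -0.01, 0.02]
assert client_is_anomalous(baseline, 1.5)       # strong shaking -> pick
assert not client_is_anomalous(baseline, 0.02)  # ordinary noise -> no pick
assert server_detects_event(picks=["c1", "c2"], n_clients=10)
```

The point of the split is that each client needs only its own history, while the server needs only pick counts, which keeps both bandwidth and server load low.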
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
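The outer loop of such sequential Bayesian experimental design can be sketched as below. This is a hedged illustration using the simpler expected-posterior-entropy (information gain) criterion that the thesis compares against, not the EC2 objective, and all theory and test names are invented.

```python
import math

def posterior_update(prior, likelihoods, response):
    """Bayes update of beliefs over theories after observing one response."""
    post = {h: p * likelihoods[h][response] for h, p in prior.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def most_informative_test(prior, tests):
    """Myopic greedy choice: pick the test minimizing expected posterior
    entropy (equivalently, maximizing information gain)."""
    def expected_entropy(likelihoods):
        ee = 0.0
        for response in ("A", "B"):  # two-alternative choice tests
            p_resp = sum(prior[h] * likelihoods[h][response] for h in prior)
            if p_resp > 0:
                ee += p_resp * entropy(posterior_update(prior, likelihoods, response))
        return ee
    return min(tests, key=lambda name: expected_entropy(tests[name]))

# Two hypothetical theories; test t1 discriminates between them, t2 does not.
tests = {
    "t1": {"h1": {"A": 0.9, "B": 0.1}, "h2": {"A": 0.1, "B": 0.9}},
    "t2": {"h1": {"A": 0.5, "B": 0.5}, "h2": {"A": 0.5, "B": 0.5}},
}
prior = {"h1": 0.5, "h2": 0.5}
assert most_informative_test(prior, tests) == "t1"
```

After each response, `posterior_update` replaces the prior and the loop repeats; EC2 differs in the objective it greedily optimizes, not in this overall structure.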
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
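For orientation, the standard textbook forms of the discount functions D(t) compared above, in conventional notation (the thesis's (α, β) labels may map onto these symbols differently):

```latex
D_{\text{exponential}}(t) = \delta^{t}, \qquad
D_{\text{hyperbolic}}(t) = \frac{1}{1 + k t}, \qquad
D_{\text{quasi-hyperbolic}}(t) =
  \begin{cases} 1, & t = 0,\\ \beta\,\delta^{t}, & t > 0, \end{cases} \qquad
D_{\text{generalized}}(t) = (1 + \alpha t)^{-\beta/\alpha}.
```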
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
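The qualitative mechanism described above can be sketched as a toy discrete-choice model. This is a hedged illustration, not the thesis's estimated specification: the binary logit, the linear price utility, and the parameter values are all invented for the example.

```python
import math

def loss_averse_utility(price, reference_price, beta=0.02, lam=2.25):
    """Reference-dependent utility of buying at `price`: paying below the
    reference price registers as a gain, paying above it as a loss amplified
    by the loss-aversion coefficient `lam`. Parameters are illustrative."""
    gain = reference_price - price
    gainloss = beta * gain if gain >= 0 else beta * lam * gain
    return -beta * price + gainloss

def purchase_probability(price, reference_price, outside_utility=0.0):
    """Binary logit choice between buying and an outside option."""
    u = loss_averse_utility(price, reference_price)
    return math.exp(u) / (math.exp(u) + math.exp(outside_utility))

# Demand during a discount (reference still anchored at the old price)
# exceeds demand at the same price once the reference has adapted to it:
# the gain term inflates demand beyond what price elasticity alone implies.
during_discount = purchase_probability(price=80, reference_price=100)
after_adaptation = purchase_probability(price=80, reference_price=80)
assert during_discount > after_adaptation
```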
In future work, BROAD can be applied widely to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Red fluorescent proteins (RFPs) have attracted significant engineering focus because of the promise of near-infrared fluorescent proteins, whose light penetrates biological tissue and which would allow imaging inside of vertebrate animals. The RFP landscape, which numbers ~200 members, is mostly populated by engineered variants of four native RFPs, leaving the vast majority of native RFP biodiversity untouched. This is largely due to the fact that native RFPs are obligate tetramers, limiting their usefulness as fusion proteins. Monomerization has imposed critical costs on these evolved tetramers, however, as it has invariably led to loss of brightness, and often to many other adverse effects on the fluorescent properties of the derived monomeric variants. Here we have attempted to understand why monomerization has taken such a large toll on Anthozoa-class RFPs, and to outline a clear strategy for their monomerization. We begin with a structural study of the far-red fluorescence of AQ143, one of the furthest-red-emitting RFPs. We then try to separate the problem of stable and bright fluorescence from the design of a soluble monomeric β-barrel surface by engineering a hybrid protein (DsRmCh) from an oligomeric parent that had been previously monomerized, DsRed, and a pre-stabilized monomeric core from mCherry. This allows us to use computational design to successfully design a stable, soluble, fluorescent monomer. Next we took HcRed, a previously unmonomerized RFP with far-red fluorescence (λem = 633 nm), and attempted to monomerize it, making use of lessons learned from DsRmCh. We engineered two monomeric proteins by pre-stabilizing HcRed's core, then monomerizing in stages, making use of computational design and directed-evolution techniques such as error-prone mutagenesis and DNA shuffling. We call these proteins mGinger0.1 (λem = 637 nm / Φ = 0.02) and mGinger0.2 (λem = 631 nm / Φ = 0.04).
They are the furthest-red first-generation monomeric RFPs ever developed, are significantly thermostabilized, and add diversity to a small field of far-red monomeric FPs. We anticipate that the techniques we describe will facilitate future RFP monomerization, and that further core optimization of the mGingers may allow significant improvements in brightness.
Abstract:
The structure of the set β(A) of all eigenvalues of all complex matrices (elementwise) equimodular with a given n × n non-negative matrix A is studied. The problem was suggested by O. Taussky and some aspects have been studied by R. S. Varga and B. W. Levinger.
If every matrix equimodular with A is non-singular, then A is called regular. A new proof of the P. Camion-A.J. Hoffman characterization of regular matrices is given.
The set β(A) consists of m ≤ n closed annuli centered at the origin. Each gap, γ, in this set can be associated with a class of regular matrices with a (unique) permutation, π(γ). The association depends on both the combinatorial structure of A and the size of the diagonal entries a_ii. Let A be associated with the set of r permutations, π1, π2, …, πr, where each gap in β(A) is associated with one of the πk. Then r ≤ n, even when the complement of β(A) has n + 1 components. Further, if π(γ) is the identity, the real boundary points of γ are eigenvalues of real matrices equimodular with A. In particular, if A is essentially diagonally dominant, every real boundary point of β(A) is an eigenvalue of a real matrix equimodular with A.
Several conjectures based on these results are made which, if verified, would constitute an extension of the Perron-Frobenius Theorem, and an algebraic method is introduced which unites the study of regular matrices with that of β(A).
Abstract:
Suppose that AG is a solvable group with normal subgroup G, where (|A|, |G|) = 1. Assume that A is a class-two odd p-group all of whose irreducible representations are isomorphic to subgroups of extraspecial p-groups. If p^c ≠ r^d + 1 for any c = 1, 2 and any prime r where r^(2d+1) divides |G|, and if C_G(A) = 1, then the Fitting length of G is bounded by the power of p dividing |A|.
The theorem is proved by applying a fixed point theorem to a reduction of the Fitting series of G. The fixed point theorem is proved by reducing a minimal counterexample. If R is an extraspecial r-subgroup of G fixed by A1, a subgroup of A, where A1 centralizes D(R), then all irreducible characters of A1R which are nontrivial on Z(R) are computed. All nonlinear characters of a class-two p-group are computed.