913 results for platelet function tests
Abstract:
The specific aspects of cognition contributing to balance and gait have not been clarified in people with Parkinson’s disease (PD). Twenty PD participants and twenty age- and gender-matched healthy controls were assessed on cognition and clinical mobility tests. General cognition was assessed with the Mini Mental State Exam and the Addenbrooke’s Cognitive Exam. Executive function was evaluated using the Trail Making Tests (TMT-A and TMT-B) and a computerized cognitive battery which included a series of choice reaction time (CRT) tests. Clinical gait and balance measures included the Tinetti, Timed Up & Go, Berg Balance and Functional Reach tests. PD participants performed significantly worse than the controls on the tests of cognitive and executive function, balance and gait. PD participants took longer on Trail Making Tests, CRT-Location and CRT-Colour (inhibition response). Furthermore, executive function, particularly longer times on CRT-Distracter and greater errors on the TMT-B were associated with worse balance and gait performance in the PD group. Measures of general cognition were not associated with balance and gait measures in either group. For PD participants, attention and executive function were impaired. Components of executive function, particularly those involving inhibition response and distracters, were associated with poorer balance and gait performance in PD.
Abstract:
Lead compounds are known genotoxicants, principally affecting the integrity of chromosomes. Lead chloride and lead acetate induced concentration-dependent increases in micronucleus frequency in V79 cells, starting at 1.1 μM lead chloride and 0.05 μM lead acetate. The difference between the lead salts, which was expected based on their relative abilities to form complex acetato-cations, was confirmed in an independent experiment. CREST analyses of the micronuclei verified that lead chloride and acetate were predominantly aneugenic (CREST-positive response), which was consistent with the morphology of the micronuclei (larger micronuclei, compared with micronuclei induced by a clastogenic mechanism). The effects of high concentrations of lead salts on the microtubule network of V79 cells were also examined using immunofluorescence staining. The dose effects of these responses were consistent with the cytotoxicity of lead(II), as visualized in the neutral-red uptake assay. In a cell-free system, 20-60 μM lead salts inhibited tubulin assembly dose-dependently. The no-observed-effect concentration of lead(II) in this assay was 10 μM. This inhibitory effect was interpreted as a shift of the assembly/disassembly steady-state toward disassembly, e.g., by reducing the concentration of assembly-competent tubulin dimers. The effects of lead salts on microtubule-associated motor-protein functions were studied using a kinesin-gliding assay that mimics intracellular transport processes in vitro by quantifying the movement of paclitaxel-stabilized microtubules across a kinesin-coated glass surface. There was a dose-dependent effect of lead nitrate on microtubule motility. Lead nitrate affected the gliding velocities of microtubules starting at concentrations above 10 μM and reached half-maximal inhibition of motility at about 50 μM. The processes reported here point to relevant interactions of lead with tubulin and kinesin at low dose levels.
Abstract:
This study investigated the hypothesis that the chromosomal genotoxicity of inorganic mercury results from interaction(s) with cytoskeletal proteins. Effects of Hg2+ salts on functional activities of tubulin and kinesin were investigated by determining tubulin assembly and kinesin-driven motility in cell-free systems. Hg2+ inhibits microtubule assembly at concentrations above 1 μM, and inhibition is complete at about 10 μM. In this range, the tubulin assembly is fully (up to 6 μM) or partially (∼6-10 μM) reversible. The inhibition of tubulin assembly by mercury is independent of the anion, chloride or nitrate. The no-observed-effect concentration for inhibition of microtubule assembly in vitro was 1 μM Hg2+, and the IC50 was 5.8 μM. Mercury(II) salts at IC50 concentrations partly inhibiting tubulin assembly did not cause the formation of aberrant microtubule structures. Effects of mercury salts on the functionality of the microtubule motility apparatus were studied with the motor protein kinesin. By using a "gliding assay" mimicking intracellular movement and transport processes in vitro, HgCl2 affected the gliding velocity of paclitaxel-stabilised microtubules in a clear dose-dependent manner. An apparent effect was detected at a concentration of 0.1 μM, and complete inhibition was reached at 1 μM. Cytotoxicity of mercury chloride was studied in V79 cells using neutral red uptake, showing an influence above 17 μM HgCl2. Between 15 and 20 μM HgCl2 there was a steep increase in cell toxicity. Both mercury chloride and mercury nitrate induced micronuclei concentration-dependently, starting at concentrations above 0.01 μM. CREST analyses on micronuclei formation in V79 cells demonstrated both clastogenic (CREST-negative) and aneugenic effects of Hg2+, with some preponderance of aneugenicity. A morphological effect of high Hg2+ concentrations (100 μM HgCl2) on the microtubule cytoskeleton was verified in V79 cells by immunofluorescence staining. The overall data are consistent with the concept that the chromosomal genotoxicity could be due to interaction of Hg2+ with the motor protein kinesin mediating cellular transport processes. Interactions of Hg2+ with tubulin shown by in vitro investigations could also partly influence intracellular microtubule functions leading, together with the effects on kinesin, to an impaired chromosome distribution as shown by the micronucleus test.
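The no-observed-effect and IC50 values quoted in this and the preceding lead abstract can be read against the standard dose-response form; the four-parameter logistic (Hill) model below is a generic textbook expression, not necessarily the fitting function used in these studies.

```latex
% Generic four-parameter dose-response (Hill) model; illustrative only.
% E(c): measured activity at inhibitor concentration c
% E_max, E_min: uninhibited and fully inhibited activity levels; h: Hill slope
E(c) = E_{\min} + \frac{E_{\max} - E_{\min}}{1 + \left( c / \mathrm{IC}_{50} \right)^{h}}
```

At c = IC50 the activity sits halfway between E_max and E_min, which is how "half-maximal inhibition at about 50 μM" and the 5.8 μM IC50 reported above are to be understood.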
Abstract:
Many RFID protocols use cryptographic hash functions for their security. The resource-constrained nature of RFID systems forces the use of lightweight cryptographic algorithms. Tav-128 is one such 128-bit lightweight hash function proposed by Peris-Lopez et al. for a low-cost RFID tag authentication protocol. Apart from some statistical tests for randomness by the designers themselves, Tav-128 has not undergone any other thorough security analysis. Based on these tests, the designers claimed that Tav-128 does not possess any trivial weaknesses. In this article, we carry out the first third-party security analysis of Tav-128 and show that this hash function is neither collision resistant nor second preimage resistant. Firstly, we show a practical collision attack on Tav-128 having a complexity of 2^37 calls to the compression function and produce message pairs of arbitrary length which produce the same hash value under this hash function. We then show a second preimage attack on Tav-128 which succeeds with a complexity of 2^62 calls to the compression function. Finally, we study the constituent functions of Tav-128 and show that the concatenation of nonlinear functions A and B produces a 64-bit permutation from 32-bit messages. This could be a useful lightweight primitive for future RFID protocols.
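For context, an ideal 128-bit hash should require roughly 2^64 compression-function calls to find a collision (the birthday bound) and 2^128 for a second preimage, so the 2^37 and 2^62 figures above are far below the generic bounds. The sketch below is not the authors' attack on Tav-128; it only illustrates a generic birthday-style collision search against a toy truncated hash.

```python
# Hedged sketch (not the Tav-128 attack): a generic birthday-style collision
# search against a toy 32-bit hash, illustrating why an ideal n-bit hash needs
# about 2^(n/2) calls for a collision. Attacks cheaper than that, such as the
# 2^37 collision reported above for a 128-bit hash, break collision resistance.
import hashlib
import os

def toy_hash(msg: bytes, bits: int = 32) -> int:
    """Stand-in hash truncated to `bits` bits (placeholder, not Tav-128)."""
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def birthday_collision(bits: int = 32):
    """Find two distinct messages with equal truncated hash values."""
    seen = {}
    while True:
        msg = os.urandom(8)
        h = toy_hash(msg, bits)
        if h in seen and seen[h] != msg:
            return seen[h], msg          # expected after ~2^(bits/2) trials
        seen[h] = msg

if __name__ == "__main__":
    m1, m2 = birthday_collision()
    assert m1 != m2 and toy_hash(m1) == toy_hash(m2)
    print(f"collision: {m1.hex()} vs {m2.hex()}")
```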
Abstract:
The transfusion of platelet concentrates (PCs) is widely used to treat thrombocytopenia and severe trauma. Ex vivo storage of PCs is associated with a storage lesion characterized by partial platelet activation and the release of soluble mediators, such as soluble CD40 ligand (sCD40L), RANTES, and interleukin (IL)-8. An in vitro whole blood culture transfusion model was employed to assess whether mediators present in PC supernatants (PC-SNs) modulated dendritic cell (DC)-specific inflammatory responses (intracellular staining) and the overall inflammatory response (cytometric bead array). Lipopolysaccharide (LPS) was included in parallel cultures to model the impact of PC-SNs on cell responses following toll-like receptor-mediated pathogen recognition. The impact of both the PC dose (10%, 25%) and ex vivo storage period was investigated [day 2 (D2), day 5 (D5), day 7 (D7)]. PC-SNs alone had minimal impact on DC-specific inflammatory responses and the overall inflammatory response. However, in the presence of LPS, exposure to PC-SNs resulted in a significant dose-associated suppression of the production of DC IL-12, IL-6, IL-1α, tumor necrosis factor-α (TNF-α), and macrophage inflammatory protein (MIP)-1β and storage-associated suppression of the production of DC IL-10, TNF-α, and IL-8. For the overall inflammatory response, IL-6, TNF-α, MIP-1α, MIP-1β, and inflammatory protein (IP)-10 were significantly suppressed and IL-8, IL-10, and IL-1β significantly increased following exposure to PC-SNs in the presence of LPS. These data suggest that soluble mediators present in PCs significantly suppress DC function and modulate the overall inflammatory response, particularly in the presence of an infectious stimulus. Given the central role of DCs in the initiation and regulation of the immune response, these results suggest that modulation of the DC inflammatory profile is a probable mechanism contributing to transfusion-related complications.
Abstract:
Indirect and qualitative tests of pancreatic function are commonly used to screen patients with cystic fibrosis for pancreatic insufficiency. In an attempt to develop a more quantitative assessment, we compared the usefulness of measuring serum pancreatic lipase using a newly developed enzyme-linked immunosorbent assay with that of cationic trypsinogen using a radioimmunoassay in the assessment of exocrine pancreatic function in patients with cystic fibrosis. Previously, we have shown neither lipase nor trypsinogen to be of use in assessing pancreatic function prior to 5 years of age because the majority of patients with cystic fibrosis in early infancy have elevated serum levels regardless of pancreatic function. Therefore, we studied 77 patients with cystic fibrosis older than 5 years of age, 41 with steatorrhea and 36 without steatorrhea. In addition, 28 of 77 patients consented to undergo a quantitative pancreatic stimulation test. There was a significant difference between the steatorrheic and nonsteatorrheic patients, with the steatorrheic group having lower lipase and trypsinogen values than the nonsteatorrheic group (P < .001). Sensitivities and specificities in detecting steatorrhea were 95% and 86%, respectively, for lipase and 93% and 92%, respectively, for trypsinogen. No correlations were found between the serum levels of lipase and trypsinogen and their respective duodenal concentrations because of abnormally high serum levels of both enzymes found in some nonsteatorrheic patients. We conclude from this study that both serum lipase and trypsinogen levels accurately detect steatorrhea in patients with cystic fibrosis who are older than 5 years but are imprecise indicators of specific pancreatic exocrine function above the level needed for normal fat absorption.
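As a reminder of how the reported 95%/86% (lipase) and 93%/92% (trypsinogen) figures are defined, the standard formulas are shown below; TP, FN, TN, and FP denote true/false positives and negatives against the steatorrhea reference standard.

```latex
% Standard diagnostic-test definitions (not study-specific notation).
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}
```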
Abstract:
Models are abstractions of reality that have predetermined limits (often not consciously thought through) on what problem domains the models can be used to explore. These limits are determined by the range of observed data used to construct and validate the model. However, it is important to remember that operating the model beyond these limits (often one of the reasons for building the model in the first place) potentially brings unwanted behaviour and thus reduces the usefulness of the model. Our experience with the Agricultural Production Systems Simulator (APSIM), a farming systems model, has led us to adapt techniques from the disciplines of modelling and software development to create a model development process. This process is simple, easy to follow, and brings a much higher level of stability to the development effort, which then delivers a much more useful model. A major part of the process relies on having a range of detailed model tests (unit, simulation, sensibility, validation) that exercise a model at various levels (sub-model, model and simulation). To underline the usefulness of testing, we examine several case studies where simulated output can be compared with simple relationships. For example, output is compared with crop water use efficiency relationships gleaned from the literature to check that the model reproduces the expected function. Similarly, another case study attempts to reproduce generalised hydrological relationships found in the literature. This paper then describes a simple model development process (using version control, automated testing and differencing tools) that will enhance the reliability and usefulness of a model.
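A minimal sketch of the kind of automated "sensibility" test the abstract describes is given below. The run_simulation() stand-in and the water use efficiency bound are hypothetical placeholders, not the APSIM API or the literature values used by the authors.

```python
# Hedged sketch of an automated sensibility test in the spirit described above.
# run_simulation() and the benchmark bound are illustrative placeholders.
def run_simulation(rainfall_mm: float) -> dict:
    """Stand-in for a model run; returns seasonal yield (kg/ha) and water use (mm)."""
    water_use = min(rainfall_mm, 450.0)              # crude soil-water cap
    return {"yield_kg_ha": 20.0 * water_use, "water_use_mm": water_use}

def test_water_use_efficiency_is_sensible():
    """Check simulated WUE against a literature-style upper bound (illustrative)."""
    MAX_WUE_KG_HA_PER_MM = 25.0                      # placeholder benchmark, not APSIM's
    for rainfall in (200.0, 350.0, 500.0):
        out = run_simulation(rainfall)
        wue = out["yield_kg_ha"] / out["water_use_mm"]
        assert 0.0 < wue <= MAX_WUE_KG_HA_PER_MM, f"WUE {wue:.1f} outside expected range"

if __name__ == "__main__":
    test_water_use_efficiency_is_sensible()
    print("sensibility test passed")
```

Tests of this shape can be run automatically on every model change, which is the stability benefit the process aims for.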
Abstract:
Background: Opioid dependence is a chronic, severe brain disorder associated with enormous health and social problems. The rate of relapse back to opioid abuse is very high, especially in early abstinence, but neuropsychological and neurophysiological deficits during opioid abuse or soon after cessation of opioids have scarcely been investigated. The structural brain changes, and their correlations with the length of opioid abuse or the age of abuse onset, are also not known. In this study, cognitive functions, the neural basis of cognitive dysfunction, and structural brain changes were studied in opioid-dependent patients and in age- and sex-matched healthy controls. Materials and methods: All subjects participating in the study (23 opioid-dependent patients, of whom 15 were also benzodiazepine co-dependent and five cannabis co-dependent, and 18 healthy age- and sex-matched controls) went through Structured Clinical Interviews (SCID) to obtain DSM-IV axis I and II diagnoses and to exclude psychiatric illness not related to opioid dependence or personality disorders. Simultaneous magnetoencephalography (MEG) and electroencephalography (EEG) measurements were done on 21 opioid-dependent individuals on the day of hospitalization for withdrawal therapy. The neural basis of auditory processing was studied, and pre-attentive attention and sensory memory were investigated. During withdrawal, 15 opioid-dependent patients participated in neuropsychological tests measuring fluid intelligence, attention and working memory, verbal and visual memory, and executive functions. Fifteen healthy subjects served as controls for the MEG-EEG measurements and neuropsychological assessment. Brain magnetic resonance imaging (MRI) was obtained from 17 patients after approximately two weeks of abstinence, and from 17 controls. The areas of different brain structures and the absolute and relative volumes of the cerebrum, cerebral white and gray matter, and cerebrospinal fluid (CSF) spaces were measured, and the Sylvian fissure ratio (SFR) and bifrontal ratio were calculated. Correlations between the cerebral measures and neuropsychological performance were also computed. Results: MEG-EEG measurements showed that, compared with controls, the opioid-dependent patients had a delayed mismatch negativity (MMN) response to novel sounds in the EEG and a delayed P3am in MEG on the hemisphere contralateral to the stimulated ear. The equivalent current dipole (ECD) of the N1m response was stronger in patients with benzodiazepine co-dependence than in those without benzodiazepine co-dependence or in controls. In early abstinence the opioid dependents performed worse than the controls in tests measuring attention and working memory, executive function, and fluid intelligence. Test results of the Culture Fair Intelligence Test (CFIT), testing fluid intelligence, and the Paced Auditory Serial Addition Test (PASAT), measuring attention and working memory, correlated positively with the number of days of abstinence. MRI measurements showed that the relative volume of CSF was significantly larger in opioid dependents, which could also be seen in visual analysis. The Sylvian fissures, expressed by the SFR, were also wider in patients, and this widening correlated negatively with the age of opioid abuse onset. In controls the relative gray matter volume had a positive correlation with composite cognitive performance, but this correlation was not found in opioid dependents in early abstinence. Conclusions: Opioid dependents had wide Sylvian fissures and CSF spaces, indicating frontotemporal atrophy. Dilatation of the Sylvian fissures correlated with the age of abuse onset. During early withdrawal, the cognitive performance of opioid dependents was impaired. While intoxicated, pre-attentive attention to novel stimuli was delayed, and benzodiazepine co-dependence impaired sound detection. All these changes point to disturbances in frontotemporal areas.
Abstract:
Objectives: To evaluate the applicability of visual feedback posturography (VFP) for quantification of postural control, and to characterize the horizontal angular vestibulo-ocular reflex (AVOR) by use of a novel motorized head impulse test (MHIT). Methods: In VFP, subjects standing on a platform were instructed to move their center of gravity to symmetrically placed peripheral targets as fast and accurately as possible. The active postural control movements were measured in healthy subjects (n = 23), and in patients with vestibular schwannoma (VS) before surgery (n = 49), one month (n = 17), and three months (n = 36) after surgery. In MHIT, we recorded head and eye position during motorized head impulses (mean velocity of 170°/s and acceleration of 1550°/s²) in healthy subjects (n = 22), in patients with VS before surgery (n = 38) and about four months afterwards (n = 27). The gain, asymmetry and latency in MHIT were calculated. Results: The intraclass correlation coefficient for VFP parameters during repeated tests was significant (r = 0.78-0.96; p < 0.01), although two of four VFP parameters improved slightly during five test sessions in controls. At least one VFP parameter was abnormal pre- and postoperatively in almost half the patients, and these abnormal preoperative VFP results correlated significantly with abnormal postoperative results. The mean accuracy in postural control in patients was reduced pre- and postoperatively. A significant side difference with VFP was evident in 10% of patients. In the MHIT, the normal gain was close to unity, the asymmetry in gain was within 10%, and the latency was (mean ± standard deviation) 3.4 ± 6.3 milliseconds. Ipsilateral gain or asymmetry in gain was preoperatively abnormal in 71% of patients, whereas it was abnormal in every patient after surgery. Preoperative gain (mean ± 95% confidence interval) was significantly lowered to 0.83 ± 0.08 on the ipsilateral side compared to 0.98 ± 0.06 on the contralateral side. The ipsilateral postoperative mean gain of 0.53 ± 0.05 was significantly different from preoperative gain. Conclusion: The VFP is a repeatable, quantitative method to assess active postural control within individual subjects. The mean postural control in patients with VS was disturbed before and after surgery, although not severely. Side difference in postural control in the VFP was rare. The horizontal AVOR results in healthy subjects and in patients with VS, measured with MHIT, were in agreement with published data obtained using other techniques with head impulse stimuli. The MHIT is a non-invasive method which allows reliable clinical assessment of the horizontal AVOR.
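For reference, AVOR gain is conventionally the ratio of eye to head velocity during the impulse, and side-to-side asymmetry is often expressed as a normalized percentage difference. The forms below are common conventions, not necessarily the exact definitions used in this study.

```latex
% Common conventions for head-impulse testing (not the study's exact formulas).
G = \frac{\dot{\theta}_{\mathrm{eye}}}{\dot{\theta}_{\mathrm{head}}}, \qquad
\text{Asymmetry} = 100 \times
  \frac{G_{\mathrm{contra}} - G_{\mathrm{ipsi}}}{G_{\mathrm{contra}} + G_{\mathrm{ipsi}}}\ \%
```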
Abstract:
Women with a history of pre-eclampsia have an increased risk of cardiovascular disease in later life. The mechanisms which mediate this heightened risk are poorly understood; it was long believed that pre-eclampsia was a separate disease without any connection to other pathologies. The present study was undertaken to investigate the cardiovascular risk milieu, vascular dilatory function and cardiovascular risk factors, in women with pre-eclampsia, 5-6 years after the index pregnancy. The aim was to understand better the cardiovascular risks associated with pre-eclampsia and add tools to the evaluation of cardiovascular risk in women.
The study involved 30 women with previous severe pre-eclampsia and 21 controls. The 2-day study protocol included venous occlusion plethysmography and pulse wave analysis for assessment of vascular dilatory function and central pulse wave reflection, respectively, office and ambulatory blood pressure measurements, assessment of insulin sensitivity, using a minimal model technique, and tests regarding renal function, lipid metabolism, sympathetic activity and inflammation. Vasodilatory function was impaired in women with a history of pre-eclampsia; this was seen in both endothelium-dependent and endothelium-independent vasodilatation. Proteinuria during pre-eclampsia did not predict changes in vasodilatation, and renal function was similar in the two groups. Insulin sensitivity was related to vasodilatation and features of metabolic syndrome, but only in the patient group, despite similar insulin sensitivity in the control group. Arterial pressure was higher in the patient group than in the controls and correlated with endothelin-1 levels in the patient group, whilst the overall difference between the groups was diminished in 24-hour arterial pressure measurements. Additionally, women with previous pre-eclampsia were characterized by increased sympathetic activity. Impaired vasodilatory function at the vascular smooth muscle level seems to characterize clinically healthy women with a history of pre-eclampsia. These vascular changes and the features of metabolic syndrome may be related to the increased risk of cardiovascular disease. Furthermore, increased blood pressure in combination with enhanced sympathetic activity may be additive as regards this risk. These women should be informed about their potential cardiovascular risk profile and the possibilities to minimize it through their own actions. Medical cardiovascular risk assessment in women should include obstetric history.
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance-type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account.
In Chapter 2 a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram-type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite-sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
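Quantile residuals of the kind described are typically computed by pushing each observation through the model's conditional distribution function and then through the inverse standard normal CDF. The sketch below does this for an illustrative Gaussian AR(1) model; the model and parameter values are placeholders, not examples from the thesis.

```python
# Hedged sketch: quantile residuals r_t = Phi^{-1}( F(y_t | past; theta) ).
# Under a correctly specified model with consistently estimated parameters,
# the r_t are approximately iid standard normal, which the derived tests exploit.
import numpy as np
from scipy.stats import norm

def quantile_residuals(y: np.ndarray, phi: float, sigma: float) -> np.ndarray:
    """Quantile residuals for an illustrative Gaussian AR(1): y_t = phi*y_{t-1} + eps_t."""
    cond_mean = phi * y[:-1]                            # conditional mean of y_t given y_{t-1}
    u = norm.cdf(y[1:], loc=cond_mean, scale=sigma)     # probability integral transform
    return norm.ppf(u)                                  # map to the standard normal scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.zeros(500)
    for t in range(1, 500):
        y[t] = 0.5 * y[t - 1] + rng.normal(scale=1.0)
    r = quantile_residuals(y, phi=0.5, sigma=1.0)
    print(r.mean(), r.std())                            # close to 0 and 1 if the model fits
```

Diagnostic tests such as those in Chapters 2-4 then check these residuals for non-normality, autocorrelation, and conditional heteroscedasticity.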
Abstract:
Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than the rejection probabilities of the corresponding asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.
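To make the terminology concrete, the sketch below shows the brute-force Monte Carlo route to a bootstrap test's power function: simulate data under an alternative, run the bootstrap test, and record the rejection frequency. It uses a toy mean-zero test rather than the cointegration-rank LR statistic, and it is the expensive nested-simulation baseline that a computationally cheaper procedure like the one described would aim to avoid.

```python
# Hedged sketch: brute-force Monte Carlo estimate of a bootstrap test's power.
# The statistic and data-generating processes are illustrative stand-ins,
# not the cointegration-rank likelihood ratio test itself.
import numpy as np

def test_stat(x: np.ndarray) -> float:
    """Illustrative statistic: scaled sample mean (stand-in for an LR statistic)."""
    return np.sqrt(len(x)) * x.mean() / x.std(ddof=1)

def bootstrap_pvalue(x: np.ndarray, n_boot: int, rng) -> float:
    """p-value from resampling the data recentred to satisfy the null (mean zero)."""
    stat = test_stat(x)
    centred = x - x.mean()
    boot = [test_stat(rng.choice(centred, size=len(x), replace=True)) for _ in range(n_boot)]
    return float(np.mean(np.abs(boot) >= abs(stat)))

def bootstrap_power(mu: float, n: int = 100, n_mc: int = 200, n_boot: int = 199,
                    alpha: float = 0.05, seed: int = 0) -> float:
    """Rejection frequency of the bootstrap test when data are drawn with mean `mu`."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_mc):
        x = rng.normal(loc=mu, size=n)
        if bootstrap_pvalue(x, n_boot, rng) <= alpha:
            rejections += 1
    return rejections / n_mc

if __name__ == "__main__":
    print("size  (mu = 0):  ", bootstrap_power(0.0))
    print("power (mu = 0.3):", bootstrap_power(0.3))
```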
Abstract:
Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm based approach for determining the optimal input distributions for generating random test vectors is proposed in the paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described. Experimental results based on ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods which are based on gradient descent search, the overheads of the GA in computing the input distributions are larger. To account for the relatively quick convergence of the gradient descent methods, we analyze the landscape of the COP-based cost function. We prove that the cost function is unimodal in the search space. This feature makes the cost function amenable to optimization by gradient-descent techniques as compared to random search methods such as Genetic Algorithms.
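A minimal sketch of the kind of GA search described, evolving a vector of input-signal probabilities, is given below. The fitness function is a toy stand-in; a real implementation would evaluate COP-based detection probabilities on the circuit under test (e.g. an ISCAS-85 benchmark), and all parameter values here are arbitrary.

```python
# Hedged sketch: a small genetic algorithm over input-signal probabilities,
# in the spirit of the approach described above. fitness() is a placeholder
# surrogate, not the COP-based cost function from the paper.
import random

N_INPUTS, POP_SIZE, GENERATIONS, MUT_STD = 8, 30, 60, 0.05

def fitness(probs):
    """Toy surrogate cost: rewards distributions near an arbitrary target profile."""
    target = [0.2 + 0.6 * i / (N_INPUTS - 1) for i in range(N_INPUTS)]  # placeholder
    return -sum((p - t) ** 2 for p, t in zip(probs, target))

def mutate(probs):
    return [min(0.99, max(0.01, p + random.gauss(0.0, MUT_STD))) for p in probs]

def crossover(a, b):
    cut = random.randrange(1, N_INPUTS)
    return a[:cut] + b[cut:]

def run_ga():
    pop = [[random.uniform(0.01, 0.99) for _ in range(N_INPUTS)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP_SIZE // 2]                  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = run_ga()
    print("best input probabilities:", [round(p, 2) for p in best])
```

Because the paper shows the COP-based cost function is unimodal, a gradient-descent search over the same probabilities would typically converge faster than this population-based search.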
Abstract:
Compliant foams are usually characterized by a wide range of desirable mechanical properties. These properties include viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic, and radiation resistance. Some foams contain nano-sized features and are used in small-scale devices. This implies that the characteristic dimensions of foams span multiple length scales, making it difficult to model their mechanical properties. Continuum mechanics-based models capture some salient experimental features, such as the linear elastic regime followed by a non-linear plateau stress regime. However, they lack mesostructural physical details. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods are capable of capturing the physical origins of deformation at smaller scales, but suffer from impractical computational intensity. Capturing deformation at the so-called meso-scale, which is capable of describing the phenomenon at a continuum level but with some physical insights, requires developing new theoretical approaches.
A fundamental question that motivates the modeling of foams is ‘how to extract the intrinsic material response from simple mechanical test data, such as stress vs. strain response?’ A 3D model was developed to simulate the mechanical response of foam-type materials. The novelty of this model includes unique features such as the hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Suggestive links from atomistic simulations of foams were borrowed to formulate a physically informed hardening material input function. Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression [2011, “Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes,” J. Mech. Phys. Solids, 59, pp. 2227–2237, Erratum 60, 1753–1756 (2012)], the property space exploration was advanced to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in experimental data, like
1) The initial linear elastic response.
2) One or more nonlinear instabilities, yielding, and hardening.
The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of their efficiency in extracting material properties. The uniaxial simulation results proved to be a combination of structural and material influences. Out of all deformation paths, flat-punch indentation proved to be superior since it is the most sensitive in capturing the material properties.
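A minimal sketch of what a hardening-softening-hardening input function of the kind mentioned above might look like is given below. The piecewise-linear form and all numbers are illustrative placeholders, not the calibrated material function used in the model.

```python
# Hedged sketch: an illustrative hardening-softening-hardening flow-stress input
# curve of the general shape described above. Control points are placeholders,
# not the physically informed function calibrated from atomistic simulations.
import numpy as np

def flow_stress(eps_p):
    """Piecewise-linear flow stress vs. plastic strain: harden, soften, then harden again."""
    eps_pts =   [0.00, 0.05, 0.20, 0.35, 0.60]   # plastic strain control points
    sigma_pts = [1.00, 1.60, 1.10, 1.20, 3.00]   # illustrative stresses (arbitrary units)
    return np.interp(eps_p, eps_pts, sigma_pts)

if __name__ == "__main__":
    for e in np.linspace(0.0, 0.6, 7):
        print(f"eps_p = {e:.2f}  sigma_y = {flow_stress(e):.3f}")
```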
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD which leads to orders of magnitude speedup over other methods.
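A minimal sketch of the adaptive loop described is given below: keep a posterior over candidate theories, score each remaining test by a myopic informativeness criterion, run the best one, and update on the observed choice. BROAD itself uses the EC2 criterion; plain expected-entropy reduction is used here only because it is the simplest criterion to write down, and all probabilities are toy values.

```python
# Hedged sketch of an adaptive design loop in the spirit of the one described.
# Not the BROAD/EC2 implementation: test selection here uses myopic entropy
# reduction, and the theory/test predictions are arbitrary illustrative numbers.
import numpy as np

# p_choose_A[theory, test]: probability each theory assigns to picking option A
p_choose_A = np.array([
    [0.9, 0.7, 0.2, 0.6],   # theory 0
    [0.5, 0.5, 0.5, 0.5],   # theory 1
    [0.1, 0.3, 0.8, 0.4],   # theory 2
])
n_theories, n_tests = p_choose_A.shape

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(prior, test):
    """Average entropy of the Bayes-updated posterior over the two possible choices."""
    total = 0.0
    for outcome_prob in (p_choose_A[:, test], 1.0 - p_choose_A[:, test]):
        marginal = np.dot(prior, outcome_prob)           # P(outcome)
        posterior = prior * outcome_prob / marginal
        total += marginal * entropy(posterior)
    return total

rng = np.random.default_rng(1)
true_theory = 0                                          # simulated subject's type
prior = np.full(n_theories, 1.0 / n_theories)
remaining = list(range(n_tests))
while remaining and entropy(prior) > 0.1:
    best = min(remaining, key=lambda t: expected_posterior_entropy(prior, t))
    remaining.remove(best)
    chose_A = rng.random() < p_choose_A[true_theory, best]
    likelihood = p_choose_A[:, best] if chose_A else 1.0 - p_choose_A[:, best]
    prior = prior * likelihood
    prior /= prior.sum()
    print(f"ran test {best}, posterior = {np.round(prior, 2)}")
```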
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA were incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories showed limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
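For reference, the standard textbook forms of the discount functions compared in these experiments are collected below; the parameter names (e.g. β, δ) may differ from the (α, β) notation used above, and these are generic expressions rather than the thesis' exact specifications.

```latex
% Standard discount functions over delay t (textbook parameterizations).
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\mathrm{quasi\text{-}hyp}}(t) =
  \begin{cases}
    1 & t = 0,\\
    \beta\,\delta^{t} & t > 0,
  \end{cases}
\qquad
D_{\mathrm{gen\text{-}hyp}}(t) = (1 + \alpha t)^{-\beta/\alpha}
```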
We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinctly different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
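The building blocks of such a model are, in generic textbook form, a reference-dependent value function with a loss-aversion coefficient and a logit choice rule; the expressions below are illustrative, not the estimated specification from the study.

```latex
% Generic reference-dependent (loss-averse) value function and logit choice rule;
% illustrative forms, not the estimated model from the study.
v(x \mid r) =
  \begin{cases}
    x - r & x \ge r,\\
    \lambda\,(x - r) & x < r, \quad \lambda > 1,
  \end{cases}
\qquad
P(\text{choose } j) = \frac{e^{\,v_j}}{\sum_{k} e^{\,v_k}}
```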
In future work, BROAD could be applied widely to test different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.