987 results for hypotheses



Abstract:

This study seeks to understand the relationship between the Executive and the Legislature in Brazil after the promulgation of the 1988 Constitution, with a focus on the separation and independence of powers. From the perspective of the Chamber of Deputies, examination of the constitutional and procedural provisions showed that the internal organization of the National Congress centralizes decisions in the party leaders, and that for presidential initiatives to succeed, the President of the Republic must form coalitions with legislators, sustained through the negotiation of offices and the release of budget amendments. Based on hypotheses drawn from theory (in particular public policy, the electoral connection, and electoral cycles), it was found that in some cases legislators' interests prevail despite the large presidential coalition. The method used was the analysis of provisional measures rejected between 2001 and 2010. The study concludes that the President's decision-making power is not inherent in his legislative prerogatives but rests on the consent of legislators, especially the party leaders. Although this situation restricts the individual action of legislators, it strengthens the political parties and their leaders.


Abstract:

Over the past four decades, the state of Hawaii has developed a system of eleven Marine Life Conservation Districts (MLCDs) to conserve and replenish marine resources around the state. Initially established to provide opportunities for public interaction with the marine environment, these MLCDs vary in size, habitat quality, and management regimes, providing an excellent opportunity to test hypotheses concerning marine protected area (MPA) design and function using multiple discrete sampling units. NOAA/NOS/NCCOS/Center for Coastal Monitoring and Assessment’s Biogeography Team developed digital benthic habitat maps for all MLCD and adjacent habitats. These maps were used to evaluate the efficacy of existing MLCDs for biodiversity conservation and fisheries replenishment, using a spatially explicit stratified random sampling design. Coupling the distribution of habitats and species habitat affinities using GIS technology elucidates species habitat utilization patterns at scales that are commensurate with ecosystem processes and is useful in defining essential fish habitat and biologically relevant boundaries for MPAs. Analysis of benthic cover validated the a priori classification of habitat types and provided justification for using these habitat strata to conduct stratified random sampling and analyses of fish habitat utilization patterns. Results showed that the abundance and distribution of species and assemblages exhibited strong correlations with habitat types. Fish assemblages in the colonized and uncolonized hardbottom habitats were found to be most similar among all of the habitat types. Much of the macroalgae habitat sampled was macroalgae growing on hard substrate, and as a result showed similarities with the other hardbottom assemblages. The fish assemblages in the sand habitats were highly variable but distinct from the other habitat types. Management regime also played an important role in the abundance and distribution of fish assemblages.
MLCDs had higher values for most fish assemblage characteristics (e.g. biomass, size, diversity) compared with adjacent fished areas and Fisheries Management Areas (FMAs) across all habitat types. In addition, apex predators and other targeted resource species were more abundant and larger in the MLCDs, illustrating the effectiveness of these closures in conserving fish populations. Habitat complexity, quality, size and level of protection from fishing were important determinants of MLCD effectiveness with respect to their associated fish assemblages. (PDF contains 217 pages)
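The stratified random sampling design mentioned above can be sketched in a few lines; the habitat strata, areas, and site budget below are invented placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical habitat strata with mapped areas (km^2) -- placeholders,
# not the study's values. Proportional allocation assigns survey sites
# to each stratum in proportion to its share of total mapped area.
strata = {"colonized_hard": 4.0, "uncolonized_hard": 2.5,
          "macroalgae": 2.0, "sand": 1.5}
n_sites = 60

total_area = sum(strata.values())
allocation = {h: round(n_sites * a / total_area) for h, a in strata.items()}

# Draw random site coordinates inside each stratum (mock unit-square extents).
sites = {h: rng.uniform(0.0, 1.0, size=(allocation[h], 2)) for h in strata}
```

Sampling within strata defined by mapped habitat, rather than uniformly over the whole district, is what lets habitat-specific fish assemblage statistics be estimated with balanced effort.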


Abstract:

Executive Summary: Tropical marine ecosystems in the Caribbean region are inextricably linked through the movement of pollutants, nutrients, diseases, and other stressors, which threaten to further degrade coral reef communities. The magnitude of change that is occurring within the region is considerable, and solutions will require investigating pros and cons of networks of marine protected areas (MPAs), cooperation of neighboring countries, improved understanding of how external stressors degrade local marine resources, and ameliorating those stressors. Connectivity can be broadly defined as the exchange of materials (e.g., nutrients and pollutants), organisms, and genes and can be divided into: 1) genetic or evolutionary connectivity, which concerns the exchange of organisms and genes; 2) demographic connectivity, which is the exchange of individuals among local groups; and 3) oceanographic connectivity, which includes the flow of materials and the circulation patterns and variability that underpin much of these exchanges. Presently, we understand little about connectivity at specific locations beyond model outputs, and yet we must manage MPAs with connectivity in mind. A key to successful MPA management is how to most effectively work with scientists to acquire the information managers need. Oceanographic connectivity is poorly understood, and even less is known about the shape of the dispersal curve for most species. Dispersal kernels differ for various systems, species, and life histories and are likely highly variable in space and time. Furthermore, the implications of different dispersal kernels for the population dynamics and management of species are unknown. However, small dispersal kernels are the norm, not the exception. Linking patterns of dispersal to management options is difficult given the present state of knowledge. The behavioral component of larval dispersal has a major impact on where larvae settle.
Individual larval behavior and life history details are required to produce meaningful simulations of population connectivity. Biological inputs are critical determinants of dispersal outcomes beyond what can be gleaned from models of passive dispersal. There is considerable temporal and spatial variation in connectivity patterns. New models are increasingly being developed, but these must be validated to understand upstream-downstream neighborhoods, dispersal corridors, stepping stones, and source/sink dynamics. At present, models are mainly useful for providing generalities and generating hypotheses. Low-technology approaches such as drifter vials and oceanographic drogues are useful, affordable options for understanding local connectivity. The “silver bullet” approach to MPA design may not be possible for several reasons. Genetic connectivity studies reveal divergent population genetic structures despite similar larval life histories. Historical stochasticity in reproduction and/or recruitment likely has important, long-lasting consequences on present-day genetic structure. (PDF has 200 pages.)


Abstract:

Although North Sea herring stocks appeared to have recovered from low levels since the mid-1990s, we have recently observed a new decline in the spawning stock biomass. This is mainly caused by four consecutive years of poor reproduction: while the adults produce enough eggs and larvae, only few survive to maturity. The reasons for the poor recruitment are not clear. In this paper we investigate the influence of climatic conditions, in particular the North Atlantic Oscillation (NAO), which apparently triggers the interaction between the size of the spawning stock and the abundance of larvae. We show that approximately 60% of the recruitment variance can be explained by specific constellations of spawning stock size and climatic conditions. Besides physical factors, we also discuss several working hypotheses shedding light on the influence of biological variables on the fluctuation of herring offspring.
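The "variance explained" statement corresponds to an ordinary least-squares fit of recruitment on spawning stock size and a climate index; a minimal sketch with synthetic data (all coefficients, units, and the noise level are assumptions for illustration, not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 40  # synthetic time series; all numbers below are illustrative

ssb = rng.uniform(0.5, 2.0, n_years)   # spawning stock biomass (arbitrary units)
nao = rng.normal(0.0, 1.0, n_years)    # winter NAO index
recruitment = 1.5 * ssb - 0.8 * nao + rng.normal(0.0, 0.5, n_years)

# Ordinary least squares of recruitment on [SSB, NAO, intercept],
# then the fraction of recruitment variance explained (R^2).
X = np.column_stack([ssb, nao, np.ones(n_years)])
coef, *_ = np.linalg.lstsq(X, recruitment, rcond=None)
resid = recruitment - X @ coef
r_squared = 1.0 - resid.var() / recruitment.var()
```

With low noise, `r_squared` sits well above 0.5, the same kind of figure the abstract reports for the joint effect of stock size and climate.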


Abstract:

All available longline data on skipjack captured in the Pacific Ocean by Japanese research vessels (1949-1965) and from incidental skipjack catches by Japanese commercial vessels (1956-1964) were analyzed. As skipjack are not specifically sought by longline vessels, the data are limited. Considering this, it was found that: longline gear captures skipjack of a wider size-range and is more selective for larger skipjack than conventional fishing methods, i.e. pole-and-line and purse-seine; skipjack are widely and almost continuously distributed across the Pacific; throughout the year average hook-rates are greater in the southeastern Pacific than in the northwestern Pacific; areas of high hook-rate shift south during the second and third quarters and north during the first and fourth quarters; in the western Pacific the north-south range of the catch distribution was greatest in the first and fourth quarters; skipjack hook-rates are relatively high in the northwestern Pacific east of Japan only during the first and fourth quarters; the highest hook-rates were recorded in extensive areas along the equator (from 10°N to 20°S, between approximately 155°W-100°W); generally more skipjack were captured by research longline gear in water temperature ranges approaching both the upper and lower temperature limits of skipjack distribution (18-21°C and 26-28°C) than is the case in surface skipjack fisheries; tentative comparisons of longline skipjack catch distributions with Pacific current systems suggest low skipjack abundance in both North Pacific Central and North Pacific Equatorial water; the sex ratio was 95 males : 63 females in a small sample of skipjack examined; longlines capture skipjack of three, and possibly more, age groups; in the skipjack size-composition samples studied, the smaller modal group (65 cm) observed in January-March in the northwestern Pacific (160°E-180°E and 20°N-45°N) corresponds in size to the larger modal group appearing in the late-summer surface fishery off the Izu-Bonin Islands southeast of Japan, and also compares in modal size to the skipjack taken in the Hawaiian fishery in springtime; the analysis of skipjack catches by hook position on the longline and by death-rate studies indicates that part of the catch is made while the gear is in motion near the surface, and a lesser part is made when the gear is stabilized at a depth of 70 to 140 m. A brief discussion is given, in the light of the new information presented, of several hypotheses by other authors concerning the population structure and migration of skipjack in the Pacific Ocean. (PDF contains 100 pages.)


Abstract:

Space-time correlations, or Eulerian two-point two-time correlations of fluctuating velocities, are analytically and numerically investigated in turbulent shear flows. An elliptic model for the space-time correlations in the inertial range is developed from similarity assumptions on the iso-correlation contours: they share a uniform preference direction and a constant aspect ratio. The similarity assumptions are justified using the Kolmogorov similarity hypotheses and verified using direct numerical simulation (DNS) of turbulent channel flows. The model relates the space-time correlations to the space correlations via the convection and sweeping characteristic velocities. The analytical expressions for the convection and sweeping velocities are derived from the Navier-Stokes equations for homogeneous turbulent shear flows, where the convection velocity is represented by the mean velocity and the sweeping velocity is the sum of the random sweeping velocity and the shear-induced velocity. This suggests that, unlike Taylor’s model where the convection velocity is dominant and Kraichnan and Tennekes’ model where the random sweeping velocity is dominant, the decorrelation time scales of the space-time correlations in turbulent shear flows are determined by the convection velocity, the random sweeping velocity, and the shear-induced velocity. This model predicts a universal form of the space-time correlations with the two characteristic velocities. The DNS of turbulent channel flows supports the prediction: the correlation functions exhibit a fairly good collapse when plotted against the normalized space and time separations defined by the elliptic model.
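In symbols, the elliptic model described above posits that the space-time correlation collapses onto the spatial correlation along elliptic iso-correlation contours set by a convection velocity U and a sweeping velocity V (a schematic form with generic symbols, not necessarily the paper's exact notation):

```latex
R(r,\tau) \approx R(r_E, 0), \qquad r_E^{2} = (r - U\tau)^{2} + (V\tau)^{2}.
```

Taylor's frozen-flow model corresponds to the special case V = 0, and the Kraichnan-Tennekes random-sweeping picture to U = 0, consistent with the comparison drawn in the abstract.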


Abstract:

Part I of the thesis describes the olfactory searching and scanning behaviors of rats in a wind tunnel, and a detailed movement analysis of terrestrial arthropod olfactory scanning behavior. Olfactory scanning behaviors in rats may be a behavioral correlate to hippocampal place cell activity.

Part II focuses on the organization of olfactory perception, what it suggests about a natural order for chemicals in the environment, and what this in turn suggests about the organization of the olfactory system. A model of odor quality space (analogous to the "color wheel") is presented. This model defines relationships between odor qualities perceived by human subjects based on a quantitative similarity measure. Compounds containing carbon, nitrogen, or sulfur elicit odors that are contiguous in this odor representation, which thus allows one to predict the broad class of odor qualities a compound is likely to elicit. Based on these findings, a natural organization for olfactory stimuli is hypothesized: the order provided by the metabolic process. This hypothesis is tested by comparing compounds that are structurally similar, perceptually similar, and metabolically similar in a psychophysical cross-adaptation paradigm. Metabolically similar compounds consistently evoked shifts in odor quality and intensity under cross-adaptation, while compounds that were structurally similar or perceptually similar did not. This suggests that the olfactory system may process metabolically similar compounds using the same neural pathways, and that metabolic similarity may be the fundamental metric about which olfactory processing is organized. In other words, the olfactory system may be organized around a biological basis.

The idea of a biological basis for olfactory perception represents a shift in how olfaction is understood. The biological view has predictive power while the current chemical view does not, and the biological view provides explanations for some of the most basic questions in olfaction that remain unanswered in the chemical view. Existing data do not disprove a biological view, and are consistent with basic hypotheses that arise from this viewpoint.


Abstract:

The primary focus of this thesis is on the interplay of descriptive set theory and the ergodic theory of group actions. This incorporates the study of turbulence and Borel reducibility on the one hand, and the theory of orbit equivalence and weak equivalence on the other. Chapter 2 is joint work with Clinton Conley and Alexander Kechris; we study measurable graph combinatorial invariants of group actions and employ the ultraproduct construction as a way of constructing various measure preserving actions with desirable properties. Chapter 3 is joint work with Lewis Bowen; we study the property MD of residually finite groups, and we prove a conjecture of Kechris by showing that under general hypotheses property MD is inherited by a group from one of its co-amenable subgroups. Chapter 4 is a study of weak equivalence. One of the main results answers a question of Abért and Elek by showing that within any free weak equivalence class the isomorphism relation does not admit classification by countable structures. The proof relies on affirming a conjecture of Ioana by showing that the product of a free action with a Bernoulli shift is weakly equivalent to the original action. Chapter 5 studies the relationship between mixing and freeness properties of measure preserving actions. Chapter 6 studies how approximation properties of ergodic actions and unitary representations are reflected group theoretically and also operator algebraically via a group's reduced C*-algebra. Chapter 7 is an appendix which includes various results on mixing via filters and on Gaussian actions.


Abstract:

The goal of the Puget Sound Nearshore Ecosystem Restoration Project (PSNERP) is to improve system-wide functionality of nearshore ecosystem processes. To achieve that goal, PSNERP plans to strategically restore nearshore sites throughout Puget Sound. PSNERP scientists are assessing changes to the nearshore, and will recommend an environmentally strategic restoration portfolio. Yet PSNERP also needs stakeholder input to design a socially strategic portfolio. This research investigates the values and preferences of stakeholders in the Whidbey Sub-Basin of Puget Sound to help PSNERP be both socially and environmentally strategic. This investigation may be repeated in the six other Puget Sound Sub-Basins. The results will guide restoration portfolio design and future stakeholder involvement activities. This study examines four areas of stakeholder values and preferences: 1) beliefs about the causes, solutions, and severity of nearshore problems; 2) priorities for nearshore features, shoreforms, developments, and restoration objectives; 3) thoughts about ecosystem services and trade-offs among them; and 4) visions of a future, restored Puget Sound nearshore and the role of science in attaining this vision. The study is framed by two hypotheses from the Advocacy Coalition Framework (ACF), which suggests that groups of policy advocates form around shared “policy core beliefs” which can transcend traditional categories of stakeholders. (PDF contains 3 pages)


Abstract:

Deference to committees in Congress has been a much-studied phenomenon for close to 100 years. This deference can be characterized as the unwillingness of a potentially winning coalition on the House floor to impose its will on a small minority, a standing committee. The congressional scholar is then faced with two problems: observing such deference to committees, and explaining it. Shepsle and Weingast have proposed the existence of an ex-post veto for standing committees as an explanation of committee deference. They claim that as conference reports in the House and Senate are considered under a rule that does not allow amendments, the conferees enjoy agenda-setting power. In this paper I describe a test of such a hypothesis (along with competing hypotheses regarding the effects of the conference procedure). A random-utility model is utilized to estimate legislators' ideal points on appropriations bills from 1973 through 1980. I prove two things: 1) that committee deference cannot be said to be a result of the conference procedure; and moreover 2) that committee deference does not appear to exist at all.
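A random-utility model of the kind described can be sketched as a one-dimensional quadratic-utility logit, with a legislator's ideal point recovered by maximum likelihood; the bill positions and vote record below are fabricated for illustration, not data from the paper.

```python
import numpy as np

# Hypothetical roll calls: (yea_position, nay_position) on one dimension,
# and one legislator's fabricated vote record (1 = yea, 0 = nay).
bills = np.array([(-0.5, 0.5), (0.8, -0.2), (0.1, 0.9), (-0.9, 0.0), (0.4, -0.6)])
votes = np.array([1, 1, 0, 0, 1])

def log_lik(ideal):
    """Quadratic spatial utility with logistic error: P(yea) = logit(u_yea - u_nay)."""
    u_yea = -(ideal - bills[:, 0]) ** 2
    u_nay = -(ideal - bills[:, 1]) ** 2
    p_yea = 1.0 / (1.0 + np.exp(-(u_yea - u_nay)))
    return np.sum(votes * np.log(p_yea) + (1 - votes) * np.log(1.0 - p_yea))

# Maximum likelihood over a grid recovers the ideal point estimate.
grid = np.linspace(-1.5, 1.5, 301)
ideal_hat = grid[np.argmax([log_lik(x) for x in grid])]
```

With many bills per legislator, the same likelihood can be maximized jointly over all members to place the chamber, committees, and conferees on a common scale, which is what the deference test requires.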


Abstract:

Flies are particularly adept at balancing the competing demands of delay tolerance, performance, and robustness during flight, which invites thoughtful examination of their multimodal feedback architecture. This dissertation examines stabilization requirements for inner-loop feedback strategies in the flapping flight of Drosophila, the fruit fly, against the backdrop of sensorimotor transformations present in the animal. Flies have evolved multiple specializations to reduce sensorimotor latency, but sensory delay during flight is still significant on the timescale of body dynamics. I explored the effect of sensor delay on flight stability and performance for yaw turns using a dynamically-scaled robot equipped with a real-time feedback system that performed active turns in response to measured yaw torque. The results show a fundamental tradeoff between sensor delay and permissible feedback gain, and suggest that fast mechanosensory feedback provides a source of active damping that complements the damping contributed by passive effects. Presented in the context of these findings, a control architecture whereby a haltere-mediated inner-loop proportional controller provides damping for slower visually-mediated feedback is consistent with tethered-flight measurements, free-flight observations, and engineering design principles. Additionally, I investigated how flies adjust stroke features to regulate and stabilize level forward flight. The results suggest that few changes to hovering kinematics are actually required to meet steady-state lift and thrust requirements at different flight speeds, and the primary driver of equilibrium velocity is the aerodynamic pitch moment. This finding is consistent with prior hypotheses and observations regarding the relationship between body pitch and flight speed in fruit flies. The results also show that the dynamics may be stabilized with additional pitch damping, but the magnitude of required damping increases with flight speed.
I posit that differences in stroke deviation between the upstroke and downstroke might play a critical role in this stabilization. Fast mechanosensory feedback of the pitch rate could enable active damping, which would inherently exhibit gain scheduling with flight speed if pitch torque is regulated by adjusting stroke deviation. Such a control scheme would provide an elegant solution for flight stabilization across a wide range of flight speeds.
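The delay-gain tradeoff can be illustrated with a toy delayed-feedback model: a yaw rate ω obeying ω̇(t) = −k·ω(t − δ), i.e. a proportional damper fed back through a pure sensor delay. The gain, delay values, and first-order Euler integration are my assumptions for illustration, not parameters from the dissertation; for this idealized model the stability boundary is k·δ = π/2.

```python
import numpy as np

def simulate_delayed_damping(k, delay, dt=5e-4, t_end=5.0, omega0=1.0):
    """Euler-integrate omega'(t) = -k * omega(t - delay): proportional yaw
    damping fed back through a pure sensor delay (toy model; the gain and
    delay values used below are assumptions, not measured fly parameters)."""
    n = int(t_end / dt)
    lag = int(round(delay / dt))
    omega = np.empty(n + 1)
    omega[0] = omega0
    for i in range(n):
        delayed = omega[i - lag] if i >= lag else omega0  # history held at omega0
        omega[i + 1] = omega[i] - dt * k * delayed
    return omega

# Same gain, two sensor delays. The short delay damps the turn;
# the long one (k * delay > pi/2) destabilizes it.
stable = simulate_delayed_damping(k=10.0, delay=0.05)    # k*delay = 0.5
unstable = simulate_delayed_damping(k=10.0, delay=0.20)  # k*delay = 2.0
```

Raising the gain with the delay fixed hits the same boundary, which is the fundamental tradeoff between sensor delay and permissible feedback gain noted in the abstract.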


Abstract:

Seismic reflection methods have been extensively used to probe the Earth's crust and suggest the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full wave synthetics.

Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival time decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault zone block. On the northeast, the fault zone is bounded by a fault dipping into the San Andreas, which includes slices of serpentinized ultramafics, intersecting it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.

In 1985 the Calcrust consortium designed a survey in the eastern Mojave desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer. Its effects could be mitigated through special acquisition and processing techniques. Data obtained from industry showed that quality data could be obtained from areas having a deeper, older sedimentary cover, causing a re-definition of the geologic objectives. Long offset stationary arrays were designed to provide reversed, wider angle coverage of the deep crust over parts of the survey. The preliminary field tests and constant monitoring of data quality and parameter adjustment allowed 108 km of excellent crustal data to be obtained.

This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector. The thickness and velocity contrast of the layering can be calculated from the spectral dispersion, to discriminate between structures resulting from broad scale or local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lamé's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts. Porous zones or the juxtaposition of units of mutually distant origin are indicated. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle include increases in σ. The Moho itself includes ~400 m layering having a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River. This indicates that Miocene extension there either thinned just the basal zone, or that the basal zone developed regionally after the extensional event.
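The peak-frequency effect in the first step has a simple plane-wave intuition: reflections from the top and bottom of a thin layer arrive separated by a two-way delay that shrinks with incidence angle, so the frequency of maximum constructive interference rises with offset. A sketch, assuming an opposite-polarity reflection pair and made-up thickness and interval velocity (not the Mojave values):

```python
import numpy as np

d, v = 50.0, 3000.0  # assumed layer thickness (m) and interval velocity (m/s)

def peak_frequency(theta_deg):
    """First constructive-interference peak for a thin layer and a plane wave
    at transmission angle theta inside the layer. Two-way delay between the
    top and bottom reflections is dt = 2*d*cos(theta)/v; for an opposite-
    polarity pair the response ~ |sin(pi*f*dt)| first peaks at f = 1/(2*dt)."""
    dt = 2.0 * d * np.cos(np.radians(theta_deg)) / v
    return 1.0 / (2.0 * dt)

# Peak frequency increases with offset (incidence angle), as observed.
f_near, f_far = peak_frequency(0.0), peak_frequency(45.0)
```

Inverting this relation, an observed peak-frequency-versus-offset trend constrains the product of thickness and slowness, which is how the layering parameters are recovered from spectral dispersion.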


Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD) that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise, or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD which leads to orders of magnitude speedup over other methods.
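The adaptive idea can be shown with a toy stand-in: a plain expected-posterior-entropy (information gain) criterion rather than EC2, only two candidate theories, and a logit response model. All of these are simplifications of BROAD, chosen to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy candidate theories of risky choice (stand-ins for the richer
# model classes compared in the thesis): expected value vs. CRRA utility.
def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def crra(lottery):  # u(x) = sqrt(x), i.e. CRRA with rho = 0.5 (assumed)
    return sum(p * np.sqrt(x) for p, x in lottery)

theories = [expected_value, crra]

# Candidate tests: a sure amount s versus a 50/50 gamble over s*m or 0.
tests = [([(1.0, s)], [(0.5, s * m), (0.5, 0.0)])
         for s in (10.0, 20.0, 40.0) for m in (1.6, 2.0, 2.4)]

def p_safe(theory, safe, gamble, beta=1.0):
    # Logit response noise on the utility difference.
    return 1.0 / (1.0 + np.exp(-beta * (theory(safe) - theory(gamble))))

def entropy(p):
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(post, safe, gamble):
    probs = np.array([p_safe(t, safe, gamble) for t in theories])
    h = 0.0
    for like in (probs, 1.0 - probs):   # the two possible responses
        marginal = like @ post
        h += marginal * entropy(like * post / marginal)
    return h

posterior = np.array([0.5, 0.5])
truth = crra  # the simulated subject is risk averse
for _ in range(30):
    # Greedily pick the test whose answer is expected to be most informative.
    safe, gamble = min(tests, key=lambda t: expected_posterior_entropy(posterior, *t))
    chose_safe = rng.random() < p_safe(truth, safe, gamble)
    like = np.array([p_safe(t, safe, gamble) if chose_safe
                     else 1.0 - p_safe(t, safe, gamble) for t in theories])
    posterior = like * posterior / (like @ posterior)
```

After a few dozen adaptively chosen choices, the posterior concentrates on the risk-averse theory; EC2 plays the role of this selection criterion in BROAD, with guarantees that plain information gain lacks under noise.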

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments' models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
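The competing discount functions, and the choice they are asked to predict, can be sketched as follows. Parameter names and defaults are the conventional ones from the literature (the thesis writes quasi-hyperbolic discounting with (α, β); the sketch uses the common (β, δ) labels), and the numbers are illustrative only.

```python
def exponential(t, delta=0.9):
    """Time-consistent exponential discounting."""
    return delta ** t

def hyperbolic(t, k=1.0):
    """Simple hyperbolic discounting: 1 / (1 + k t)."""
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    """Present bias: immediate rewards are undiscounted; every delayed
    reward takes a one-off penalty beta on top of exponential discounting."""
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, a=1.0, b=2.0):
    """Loewenstein-Prelec generalized hyperbola: (1 + a t)^(-b/a)."""
    return (1.0 + a * t) ** (-b / a)

def prefers_later(t_soon, x_soon, t_late, x_late, discount):
    """True if the larger-later payoff beats the smaller-sooner one."""
    return discount(t_late) * x_late > discount(t_soon) * x_soon
```

The hedged example below the definitions is the classic diagnostic: a hyperbolic discounter can prefer 10 now over 15 tomorrow, yet prefer 15 in 11 days over 10 in 10 days, a preference reversal an exponential discounter can never exhibit.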

In these models the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
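A related, well-known illustration of how randomness in the discounting process aggregates into hyperbolic behaviour (this is a standard heterogeneous-rate example, not the thesis's specific positively-dependent subjective-time model): if an agent discounts exponentially but the effective rate r is Gamma-distributed, the expected discount factor E[exp(-r t)] = (1 + t/rate)^(-shape) is exactly a generalized-hyperbolic discount function. The Monte Carlo check below verifies the closed form.

```python
import math
import random

def mc_discount(t, shape=2.0, rate=3.0, n=200_000, seed=0):
    """Monte Carlo estimate of E[exp(-r t)] with r ~ Gamma(shape, rate),
    standing in for randomness in the subjective passage of time."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        r = rng.gammavariate(shape, 1.0 / rate)  # gammavariate takes (shape, scale)
        total += math.exp(-r * t)
    return total / n

def closed_form(t, shape=2.0, rate=3.0):
    """Gamma Laplace transform: E[exp(-r t)] = (1 + t/rate)^(-shape),
    i.e. a generalized-hyperbolic discount function."""
    return (1.0 + t / rate) ** (-shape)
```

So even "rational" exponential discounting, filtered through an uncertain internal clock or rate, is observationally hyperbolic in aggregate.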

We also test the predictions of behavioural theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone would explain. More importantly, when the item is no longer discounted, demand for its close substitutes should increase disproportionately. We tested this prediction by fitting a discrete choice model with a loss-averse utility function to data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreases with consumers' experience. We outline the policy implications of consumer loss aversion, and strategies for competitive pricing.
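The mechanism can be sketched with a minimal reference-dependent logit model. This is a generic illustration under assumed functional forms and parameters (linear price sensitivity, a recent-price reference point, loss-aversion coefficient 2), not the estimated specification from the retailer data.

```python
import math

def gain_loss_utility(price, ref_price, beta_price=1.0, lam=2.0):
    """Reference-dependent utility of buying at `price` given a reference
    price (e.g. the recently observed price). A price below the reference
    feels like a gain; a price above it feels like a loss that looms
    `lam` times larger."""
    base = -beta_price * price
    diff = ref_price - price          # > 0: the current price feels like a gain
    gain_loss = diff if diff >= 0 else lam * diff
    return base + gain_loss

def logit_shares(utilities):
    """Multinomial-logit choice probabilities over the items, plus an
    outside no-purchase option with utility normalized to 0."""
    expu = [math.exp(u) for u in utilities] + [1.0]
    z = sum(expu)
    return [e / z for e in expu]
```

Under this utility, a temporary discount (reference above the current price) boosts demand beyond the pure price effect, and when the price snaps back the implied loss depresses demand by `lam` times the earlier boost, pushing consumers toward substitutes, which is the asymmetry the field data are tested against.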

In future work, BROAD could be widely applied to testing other behavioural models, e.g. in social preferences and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The aim of this dissertation is to examine Freud's causal theory of the mental and its use in explaining psychic conflict. We highlight the dual dimension of Freudian psychic causality, which oscillates between a mechanistic causality and an intentional causality. We argue that the most coherent way to defend the thesis of psychic causality is to describe it as intentional, using a psychological vocabulary. To support this idea, the strict conception of causality held by Wittgenstein is set aside, and Davidson's notion of mental cause is endorsed as the one most readily articulable with Freud's hypotheses. The question is analysed at three moments of Freud's work. In the first topography, Freud uses a hybrid vocabulary, describing the psyche both in terms of a-rational causes and in terms of intentional causes. In the second topography, the psyche increasingly takes on an intentional description, and the a-rational cause of drive energy, initially presented as a motor of the psyche, gives way to anguish as an intentional affect that compels the ego to find a solution to psychic conflicts.