991 results for reference-dependent preferences


Relevance:

80.00%

Publisher:

Abstract:

This book is the result of a critical reading of the history and evolution of important theoretical discussions about liberalism and collective action. It highlights some relationships between the contributions of various authors in economics, sociology, philosophy, political science, and psychology. In particular, it offers an analytical presentation of some of the most relevant relationships between individual liberties and feasible opportunities in processes of individual and collective choice and action. The author's own contribution yields three findings, which may open new perspectives for researchers interested in these topics; these are: a conceptual proposal on the characteristics and requirements of individual freedom.

Relevance:

80.00%

Publisher:

Abstract:

This work proposes alternative approaches for the consistent estimation of an abstract measure that is crucial to the study of intertemporal decisions, which in turn is central to much of the research in macroeconomics and finance: the Stochastic Discount Factor (SDF). Using the Pricing Equation, a novel consistent estimator of the SDF is constructed, which relies on the fact that its logarithm is common to all assets in an economy. The resulting estimator is very simple to compute, does not depend on strong economic assumptions, is suitable for testing various preference specifications and for investigating intertemporal-substitution puzzles, and can serve as a basis for constructing an estimator of the risk-free rate. Alternative identification strategies are applied, and a parallel is drawn between them and strategies from other methodologies. By adding structure to the initial environment, two settings are presented in which the asymptotic distribution can be derived. Finally, the proposed methodologies are applied to US and Brazilian data sets. Preference specifications commonly employed in the literature, as well as a class of state-dependent preferences, are tested. The results are particularly interesting for the US economy. Formal tests do not reject preference specifications common in the literature, and estimates of the relative risk-aversion coefficient lie between 1 and 2 and are statistically indistinguishable from 1. In addition, for the class of state-dependent preferences, highly dynamic trajectories are estimated for this coefficient; the trajectories are confined to the interval [1.15, 2.05], and the hypothesis of a constant trajectory is rejected.
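
For context on the object being estimated, the pricing equation at the heart of such estimators can be stated compactly; the notation below is standard in asset pricing, not copied from the thesis:

```latex
% Pricing equation: every asset i's gross return R^i_{t+1} is priced
% by the same stochastic discount factor M_{t+1}. Writing
% m_{t+1} = log M_{t+1}, the fact that m_{t+1} is common to all N
% assets is what the cross-section of returns can be made to reveal.
\[
  \mathbb{E}_{t}\!\left[M_{t+1}\,R^{i}_{t+1}\right] = 1,
  \qquad i = 1,\dots,N,
  \qquad m_{t+1} = \log M_{t+1}.
\]
```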

Relevance:

80.00%

Publisher:

Abstract:

Meditation is a self-induced and willfully initiated practice that alters the state of consciousness. The meditation practice of Zazen, like many other meditation practices, aims at disregarding intrusive thoughts while controlling body posture. It is an open monitoring meditation characterized by detached moment-to-moment awareness and reduced conceptual thinking and self-reference. Which brain areas differ in electric activity during Zazen compared to task-free resting? Since scalp electroencephalography (EEG) waveforms are reference-dependent, conclusions about the localization of active brain areas are ambiguous. Computing intracerebral source models from the scalp EEG data solves this problem. In the present study, we applied source modeling using low resolution brain electromagnetic tomography (LORETA) to 58-channel scalp EEG data recorded from 15 experienced Zen meditators during Zazen and no-task resting. Zazen compared to no-task resting showed increased alpha-1 and alpha-2 frequency activity in an exclusively right-lateralized cluster extending from prefrontal areas including the insula to parts of the somatosensory and motor cortices and temporal areas. Zazen also showed decreased alpha and beta-2 activity in the left angular gyrus and decreased beta-1 and beta-2 activity in a large bilateral posterior cluster comprising the visual cortex, the posterior cingulate cortex and the parietal cortex. The results include parts of the default mode network and suggest enhanced automatic memory and emotion processing, reduced conceptual thinking and self-reference on a less judgmental, i.e., more detached moment-to-moment basis during Zazen compared to no-task resting.

Relevance:

80.00%

Publisher:

Abstract:

Purpose – This article aims to investigate whether intermediaries reduce loss aversion in the context of a high-involvement, non-frequently purchased hedonic product (tourism packages).

Design/methodology/approach – The study incorporates the reference-dependent model into a multinomial logit model with random parameters, which controls for heterogeneity and allows representation of different correlation patterns between non-independent alternatives.

Findings – Differentiated loss aversion is found: consumers buying high-involvement, non-frequently purchased hedonic products are less loss averse when using an intermediary than when dealing with each provider separately and booking their services independently. This result can be taken as identifying consumer-based added value provided by the intermediaries.

Practical implications – Knowing the effect of an increase in their prices is crucial for tourism collective brands (e.g. "sun and sea", "inland", "green destinations", "World Heritage destinations"). This is especially relevant now that many destinations have lowered prices to attract tourists (although, in the future, they will have to put prices back up to their normal levels). The negative effect of raising prices can be absorbed more easily via indirect channels than by individual providers, as the influence of loss aversion is lower for the former than for the latter. The key implication is that intermediaries can – and should – add value in competition with direct e-tailing.

Originality/value – Research on loss aversion in retailing has been prolific, but exclusively focused on low-involvement, frequently purchased products, without distinguishing the direct or indirect character of the distribution channel. Less is known about other types of products, such as high-involvement, non-frequently purchased hedonic products. This article focuses on the latter and analyzes different patterns of loss aversion in direct and indirect channels.
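
As background for the loss-aversion finding, a minimal sketch of how a reference-dependent price term enters a utility specification in the prospect-theory tradition; the symbols are illustrative, not the article's exact specification:

```latex
% Reference-dependent price term: p is the observed price, p^r the
% reference price, beta_g and beta_l the gain and loss sensitivities.
% All symbols are illustrative placeholders.
\[
  v(p \mid p^{r}) =
  \begin{cases}
    \beta_{g}\,(p^{r} - p), & p \le p^{r} \quad \text{(gain)} \\
    \beta_{l}\,(p^{r} - p), & p > p^{r} \quad \text{(loss)}
  \end{cases}
  \qquad
  \lambda = \beta_{l}/\beta_{g} > 1 \;\Rightarrow\; \text{loss aversion.}
\]
```

On this reading, the article's result amounts to the loss-aversion ratio being closer to 1 when consumers book through an intermediary than when they deal with each provider directly.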

Relevance:

40.00%

Publisher:

Abstract:

The visuo-spatial abilities of individuals with Williams syndrome (WS) have consistently been shown to be generally weak. These poor visuo-spatial abilities have been ascribed to a local processing bias by some [R. Rossen, E.S. Klima, U. Bellugi, A. Bihrle, W. Jones, Interaction between language and cognition: evidence from Williams syndrome, in: J. Beitchman, N. Cohen, M. Konstantareas, R. Tannock (Eds.), Language, Learning and Behaviour disorders: Developmental, Behavioural and Clinical Perspectives, Cambridge University Press, New York, 1996, pp. 367-392] and, conversely, to a global processing bias by others [Psychol. Sci. 10 (1999) 453]. In this study, two identification versions and one drawing version of the Navon hierarchical processing task, a non-verbal task, were employed to investigate this apparent contradiction. The two identification tasks were administered to 21 individuals with WS, 21 typically developing individuals matched by non-verbal ability, and 21 adult participants matched to the WS group by mean chronological age (CA). The third task, a drawing version, was administered to the WS group and the typically developing (TD) controls only. It was hypothesised that the WS group would show differential processing biases depending on the type of processing the task was measuring. Results from the two identification versions of the Navon task, measuring divided and selective attention, showed that the WS group experienced equal interference from global to local as from local to global levels, and did not show an advantage of one level over another. This pattern of performance was broadly comparable to that of the control groups. The third task, the drawing version of the Navon task, revealed that individuals with WS were significantly better at drawing the local form than the global figure, whereas the typically developing control group did not show a bias towards either level. In summary, this study demonstrates that individuals with WS do not have a local or a global processing bias when asked to identify stimuli, but do show a local bias in their drawing abilities. This contrast may explain the apparently contrasting findings from previous studies. (C) 2002 Elsevier Science Ltd. All rights reserved.

Relevance:

40.00%

Publisher:

Abstract:

In this thesis, we evaluate consumer purchase behaviour from the perspective of heuristic decision making. Heuristic decision processes are quick and easy mental shortcuts adopted by individuals to reduce the time spent on decision making. In particular, we examine those heuristics caused by framing, as described by prospect theory and mental accounting, within price-related decision scenarios. The impact of price framing on consumer behaviour has been studied under the broad umbrella of reference price, which suggests that decision makers use reference points as standards of comparison when making a purchase decision. We investigate four reference points - a retailer's past prices, a competitor's current prices, a competitor's past prices, and consumers' expectation of immediate future price changes - to further our understanding of the impact of price framing on mental accounting and, in turn, contribute to the growing body of reference-price literature in marketing research. We carry out experiments in which levels of price frame and monetary outcome are manipulated in a repeated-measures analysis of variance (ANOVA). Our results show that where these reference points are clearly specified in decision problems, price framing significantly affects consumers' perceptions of monetary gains derived through discounts and leads to reversals in consumer preferences. We also found that monetary losses were not sensitive to price-frame manipulations.

Relevance:

30.00%

Publisher:

Abstract:

This paper studies life-cycle preferences over consumption and health status. We show that cost-effectiveness analysis is consistent with cost-benefit analysis if the lifetime utility function is additive over time, multiplicative in the utility of consumption and the utility of health status, and if the utility of consumption is constant over time. We derive the conditions under which the lifetime utility function takes this form, both under expected utility theory and under rank-dependent utility theory, which is currently the most important nonexpected utility theory. If cost-effectiveness analysis is consistent with cost-benefit analysis, it is possible to derive tractable expressions for the willingness to pay for quality-adjusted life-years (QALYs). The willingness to pay for QALYs depends on wealth, remaining life expectancy, health status, and the possibilities for intertemporal substitution of consumption. (C) 1999 Elsevier Science B.V. All rights reserved.
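
The three conditions named above can be summarised in a single functional form; a sketch in generic notation, not the paper's own symbols:

```latex
% Lifetime utility in the form under which cost-effectiveness and
% cost-benefit analysis coincide: additive over periods t, multiplicative
% in consumption utility u and health utility v, with the consumption
% level c constant over time (h_t is health status in period t).
\[
  U = \sum_{t=0}^{T} \alpha_{t}\, u(c)\, v(h_{t}).
\]
```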

Relevance:

30.00%

Publisher:

Abstract:

Background: A high smoking prevalence has been registered among alcoholics, and it has been pointed out that alcoholic smokers may have a more severe course and greater severity of alcoholism. This study aims to compare smoking and non-smoking alcoholics in terms of treatment outcomes and to verify the efficacy of topiramate and naltrexone in decreasing cigarette use among alcoholic smokers. Methods: The investigation was a double-blind, placebo-controlled, 12-week study carried out at the University of Sao Paulo, Brazil. The sample comprised 155 male alcohol-dependent outpatients (52 non-smokers and 103 smokers), 18-60 years of age, with an International Classification of Diseases (ICD-10) diagnosis of alcohol dependence. After a 1-week detoxification period, the patients randomly received placebo, naltrexone (50 mg/day) or topiramate (up to 300 mg/day). Only the alcoholic smokers who adhered to the treatment were evaluated with reference to smoking reduction. Results: Cox regression analysis revealed that smoking status among alcoholics increased the odds of relapse into drinking by 65%, independently of the medications prescribed, using the intention-to-treat method. Topiramate was effective in reducing the number of cigarettes smoked when compared to placebo among adherent patients (mean difference = 7.91, p < 0.01). There were no significant differences between the naltrexone group and the placebo group. Conclusions: The results of this study confirm that treatment is more challenging for smoking alcoholics than for non-smoking ones, and support the efficacy of topiramate for smoking reduction among male alcoholic smokers who adhered to the treatment. (C) 2009 Elsevier Ireland Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Large (>1600 µm), ingestively masticated particles of bermuda grass (Cynodon dactylon L. Pers.) leaf and stem, labelled with Yb-169 and Ce-144 respectively, were inserted into the rumen digesta raft of heifers grazing bermuda grass. The concentration of markers in digesta sampled from the raft and ventral rumen was monitored at regular intervals over approximately 144 h. The data from the two sampling sites were simultaneously fitted to two-pool (raft and ventral rumen-reticulum) models with either reversible or sequential flow between the two pools. The sequential flow model fitted the data as well as the reversible flow model, but the reversible flow model was used because of its greater applicability. The reversible flow model, hereafter called the raft model, had the following features: a relatively slow age-dependent transfer rate from the raft (means for a gamma-2 distributed rate parameter: leaf 0.0740 vs. stem 0.0478 h⁻¹), a very slow first-order reversible flow from the ventral rumen back to the raft (mean for leaf and stem 0.010 h⁻¹), and a very rapid first-order exit from the ventral rumen (mean for leaf and stem 0.44 h⁻¹). The raft was calculated to hold approximately 0.82 of the total rumen DM in the raft and ventral rumen pools. Fitting a sequential two-pool model or a single exponential model individually to values from each of the two sampling sites yielded similar parameter values for both sites, and faster rate parameters for leaf than for stem, in agreement with the raft model. These results were interpreted as indicating that the raft forms a large, relatively inert pool within the rumen. Particles generated within the raft have difficulty escaping, but once in the ventral rumen pool they escape quickly, with a low probability of return to the raft. It was concluded that the raft model gave a good interpretation of the data and emphasized escape from, and movement within, the raft as important components of the residence time of leaf and stem particles within the rumen digesta of cattle.
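
To make the raft model's structure concrete, below is a small simulation sketch, ours rather than the authors' fitting code, that approximates the gamma-2 age-dependent escape with two first-order stages in series (the usual phase-type trick) and uses the mean leaf rates quoted above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy simulation of the two-pool raft model described above.
# Rate values are the leaf means quoted in the abstract.
K_RAFT = 0.0740   # h^-1, gamma-2 stage rate, raft -> ventral rumen (leaf)
K_BACK = 0.010    # h^-1, ventral rumen -> raft (reversible flow)
K_EXIT = 0.44     # h^-1, escape from the ventral rumen

def rhs(t, y):
    r1, r2, v = y                      # raft stage 1, raft stage 2, ventral rumen
    dr1 = -K_RAFT * r1 + K_BACK * v    # returning marker re-enters stage 1
    dr2 = K_RAFT * r1 - K_RAFT * r2
    dv = K_RAFT * r2 - (K_BACK + K_EXIT) * v
    return [dr1, dr2, dv]

# Marker dosed into the raft at t = 0; monitored for 144 h as in the study.
sol = solve_ivp(rhs, (0.0, 144.0), [1.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 144.0, 145))
raft = sol.y[0] + sol.y[1]
print(f"raft marker remaining at 48 h: {raft[48]:.3f}")
```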

Relevance:

30.00%

Publisher:

Abstract:

Respiration is altered during different stages of the sleep-wake cycle. We review the contribution of cholinergic systems to this alteration, with particular reference to the role of muscarinic acetylcholine receptors (MAchRs) during rapid eye movement (REM) sleep. Available evidence demonstrates that MAchRs have potent excitatory effects on medullary respiratory neurones and respiratory motoneurones, and are likely to contribute to changes in central chemosensitive drive to the respiratory control system. These effects are likely to be most prominent during REM sleep, when cholinergic brainstem neurones show peak activity levels. It is possible that MAchR dysfunction is involved in sleep-disordered breathing, such as obstructive sleep apnea. (C) 2002 Elsevier Science B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
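
To make the notion of equilibrium expected rates concrete, here is a toy sketch (our illustration, not the paper's construction) that computes the equilibrium distribution of a small continuous-time Markov chain and forms the equilibrium expected rate of each transition:

```python
import numpy as np

# Toy 3-state CTMC. Q is the generator: off-diagonal entries are
# transition rates, rows sum to zero. Values are arbitrary.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

# Equilibrium distribution pi solves pi Q = 0 with pi summing to 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Equilibrium expected rate of the (i -> j) transition: pi_i * q_ij.
# Quantities of this kind are the building blocks for a replacement
# transition structure that is easier to analyse.
expected_rates = pi[:, None] * Q
np.fill_diagonal(expected_rates, 0.0)
print("pi =", np.round(pi, 4))
print("expected rates =\n", np.round(expected_rates, 4))
```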

Relevance:

30.00%

Publisher:

Abstract:

Due to the growing complexity and dynamism of many embedded application domains (including consumer electronics, robotics, automotive and telecommunications), it is increasingly difficult to react to load variations and adapt the system's performance in a controlled fashion within a useful and bounded time. This is particularly noticeable when intending to benefit from the full potential of an open, distributed, cooperating environment, where service characteristics are not known beforehand and tasks may exhibit unrestricted QoS inter-dependencies. This paper proposes a novel anytime adaptive QoS control policy in which the online search for the best set of QoS levels is combined with each user's personal preferences on their services' adaptation behaviour. Extensive simulations demonstrate that the proposed anytime algorithms are able to quickly find a good initial solution and to effectively optimise the rate at which the quality of the current solution improves as the algorithms are given more time to run, with minimal overhead compared with their traditional versions.
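
The following minimal sketch illustrates what "anytime" means in this setting: keep a feasible assignment of QoS levels at all times and improve it while the deadline allows. The services, preference weights, and budget below are hypothetical, and the paper's algorithms and preference model are considerably richer:

```python
import random
import time

# Illustrative anytime search over discrete QoS levels (not the
# paper's algorithm). Each service runs at an integer level; user
# preference weights shape the utility; a shared budget caps cost.
LEVELS = {"video": 3, "audio": 2, "sensing": 2}   # max level per service
PREFS  = {"video": 1.0, "audio": 0.6, "sensing": 0.3}
COST_PER_LEVEL = {"video": 4.0, "audio": 2.0, "sensing": 1.0}
BUDGET = 14.0

def cost(a):    return sum(COST_PER_LEVEL[s] * l for s, l in a.items())
def utility(a): return sum(PREFS[s] * l for s, l in a.items())

def anytime_qos(deadline_s=0.01):
    best = {s: 0 for s in LEVELS}                 # feasible initial solution
    t0 = time.monotonic()
    while time.monotonic() - t0 < deadline_s:     # interruptible at any time
        cand = dict(best)
        s = random.choice(list(LEVELS))
        cand[s] = min(LEVELS[s], cand[s] + 1)     # try raising one service
        if cost(cand) <= BUDGET and utility(cand) > utility(best):
            best = cand                           # quality only ever improves
    return best

print(anytime_qos())
```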

Relevance:

30.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10].

Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, source densities and noise covariance are estimated from the observed data by maximum likelihood; second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] also find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of most pure pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
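
As a concrete instance of the linear mixing model the chapter builds on, here is a minimal fully constrained least-squares unmixing sketch; the endmember matrix is random purely for illustration, and this is not the chapter's Dirichlet/EM method:

```python
import numpy as np
from scipy.optimize import nnls

# Linear mixing model: observed pixel y = M a + noise, with abundances
# a >= 0 summing to 1. Fully constrained LS via the classic trick of
# appending a heavily weighted sum-to-one row to an NNLS problem.
rng = np.random.default_rng(0)
bands, p = 50, 3
M = rng.uniform(0.0, 1.0, size=(bands, p))       # assumed endmember signatures
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.01 * rng.standard_normal(bands)

delta = 100.0                                    # weight on the sum-to-one row
M_aug = np.vstack([M, delta * np.ones((1, p))])
y_aug = np.concatenate([y, [delta]])
a_hat, _ = nnls(M_aug, y_aug)                    # nonnegativity enforced by NNLS

print("true     :", a_true)
print("estimated:", np.round(a_hat, 3))
```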

Relevance:

30.00%

Publisher:

Abstract:

This letter presents a comparison between three Fourier-based motion compensation (MoCo) algorithms for airborne synthetic aperture radar (SAR) systems. These algorithms circumvent the limitations of conventional MoCo, namely the assumption of a reference height and the beam-center approximation. All these approaches rely on the inherent time–frequency relation in SAR systems but exploit it differently, with consequent differences in accuracy and computational burden. After a brief overview of the three approaches, the performance of each algorithm is analyzed with respect to azimuthal topography accommodation, angle accommodation, and the maximum frequency of track deviations with which the algorithm can cope. Also, an analysis of the computational complexity is presented. Quantitative results are shown using real data acquired by the Experimental SAR system of the German Aerospace Center (DLR).

Relevance:

30.00%

Publisher:

Abstract:

1. We investigated experimentally predation by the flatworm Dugesia lugubris on the snail Physa acuta in relation to predator body length and to prey morphology [shell length (SL) and aperture width (AW)]. 2. SL and AW correlate strongly in the field, but display significant and independent variance among populations. In the laboratory, predation by Dugesia resulted in large and significant selection differentials on both SL and AW. Analysis of partial effects suggests that selection on AW was indirect, and mediated through its strong correlation with SL. 3. The probability P(ij) for a snail of size category i (SL) to be preyed upon by a flatworm of size category j was fitted with a Poisson probability distribution, the mean of which increased linearly with predator size (j). Despite the low number of parameters, the fit was excellent (r² = 0.96). We offer brief biological interpretations of this relationship with reference to optimal foraging theory. 4. The largest size class of Dugesia (>2 cm) did not prey on snails larger than 7 mm shell length. This size threshold might offer Physa a refuge against flatworm predation and thereby allow coexistence in the field. 5. Our results are further discussed with respect to previous field and laboratory observations on P. acuta life-history patterns, in particular its phenotypic variance in adult body size.
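
A small sketch of the kind of fit described in point 3, with simulated counts rather than the study's data: the probability that a prey item falls in size class i under a predator of size class j is Poisson, with mean rising linearly in j.

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize

# Illustrative maximum-likelihood fit of P_ij = Poisson pmf(i; mu_j)
# with mu_j = a + b * j. Counts are simulated, not the study's data.
rng = np.random.default_rng(1)
sizes_j = np.arange(1, 5)                       # predator size classes
a_true, b_true = 0.5, 1.2
counts = {j: rng.poisson(a_true + b_true * j, size=200) for j in sizes_j}

def neg_log_lik(params):
    a, b = params
    ll = 0.0
    for j, obs in counts.items():
        mu = a + b * j
        if mu <= 0:                             # keep the mean valid
            return np.inf
        ll += poisson.logpmf(obs, mu).sum()
    return -ll

fit = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
print("fitted a, b:", np.round(fit.x, 3))
```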