959 results for "Defeasible conditional"


Relevance: 10.00%

Abstract:

Published as an article in: Studies in Nonlinear Dynamics & Econometrics, 2004, vol. 8, issue 1, pages 5.

Relevance: 10.00%

Abstract:

This paper considers a time-varying parameter extension of the Ruge-Murcia (2003, 2004) model to explore whether some of the variation in parameter estimates seen in the literature could arise from this source. A time-varying value for the unemployment volatility parameter can be motivated in several ways, including variation in the slope of the Phillips curve or variation in the preferences of the monetary authority. We show that allowing time variation in the coefficient on the unemployment volatility parameter improves the model fit and helps provide an explanation of inflation bias, based on asymmetric central banker preferences, that is consistent across subsamples.

Relevance: 10.00%

Abstract:

The paper investigates whether the growing GDP share of the services sector can help explain the great moderation in the US. We identify and analyze three oil price shocks and use an SVAR analysis to measure their economic impact on the US economy at both the aggregate and the sectoral level. We find mixed support for the explanation of the great moderation in terms of shrinking oil shock volatilities, and observe that increases (decreases) in oil shock volatilities are offset by a weakening (strengthening) of their transmission mechanism. Across sectors, services are the least affected by any oil shock. As the contribution of services to GDP volatility increases over time, we conclude that a composition effect helped moderate the conditional volatility of US GDP with respect to oil shocks.

Relevance: 10.00%

Abstract:

Unlike previous studies of tag-based cooperation, we assume that individuals fail to recognize their own tag. Because of this incomplete information, the action taken against an opponent cannot be based on similarity, although it is still conditioned on the tag the opponent displays. We present stability conditions for the case in which individuals play unconditional cooperation, unconditional defection, or conditional cooperation. We then consider the removal of one or two strategies. Results show that conditional cooperators are the agents most resilient to extinction, and that the removal of unconditional cooperators may lead to the extinction of unconditional defectors.

Relevance: 10.00%

Abstract:

Using a model of an optimizing monetary authority whose preferences weigh inflation and unemployment, Ruge-Murcia (2003, 2004) finds empirical evidence that the authority has asymmetric preferences for unemployment. We extend this model to weigh inflation and output and show that the empirical evidence using these series also supports an asymmetric preference hypothesis, except that in our case preferences are asymmetric over output. We also find evidence that the monetary authority targets potential output rather than some higher output level, as would be the case in an extended Barro and Gordon (1983) model.
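The asymmetry at the heart of this hypothesis is commonly formalized in this literature, including Ruge-Murcia's papers, with a linex loss function. The sketch below is a generic illustration of that shape, not a reproduction of the paper's estimated model; the parameter value g = 1 is purely illustrative.

```python
import math

# Linex loss: L(y) = (exp(g*y) - g*y - 1) / g**2.
# For g > 0, positive deviations y are penalized more heavily than
# negative deviations of equal size; as g -> 0 the loss approaches
# the symmetric quadratic y**2 / 2.

def linex(y, g):
    return (math.exp(g * y) - g * y - 1.0) / (g * g)

hi = linex(1.0, 1.0)    # loss on a positive unit deviation
lo = linex(-1.0, 1.0)   # smaller loss on a negative unit deviation
```

The asymmetry parameter g is what the empirical literature estimates; a significantly nonzero g is read as evidence of asymmetric preferences.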

Relevance: 10.00%

Abstract:

This paper models the mean and volatility spillovers of prices within the integrated Iberian market (MIBEL) and the interconnected Spanish and French electricity markets. Using the constant (CCC) and dynamic (DCC) conditional correlation bivariate models with three different specifications of the univariate variance processes, we study the extent to which increasing interconnection and regulatory harmonization have favoured price convergence. The data consist of daily prices calculated as the arithmetic mean of the hourly prices over the span from July 1st 2007 to February 29th 2012. A DCC model in which the univariate variances follow a VARMA(1,1) specification fits the data best for the integrated MIBEL, whereas a CCC model with GARCH(1,1) univariate variances is selected for the Spanish and French price series. Results show significant mean and volatility spillovers in the MIBEL, indicating strong interdependence between the two markets, while the evidence of integration between the Spanish and French markets is weaker. We provide new evidence that the EU target of achieving a single electricity market largely depends on increasing trade between countries and homogeneous rules of market functioning.
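As background for the variance specifications mentioned above, the univariate GARCH(1,1) recursion, the simplest building block underneath CCC/DCC models, can be sketched as follows. Parameter values and returns are illustrative, not estimates from these electricity price data.

```python
# GARCH(1,1) conditional variance filter:
#   h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}

def garch11_variances(returns, omega, alpha, beta):
    """Recursively compute the conditional variance series h_t."""
    # Initialize at the unconditional variance omega / (1 - alpha - beta).
    h = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h

daily_returns = [0.5, -1.2, 0.3, 2.0, -0.7]   # illustrative data
h = garch11_variances(daily_returns, omega=0.1, alpha=0.1, beta=0.8)
```

A CCC or DCC model then couples two such univariate filters through a constant or time-varying correlation matrix.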

Relevance: 10.00%

Abstract:

Background: Type-1 cannabinoid receptors (CB1R) are enriched in the hypothalamus, particularly in the ventromedial hypothalamic nucleus (VMH), which participates in homeostatic and behavioral functions including food intake. Although CB1R activation modulates excitatory and inhibitory synaptic transmission in the brain, the contribution of CB1R to the molecular architecture of the excitatory and inhibitory synaptic terminals in the VMH is not known. Therefore, the aim of this study was to investigate the precise subcellular distribution of CB1R in the VMH to better understand the modulation exerted by the endocannabinoid system on the complex brain circuitries converging into this nucleus. Methodology/Principal Findings: Light and electron microscopy techniques were used to analyze CB1R distribution in the VMH of CB1R-WT, CB1R-KO and conditional mutant mice bearing a selective deletion of CB1R in cortical glutamatergic (Glu-CB1R-KO) or GABAergic neurons (GABA-CB1R-KO). Under light microscopy, CB1R immunolabeling was observed in the VMH of CB1R-WT and Glu-CB1R-KO animals and was markedly reduced in GABA-CB1R-KO mice. Under the electron microscope, CB1R appeared on membranes of both glutamatergic and GABAergic terminals/preterminals. There was no significant difference in the percentage of CB1R-immunopositive profiles or in CB1R density between terminals making asymmetric and symmetric synapses in CB1R-WT mice. Furthermore, the proportion of CB1R-immunopositive terminals/preterminals observed in CB1R-WT and Glu-CB1R-KO mice was reduced in GABA-CB1R-KO mutants. CB1R density was similar across all animal conditions. Finally, the percentage of CB1R-labeled boutons making asymmetric synapses slightly decreased in Glu-CB1R-KO mutants relative to CB1R-WT mice, indicating that CB1R is distributed in both cortical and subcortical excitatory synaptic terminals.
Conclusions/Significance: Our anatomical results support the idea that the VMH is a relevant candidate hub in the endocannabinoid-mediated modulation of the excitatory and inhibitory neurotransmission of cortical and subcortical pathways regulating hypothalamic functions essential for the individual's survival, such as feeding behavior.

Relevance: 10.00%

Abstract:

On several classes of n-person NTU games that have at least one Shapley NTU value, Aumann characterized this solution by six axioms: Non-emptiness, efficiency, unanimity, scale covariance, conditional additivity, and independence of irrelevant alternatives (IIA). Each of the first five axioms is logically independent of the remaining axioms, and the logical independence of IIA is an open problem. We show that for n = 2 the first five axioms already characterize the Shapley NTU value, provided that the class of games is not further restricted. Moreover, we present an example of a solution that satisfies the first five axioms and violates IIA for two-person NTU games (N, V) with uniformly p-smooth V(N).

Relevance: 10.00%

Abstract:

In this thesis we uncover a new relation which links thermodynamics and information theory. We consider time as a channel and the detailed state of a physical system as a message. As the system evolves with time, ever-present noise ensures that the "message" is corrupted. Thermodynamic free energy measures the approach of the system toward equilibrium; information-theoretic mutual information measures the loss of memory of the initial state. We regard the free energy and the mutual information as operators which map probability distributions over state space to real numbers. In the limit of long times, we show how the free energy operator and the mutual information operator asymptotically attain a very simple relationship to one another. This relationship is founded on the common appearance of entropy in the two operators and on an identity between internal energy and conditional entropy. The use of conditional entropy is what distinguishes our approach from previous efforts to relate thermodynamics and information theory.
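For reference, the two operators can be written out in standard textbook notation; this sketch follows conventional definitions and is not necessarily the thesis's exact formalism.

```latex
% Free energy of the time-t distribution p_t over states x
% (E = energy function, T = temperature):
F[p_t] = \sum_x p_t(x)\,E(x) \;-\; T\,S[p_t],
\qquad S[p_t] = -\sum_x p_t(x)\log p_t(x).

% Mutual information between the initial and current states,
% expressed via the conditional entropy the text singles out:
I(X_0; X_t) = H(X_0) - H(X_0 \mid X_t).
```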

Relevance: 10.00%

Abstract:

We propose an experimentally feasible scheme to generate various types of entangled states of light fields using beam splitters and single-photon detectors. Two beams of light are incident on two beam splitters, each beam being asymmetrically split into two parts, one of which is so weak that it contains at most one photon. We let the two weak output modes interfere at a third beam splitter. A conditional joint measurement on both weak output modes may result in entanglement between the other two output modes. The conditions for maximal entanglement are discussed based on the concurrence. Several specific examples are also examined.
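As a point of reference for the entanglement measure mentioned above: for a pure two-qubit state |psi> = a|00> + b|01> + c|10> + d|11>, the concurrence has the closed form C = 2|ad - bc|. The sketch below illustrates this textbook special case, not the paper's full conditional-measurement calculation.

```python
import math

def concurrence(a, b, c, d):
    """Concurrence C = 2|ad - bc| of a pure two-qubit state,
    normalizing the amplitudes first."""
    norm = math.sqrt(abs(a)**2 + abs(b)**2 + abs(c)**2 + abs(d)**2)
    a, b, c, d = (x / norm for x in (a, b, c, d))
    return 2.0 * abs(a * d - b * c)

# A Bell state (|00> + |11>)/sqrt(2) is maximally entangled (C = 1),
# while a product state |00> carries no entanglement (C = 0):
bell = concurrence(1, 0, 0, 1)
product = concurrence(1, 0, 0, 0)
```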

Relevance: 10.00%

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems, both motivated by power systems, are also explored. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. In this case, assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples is available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
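The claim that the inverse covariance matrix reveals circuit topology can be illustrated with a toy three-node chain; the matrix values below are illustrative, not taken from the dissertation.

```python
# For a chain circuit 1-2-3 (no direct 1-3 link), a grounded-Laplacian-style
# precision matrix K has K[0][2] = 0, while the covariance K^{-1} is dense.
# Inverting the covariance recovers the sparsity pattern, i.e. the topology.

def inv3(m):
    """Inverse of a 3x3 matrix via the adjugate formula."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [
        [e*i - f*h, c*h - b*i, b*f - c*e],
        [f*g - d*i, a*i - c*g, c*d - a*f],
        [d*h - e*g, b*g - a*h, a*e - b*d],
    ]
    return [[x / det for x in row] for row in adj]

K = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]

cov = inv3(K)         # dense: cov[0][2] != 0, hides the topology
K_back = inv3(cov)    # sparse again: K_back[0][2] ~ 0, reveals the missing edge
```

The graphical lasso plays the role of `inv3(cov)` when only a noisy, sample-based estimate of `cov` is available.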

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
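For background, the classical single-link fluid model that this paragraph contrasts with (Kelly's primal algorithm) can be sketched as follows. This is the standard buffering-free model, not the model derived in the thesis, and the price function p(x) = x is an assumption chosen purely for illustration.

```python
# A source adapts its rate x via  dx/dt = kappa * (w - x * p(x)),
# where w is its willingness-to-pay and p(x) the congestion price.
# With p(x) = x the equilibrium satisfies w = x^2, i.e. x* = sqrt(w).

def simulate_primal(w=4.0, kappa=0.5, x0=0.1, dt=0.01, steps=5000):
    """Forward-Euler simulation of the primal rate dynamics."""
    x = x0
    for _ in range(steps):
        price = x                      # illustrative price p(x) = x
        x += dt * kappa * (w - x * price)
        x = max(x, 0.0)                # rates cannot go negative
    return x

x_eq = simulate_primal()   # converges toward sqrt(4.0) = 2.0
```

The thesis's point is that once per-link buffering is modeled, links downstream of a queue no longer see the source rate x directly, and stability of such dynamics can be lost.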

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance: 10.00%

Abstract:

Some of the most exciting developments in the field of nucleic acid engineering include the utilization of synthetic nucleic acid molecular devices as gene regulators, as disease marker detectors, and most recently, as therapeutic agents. The common thread between these technologies is their reliance on the detection of specific nucleic acid input markers to generate some desirable output, such as a change in the copy number of an mRNA (for gene regulation), a change in the emitted light intensity (for some diagnostics), and a change in cell state within an organism (for therapeutics). The research presented in this thesis likewise focuses on engineering molecular tools that detect specific nucleic acid inputs, and respond with useful outputs.

Four contributions to the field of nucleic acid engineering are presented: (1) the construction of a single nucleotide polymorphism (SNP) detector based on the mechanism of hybridization chain reaction (HCR); (2) the utilization of a single-stranded oligonucleotide molecular Scavenger as a means of enhancing HCR selectivity; (3) the implementation of Quenched HCR, a technique that facilitates transduction of a nucleic acid chemical input into an optical (light) output; and (4) the engineering of conditional probes that function as sequence transducers, receiving target signal as input and providing a sequence of choice as output. These programmable molecular systems are conceptually well-suited for performing wash-free, highly selective rapid genotyping and expression profiling in vitro, in situ, and potentially in living cells.

Relevance: 10.00%

Abstract:

In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies, which do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance and no information about the system's probable performance, which would be of interest to civil engineers.

The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
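The weighting-and-integration step described above can be sketched with a discrete set of candidate models standing in for the integral; the probabilities below are illustrative, not from the thesis.

```python
# Robust performance = sum over models of
#   P(failure | model) * P(model),
# and Bayes's Theorem updates P(model) when response data arrives.

def robust_failure_prob(p_fail_given_model, model_probs):
    return sum(pf * pm for pf, pm in zip(p_fail_given_model, model_probs))

def bayes_update(model_probs, likelihoods):
    """Update model probabilities given the data likelihood of each model."""
    post = [pm * l for pm, l in zip(model_probs, likelihoods)]
    z = sum(post)
    return [p / z for p in post]

p_fail = [0.01, 0.05, 0.20]   # failure prob. under each candidate model
priors = [0.5, 0.3, 0.2]
p_robust = robust_failure_prob(p_fail, priors)   # ~ 0.06

# New response data favoring the middle model shifts the weights:
posteriors = bayes_update(priors, likelihoods=[0.1, 0.8, 0.1])
```

Re-running the controller optimization against `posteriors` instead of `priors` is the "updated optimal controller" step the text describes.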

The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with higher-order controllers for the same benchmark system which are based on other approaches. The second application is to the Caltech Flexible Structure, which is a light-weight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.

Relevance: 10.00%

Abstract:

This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often exhibit aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.

In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.

The second chapter characterizes a decision maker with sticky beliefs: that is, a decision maker who does not update enough in response to information, where "enough" means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
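The convex-combination rule described above can be sketched as follows; the two-state example and the stickiness weight lam are illustrative.

```python
# Sticky updating: new belief = lam * prior + (1 - lam) * Bayesian posterior.
# lam = 0 recovers full Bayesian updating; lam = 1 means beliefs never move.

def bayes_posterior(prior, likelihood):
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

def sticky_update(prior, likelihood, lam):
    post = bayes_posterior(prior, likelihood)
    return [lam * p + (1 - lam) * q for p, q in zip(prior, post)]

prior = [0.5, 0.5]
likelihood = [0.9, 0.1]          # signal strongly favoring state 1
full_bayes = sticky_update(prior, likelihood, lam=0.0)   # ~ [0.9, 0.1]
sticky = sticky_update(prior, likelihood, lam=0.5)       # ~ [0.7, 0.3]
```

Comparing `sticky` to `full_bayes` shows how lam, elicited from choice data, measures how far the agent's beliefs lag behind the Bayesian benchmark.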

The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
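The threshold rule can be sketched as follows. Here a prior's "plausibility" is measured by the marginal probability it assigns to the observation, relative to the best prior in the set; that concrete threshold form is an assumption for illustration. Setting alpha = 1 keeps only the likelihood-maximizing priors (maximum likelihood updating), while small alpha retains and updates the whole set in the spirit of generalized Bayesian updating.

```python
# Retain priors whose marginal likelihood of the observation is within a
# factor alpha of the best prior, then apply Bayes' rule to each survivor.

def bayes(prior, likelihood):
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

def update_prior_set(priors, likelihood, alpha):
    marginals = [sum(p * l for p, l in zip(pr, likelihood)) for pr in priors]
    best = max(marginals)
    plausible = [pr for pr, m in zip(priors, marginals) if m >= alpha * best]
    return [bayes(pr, likelihood) for pr in plausible]

priors = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]
likelihood = [1.0, 0.0]        # the observation rules out state 2
kept = update_prior_set(priors, likelihood, alpha=1.0)   # only the best prior
```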