962 results for Imperfect CSIT


Relevance: 10.00%

Publisher:

Abstract:

Final Master's project submitted for the degree of Master in Mechanical Engineering

Relevance: 10.00%

Publisher:

Abstract:

This study evaluated the whole blood immunochromatographic card test (ICT card test) in a survey performed in Northeastern Brazil. A total of 625 people were examined by both the thick blood film (TBF) and the ICT card test; residents of a non-endemic area were also tested by the whole blood card test and the Og4C3 assay. On the reasonable assumption that TBF is 100% specific, the sensitivity of the ICT card test was 94.7% overall, though lower in females than in males. However, since TBF and the other methods have unknown sensitivity, the true specificity of the card test is unknown. It is nevertheless possible to estimate upper and lower limits for the specificity and to relate them to the prevalence of the disease: in the endemic area, the possible range of the specificity was from 72.4% to 100%. In the non-endemic area, 29.6% of the card tests exhibited faint lines that were interpreted as positives. The method's high sensitivity, promptness and simplicity justify its use for filariasis screening, but detailed guidance on the correct interpretation of extremely faint lines is essential. Further studies designed to address the problems arising from imperfect reference standards are necessary, as is a sounder diagnostic definition for the card test.
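The specificity bounds quoted above follow from a simple worst-case/best-case argument: each card-test positive among TBF negatives is either a false positive (worst case) or a true infection that the imperfect TBF standard missed (best case). A minimal sketch of that calculation, using hypothetical counts since the abstract does not report the raw data:

def specificity_bounds(tbf_negative, discordant_positive):
    # Worst case: every card-test positive among TBF negatives is a false
    # positive, so specificity = (concordant negatives) / (all TBF negatives).
    lower = (tbf_negative - discordant_positive) / tbf_negative
    # Best case: every discordant positive is a real infection missed by the
    # imperfect TBF standard, so the card test made no error among them.
    upper = 1.0
    return lower, upper

# Hypothetical counts for illustration (the abstract reports only the bounds).
low, high = specificity_bounds(400, 30)
print(f"specificity between {low:.1%} and {high:.1%}")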

Relevance: 10.00%

Publisher:

Abstract:

In a universe devoid of perfect geometric forms, where irregular surfaces that are difficult to represent and measure proliferate, fractal geometry has proved to be a powerful instrument for treating natural phenomena hitherto considered erratic, unpredictable and random. However, not everything in nature is fractal, which means that Euclidean geometry remains useful and necessary, making the two geometries complementary. This work focuses on the study of fractal geometry and its application to several scientific areas, namely engineering. It covers notions of self-similarity (exact and approximate), shapes, dimension, area, perimeter, volume, complex numbers, similarity of figures, and the sequences and iterations associated with fractal figures. Examples of the application of fractal geometry are presented in several fields of knowledge, such as physics, biology, geology, medicine, architecture, painting, electrical engineering and financial markets, among others. It is concluded that fractals are an important tool for understanding phenomena in the most diverse areas of science. The study of this new geometry owes its overwhelming importance to its deep relationship with nature and to the advanced technological development of computers.
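The notions of iteration and fractal dimension mentioned in the abstract can be made concrete with a standard example not taken from the thesis itself: each refinement of the Koch curve replaces every segment with four segments one third as long, so the perimeter grows without bound while the similarity dimension is log 4 / log 3 ≈ 1.26. A short Python sketch:

import math

def koch_perimeter(iterations, initial_length=1.0):
    # Each iteration replaces every segment with 4 segments of 1/3 the
    # length, multiplying the total length by 4/3.
    return initial_length * (4.0 / 3.0) ** iterations

# Similarity dimension: N = 4 self-similar copies at scale 1/3.
koch_dimension = math.log(4) / math.log(3)

for n in range(5):
    print(f"iteration {n}: perimeter = {koch_perimeter(n):.4f}")
print(f"similarity dimension = {koch_dimension:.4f}")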

Relevance: 10.00%

Publisher:

Abstract:

Manipulator systems are rather complex and highly nonlinear, which makes their analysis and control difficult. Classical system theory is well known; however, it is inadequate in the presence of strong nonlinear dynamics. Nonlinear controllers produce good results [1], and work has been done, e.g., relating the manipulator nonlinear dynamics with frequency response [2–5]. Nevertheless, given the complexity of the problem, systematic methods that permit conclusions to be drawn about stability, imperfect-modelling effects, compensation requirements, etc. are still lacking. In section 2 we start by analysing the variation of the poles and zeros of the descriptive transfer functions of a robot manipulator, in order to motivate the development of more robust (and computationally efficient) control algorithms. Based on this analysis, a new multirate controller, which is an improvement of the well-known "computed torque controller" [6], is introduced in section 3. Some research in this area was done by Neuman [7,8], showing that better robustness is possible if the basic controller structure is modified. The present study stems from those ideas and attempts to give a systematic treatment, resulting in easy-to-use standard engineering tools. Finally, conclusions are presented in section 4.
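For context, the "computed torque controller" that the abstract takes as its baseline cancels the modelled manipulator dynamics and imposes linear error dynamics. A minimal sketch of the classical law (the model callables M, C, g and the gain matrices Kp, Kv are illustrative placeholders, not taken from the paper):

import numpy as np

def computed_torque(q, qd, q_ref, qd_ref, qdd_ref, M, C, g, Kp, Kv):
    # Position and velocity tracking errors.
    e, ed = q_ref - q, qd_ref - qd
    # Auxiliary acceleration imposing linear error dynamics e'' + Kv e' + Kp e = 0.
    v = qdd_ref + Kv @ ed + Kp @ e
    # Model-based cancellation of inertia, Coriolis and gravity terms.
    return M(q) @ v + C(q, qd) @ qd + g(q)

With imperfect modelling the cancellation is only approximate, which is precisely the robustness concern that motivates the multirate modification discussed in the abstract.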

Relevance: 10.00%

Publisher:

Abstract:

Doctoral thesis in Polymer and Composite Science and Engineering

Relevance: 10.00%

Publisher:

Abstract:

Master's dissertation in Administrative Law

Relevance: 10.00%

Publisher:

Abstract:

Integrated Master's dissertation in Information Systems Engineering and Management

Relevance: 10.00%

Publisher:

Abstract:

I analyze the implications of bundling for price competition in a market for complementary products. Using a model of imperfect competition with product differentiation, I identify the incentives to bundle for two types of demand functions and study how they change with the size of the bundle. With an inelastic demand, bundling creates an advantage over uncoordinated rivals, who cannot improve their position by bundling. I show that this no longer holds with an elastic demand: the incentives to bundle are stronger, and the market outcome is symmetric bundling, the most competitive one. Profits are lowest and consumer surplus is maximized.

Relevance: 10.00%

Publisher:

Abstract:

Expectations about the future are central to the determination of current macroeconomic outcomes and to the formulation of monetary policy. Recent literature has explored ways of supplementing the benchmark of rational expectations with explicit models of expectations formation that rely on econometric learning. Some apparently natural policy rules turn out to imply expectational instability of private agents' learning. We use the standard New Keynesian model to illustrate this problem and survey the key results about interest-rate rules that deliver both uniqueness and stability of equilibrium under econometric learning. We then consider some practical concerns such as measurement errors in private expectations, observability of variables, and learning of the structural parameters required for policy. We also discuss some recent applications, including policy design under perpetual learning, estimated models with learning, recurrent hyperinflations, and macroeconomic policy to combat liquidity traps and deflation.
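As background to "econometric learning", here is a generic textbook-style illustration, not the survey's specific model: agents replace rational expectations with a forecasting rule whose coefficients they re-estimate each period, e.g. by constant-gain recursive least squares (the gain value and variable names below are assumptions).

import numpy as np

def rls_update(phi, R, x, y, gain=0.02):
    # phi: coefficient estimates of the agents' forecast rule y ~ phi'x
    # R:   estimate of the regressors' second-moment matrix
    # Update the second-moment matrix of the regressors.
    R = R + gain * (np.outer(x, x) - R)
    # Move the coefficients in the direction that corrects the forecast error.
    phi = phi + gain * np.linalg.solve(R, x) * (y - phi @ x)
    return phi, R

Expectational stability then turns on whether repeating this update, with outcomes generated by the model under a given interest-rate rule, drives phi toward the rational expectations equilibrium.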

Relevance: 10.00%

Publisher:

Abstract:

In this paper we examine the importance of imperfect competition in product and labour markets in determining the long-run welfare effects of tax reforms, assuming agent heterogeneity in capital holdings. Each of these market failures, independently, results in welfare losses for at least a segment of the population after a capital tax cut and a concurrent labour tax increase. However, when combined in a realistic calibration to the UK economy, they imply that a capital tax cut will be Pareto improving in the long run. Consistent with the theory of second best, the two distortions in this context work to correct the negative distributional effects of a capital tax cut that each one, on its own, creates.

Relevance: 10.00%

Publisher:

Abstract:

This paper presents a DSGE model in which long run inflation risk matters for social welfare. Aggregate and welfare effects of long run inflation risk are assessed under two monetary regimes: inflation targeting (IT) and price-level targeting (PT). These effects differ because IT implies base-level drift in the price level, while PT makes the price level stationary around a target price path. Under IT, the welfare cost of long run inflation risk is equal to 0.35 percent of aggregate consumption. Under PT, where long run inflation risk is largely eliminated, it falls to only 0.01 percent. There are welfare gains from PT because it raises average consumption for the young and lowers consumption risk substantially for the old. These results are strongly robust to changes in the PT target horizon and fairly robust to imperfect credibility, fiscal policy, and model calibration. While the distributional effects of an unexpected transition to PT are sizeable, they are short-lived and not welfare-reducing.
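The key mechanism, base-level drift versus trend stationarity of the price level, can be illustrated outside the model. A stylized sketch (the shock size, horizon and one-period correction are arbitrary assumptions, not the paper's calibration): under IT, bygone inflation shocks are never undone, so price-level uncertainty grows with the horizon, while under PT deviations from the target price path stay bounded.

import numpy as np

rng = np.random.default_rng(0)
T, sims, sigma = 120, 10_000, 0.01   # horizon, simulations, shock s.d.

eps = rng.normal(0.0, sigma, (sims, T))

# IT: base-level drift -- the (log) price level is a random walk around trend.
p_it = eps.cumsum(axis=1)

# PT: shocks to the price level are corrected back to the target path
# (stylized here as full correction after one period).
p_pt = eps

print("price-level s.d. at horizon T:",
      p_it[:, -1].std(), "(IT) vs", p_pt[:, -1].std(), "(PT)")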

Relevance: 10.00%

Publisher:

Abstract:

This paper presents a DSGE model in which long run inflation risk matters for social welfare. Optimal indexation of long-term government debt is studied under two monetary policy regimes: inflation targeting (IT) and price-level targeting (PT). Under IT, full indexation is optimal because long run inflation risk is substantial due to base-level drift, making indexed bonds a much better store of value than nominal bonds. Under PT, where long run inflation risk is largely eliminated, optimal indexation is substantially lower because nominal bonds become a better store of value relative to indexed bonds. These results are robust to changes in the PT target horizon, imperfect credibility of PT, and model calibration, but the assumption that indexation is lagged is crucial. From a policy perspective, a key finding is that accounting for optimal indexation has important welfare implications for comparisons of IT and PT.

Relevance: 10.00%

Publisher:

Abstract:

We introduce attention games. Alternatives ranked by quality (producers, politicians, sexual partners...) desire to be chosen and compete for the imperfect attention of a chooser by investing in their own salience. We prove that if alternatives can control the attention they get, then "the showiest is the best": the equilibrium ordering of salience (weakly) reproduces the quality ranking, and the best alternative is the one that gets picked most often. This result also holds under more general conditions. However, if those conditions fail, then even the worst alternative can be picked most often.

Relevance: 10.00%

Publisher:

Abstract:

In this paper we analyze the persistence of aggregate real exchange rates (RERs) for a group of EU-15 countries by using sectoral data. The tight relation between aggregate and sectoral persistence recently investigated by Mayoral (2008) allows us to decompose aggregate RER persistence into the persistence of its different subcomponents. We show that the distribution of sectoral persistence is highly heterogeneous and very skewed to the right, and that a limited number of sectors are responsible for the high levels of persistence observed at the aggregate level. We use quantile regression to investigate whether the traditional theories proposed to account for the slow reversion to parity (lack of arbitrage due to nontradabilities, or imperfect competition and price stickiness) are able to explain the behavior of the upper quantiles of sectoral persistence. We conclude that pricing to market in the intermediate goods sector, together with price stickiness, has more explanatory power than variables related to the tradability of the goods or their inputs.
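As an illustration of the quantile-regression step, here is a generic sketch using statsmodels' QuantReg; the data and variable names are hypothetical placeholders, not the paper's dataset.

import numpy as np
import statsmodels.api as sm

# Hypothetical sectoral data: persistence estimates and two candidate regressors.
rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2))                        # e.g. stickiness, tradability
y = 0.5 + 0.3 * X[:, 0] + rng.gamma(2.0, 0.1, n)   # right-skewed persistence

# Fit the 0.9 quantile: which variables explain the *most persistent* sectors?
res = sm.QuantReg(y, sm.add_constant(X)).fit(q=0.9)
print(res.params)

Fitting high quantiles rather than the mean is what lets the paper ask whether the candidate theories explain the few sectors that drive aggregate persistence.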

Relevance: 10.00%

Publisher:

Abstract:

Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, strategic situations in which the players choose only once and simultaneously, and dynamic games, strategic situations involving sequential choices. Dynamic games can be further classified according to perfect and imperfect information: a dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices that have been made so far, whereas in the case of imperfect information some players are not fully informed about some choices.

Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in more detail as a tree. It is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games, in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability.

Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character. This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players: before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states, on which players base their decisions, are explicitly expressible in an epistemic framework.

In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated; Aumann's sufficient conditions for backward induction are also presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced, a type-based epistemic model is extended with a notion of agent connectedness, and sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated; in particular, the epistemic-topological operator limit knowledge is defined and some implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened, in the sense that possible contexts are provided in which agents can indeed agree to disagree.
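As a minimal illustration of the backward induction the abstract repeatedly refers to (a generic sketch of the standard procedure, not the thesis's formal framework): in a finite dynamic game with perfect information, each player's rational choice is computed bottom-up from the leaves of the game tree.

def backward_induction(node):
    # A node is either a tuple of payoffs (one per player) at a leaf, or a
    # pair (player, children): the index of the player to move and a list
    # of subtrees. Returns the payoff vector reached under backward induction.
    if isinstance(node, tuple):          # leaf: payoffs are given
        return node
    player, children = node
    outcomes = [backward_induction(child) for child in children]
    return max(outcomes, key=lambda payoffs: payoffs[player])  # mover picks best

# A two-stage game: player 0 moves first, then player 1.
game = (0, [(1, [(3, 1), (0, 0)]),
            (1, [(2, 2), (1, 3)])])
print(backward_induction(game))          # -> (3, 1)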