965 results for Eventual consistency
Abstract:
Drug resistance associated with the treatment of human schistosomiasis appears to be an emerging problem requiring more attention from the scientific community than the subject currently receives. Drug-resistant strains of Schistosoma mansoni have been isolated by various investigators as a result of laboratory experimentation or from a combination of field and laboratory studies. Review of these data indicates that the lack of susceptibility observed in some of the isolated strains cannot be ascribed solely to previous administration of antischistosome drugs, and further studies are therefore required to elucidate this phenomenon. Strains of S. mansoni have now been identified from Brazil which are resistant to oxamniquine, hycanthone and niridazole; from Puerto Rico which are resistant to hycanthone and oxamniquine; and from Kenya which are resistant to niridazole and probably oxamniquine. Strains derived by in vitro selection and resistant to oxamniquine, and possibly to oltipraz, are also available. All of these strains are currently maintained in the laboratory in snails and mice, thus providing for the first time an opportunity for in-depth comparative studies. Preliminary data indicate that S. haematobium strains resistant to metrifonate may be occurring in Kenya. This problem could pose great difficulty in the eventual development of antischistosomal agents. Biomphalaria glabrata from Puerto Rico and Brazil were found to be susceptible to drug-resistant S. mansoni from each country.
Abstract:
In the 1992 Barcelona Olympic Games, besides the typical country-versus-country showdown that forms the backdrop of the Olympics, and the economic and political impulses for the host city, there was a third factor complicating the celebration: rivalry between the Catalan hosts and the Spanish state. By guiding the development and eventual Olympic projection of national identity, Catalan and Spanish politicians each hoped to create a resounding rallying point around which they could unite disparate individuals. The text consists of an introduction, five paragraphs on the different political perspectives, conclusions, a commentary on the tallying of scores, and a note section with bibliographical references.
Abstract:
Generalized multiresolution analyses are increasing sequences of subspaces of a Hilbert space H that fail to be multiresolution analyses in the sense of wavelet theory because the core subspace does not have an orthonormal basis generated by a fixed scaling function. Previous authors have studied a multiplicity function m which, loosely speaking, measures the failure of the GMRA to be an MRA. When the Hilbert space H is L2(Rn), the possible multiplicity functions have been characterized by Baggett and Merrill. Here we start with a function m satisfying a consistency condition which is known to be necessary, and build a GMRA in an abstract Hilbert space with multiplicity function m.
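For the classical case of dilation by 2 in L2(R), the necessary consistency condition on the multiplicity function m mentioned above is usually written as the pointwise inequality below. This is a sketch of the standard form; the notation is ours, not quoted from the paper.

```latex
% Consistency inequality for the multiplicity function m
% (dilation by 2 in L^2(\mathbb{R}), for a.e. \omega):
m(\omega) \;\le\; m\!\left(\frac{\omega}{2}\right) + m\!\left(\frac{\omega+1}{2}\right)
```

The right-hand side is the multiplicity function of the dilated space V1; since V0 is contained in V1, the multiplicity of V0 can be no larger.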
Abstract:
Xenodiagnoses with Lutzomya yungi, applied to the edges of the ulcers of patients infected with Leishmania braziliensis before and after treatment with 10 doses of a pentavalent antimonial and an aminoglycoside, demonstrate the patient's condition as a reservoir of leishmanias for endophagic sandflies, and the usefulness of early, specific treatment that leads not only to clinical cure but also to the elimination of the risk of eventual intradomiciliary transmission by insects that bite inside the home at night.
Abstract:
Report on a scientific sojourn at the University of Linköping from April to July 2007. Monitoring of the air intake system of an automotive engine is important to meet emission-related legislative diagnosis requirements. During the research, the problem of fault detection in the air intake system was stated as a constraint satisfaction problem over continuous domains with a large number of variables and constraints. This problem was solved using interval-based consistency techniques, which are shown to be particularly efficient for checking the consistency of the Analytical Redundancy Relations (ARRs) while dealing with uncertain measurements and parameters, using experimental data. All experiments were performed on a four-cylinder turbo-charged spark-ignited SAAB engine located in the research laboratory of the Vehicular System Group, University of Linköping.
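The core idea of interval-based consistency checking of an ARR can be sketched in a few lines: evaluate the residual of a redundancy relation with interval arithmetic over the uncertain measurements and parameters, and flag a fault only when zero falls outside the resulting interval. The ARR below (a pressure/flow relation with a hypothetical gain `c`) is purely illustrative, not the engine model from the report.

```python
# Minimal sketch of interval-based consistency checking for an
# analytical redundancy relation (ARR).  Intervals are (lo, hi) tuples.

def iv_mul(a, b):
    """Interval multiplication: take min/max over endpoint products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def iv_sub(a, b):
    """Interval subtraction: a - b."""
    return (a[0] - b[1], a[1] - b[0])

def arr_consistent(p_meas, w_meas, c):
    """Hypothetical ARR residual r = p - c * w.
    The measurements are consistent with the model iff 0 lies in r."""
    r = iv_sub(p_meas, iv_mul(c, w_meas))
    return r[0] <= 0.0 <= r[1]

# Fault-free case: the measured pressure interval overlaps the prediction.
ok = arr_consistent((0.95, 1.05), (0.9, 1.1), (0.98, 1.02))   # True
# Faulty case: a biased sensor pushes the residual interval off zero.
bad = arr_consistent((1.5, 1.6), (0.9, 1.1), (0.98, 1.02))    # False
```

Because interval evaluation encloses every value the residual could take under the stated uncertainty, an empty intersection with zero is a guaranteed inconsistency, which is what makes the technique attractive for robust fault detection.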
Abstract:
Recent attempts to incorporate optimal fiscal policy into New Keynesian models subject to nominal inertia have tended to assume that policy makers are benevolent and have access to a commitment technology. A separate literature, on the New Political Economy, has focused on real economies where there is strategic use of policy instruments in a world of political conflict. In this paper we combine these literatures and assume that policy is set in a New Keynesian economy by one of two policy makers facing electoral uncertainty (in terms of infrequent elections and an endogenous voting mechanism). The policy makers generally share the social welfare function, but differ in their preferences over fiscal expenditure (in its size and/or composition). Given the environment, policy is realistically constrained to be time-consistent. In a sticky-price economy, such heterogeneity gives rise to the possibility of one policy maker utilising (nominal) debt strategically to tie the hands of the other party and influence the outcome of any future elections. This can give rise to a deficit bias, implying a sub-optimally high level of steady-state debt, and can also imply a sub-optimal response to shocks. The steady-state distortions and inflation bias this generates, combined with the volatility induced by the electoral cycle in a sticky-price environment, can be significant.
Abstract:
If choices depend on the decision maker's mood, is the attempt to derive any consistency in choice doomed? In this paper we argue that, even with full unpredictability of mood, the way choices from a menu relate to choices from another menu exhibits some structure. We present two alternative models of 'moody choice' and show that, in either of them, not all choice patterns are possible. Indeed, we characterise both models in terms of consistency requirements of the observed choice data.
Abstract:
Executive Summary. Many commentators have criticised the strategy currently used to finance the Scottish Parliament – both the block grant system and the small degree of fiscal autonomy devised in the Calman report and the UK government's 2009 White Paper. Nevertheless, fiscal autonomy has now been conceded in principle. This paper sets out to identify formally what level of autonomy would be best for the Scottish economy and the institutional changes needed to support that arrangement. Our conclusions are in line with the Steel Commission: that significantly more fiscal powers need to be transferred to Scotland. But what we can then do, which the Steel Commission could not, is to give a detailed blueprint for how this proposal might be implemented in practice. We face two problems. The existing block grant system can be, and has been, criticised from such a wide variety of points of view that it effectively has no credibility left. On the other hand, the Calman proposals (and the UK government proposals that followed) are unworkable because, to function, they require information that the policy makers cannot possibly have; and because, without borrowing for current activities, they contain no mechanism to reconcile contractual spending (most of the budget) with variable revenue flows – which is to invite an eventual breakdown. But in its attempt to fix these problems, the UK White Paper introduces three further difficulties: new grounds for quarrels between the UK and Scottish governments, a long-term deflation bias, and a loss of devolution.
Abstract:
Study carried out during a stay at the Physics Department of New York University, United States, between 2006 and 2008. One of the most influential observations in modern cosmology has been the empirical determination that the Universe is currently in a phase of Accelerated Expansion (AE). This phenomenon implies either that the Universe is dominated by a new matter/energy sector, or that General Relativity ceases to be valid on cosmological scales. The first possibility comprises the Dark Energy (DE) models, whose main problem is that DE must have properties so special that they are difficult to justify theoretically. The second possibility requires the construction of consistent theories of gravity modified at large distances (GMGD), which are a generalization of massive gravity models. Phenomenological interest in these theories also resurged with the appearance of the first examples of GMGD models, such as the Dvali-Gabadadze-Porrati (DGP) model, which consists of a type of brane in an extra dimension. Unfortunately, however, this model cannot consistently explain the AE of the Universe. One of the goals of this project was to establish the internal and phenomenological viability of GMGD models. From the phenomenological point of view, we focused on the most important practical question: finding observational signatures that make it possible to distinguish GMGD models from DE models. At a more theoretical level, we also investigated the meaning of the instabilities of the DGP model. The other main goal we set ourselves was the construction of new GMGD theories. In the second part of this project, we developed the "Cascading DGP" model and demonstrated its consistency; it generalizes the DGP model to more extra dimensions and represents the second known consistent, Lorentz-invariant model in flat space.
The existence of GMGD models beyond DGP is of great interest, since it could make it possible to obtain the AE of the Universe in a purely geometric way.
Abstract:
This paper revisits the argument that the stabilisation bias that arises under discretionary monetary policy can be reduced if policy is delegated to a policymaker with redesigned objectives. We study four delegation schemes: price level targeting, interest rate smoothing, speed limits and straight conservatism. These can all increase social welfare in models with a unique discretionary equilibrium. We investigate how these schemes perform in a model with capital accumulation where uniqueness does not necessarily apply. We discuss how multiplicity arises and demonstrate that no delegation scheme is able to eliminate all potential bad equilibria. Price level targeting has two interesting features. It can create a new equilibrium that is welfare dominated, but it can also alter equilibrium stability properties and make coordination on the best equilibrium more likely.
Abstract:
This paper studies the behavior of a central bank that seeks to conduct policy optimally while having imperfect credibility and harboring doubts about its model. Taking the Smets-Wouters model as the central bank's approximating model, the paper's main findings are as follows. First, a central bank's credibility can have large consequences for how policy responds to shocks. Second, central banks that have low credibility can benefit from a desire for robustness because this desire motivates the central bank to follow through on policy announcements that would otherwise not be time-consistent. Third, even relatively small departures from perfect credibility can produce important declines in policy performance. Finally, as a technical contribution, the paper develops a numerical procedure to solve the decision problem facing an imperfectly credible policymaker that seeks robustness.
Abstract:
This paper presents an axiomatic characterization of difference-form group contests, that is, contests fought among groups and where their probability of victory depends on the difference of their effective efforts. This axiomatization rests on the property of Equalizing Consistency, stating that the difference between winning probabilities in the grand contest and in the smaller contest should be identical across all participants in the smaller contest. This property overcomes some of the drawbacks of the widely used ratio-form contest success functions. Our characterization shows that the criticisms commonly held against difference-form contest success functions, such as lack of scale invariance and zero elasticity of augmentation, are unfounded. By clarifying the properties of this family of contest success functions, this axiomatization can help researchers find the functional form best suited to their application of interest.
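The simplest member of the difference-form family can be sketched directly; the functional form below (uniform share plus a slope on the deviation from mean effort) is an illustrative textbook variant, not necessarily the exact family axiomatized in the paper. It makes the Equalizing Consistency property easy to verify numerically: dropping a contestant shifts every remaining contestant's winning probability by the same amount.

```python
def diff_form_probs(efforts, alpha=0.1):
    """Difference-form contest success function (illustrative variant):
    p_i = 1/n + alpha * (x_i - mean(x)).
    Probabilities sum to 1 by construction, since the deviations from
    the mean sum to zero; alpha must be small enough that every p_i
    stays in [0, 1]."""
    n = len(efforts)
    mean = sum(efforts) / n
    return [1.0 / n + alpha * (x - mean) for x in efforts]

# Grand contest with three players, and the smaller contest obtained
# by removing the third player.
grand = diff_form_probs([3.0, 1.0, 2.0])
small = diff_form_probs([3.0, 1.0])
# Equalizing Consistency: grand[i] - small[i] is identical for the two
# players present in both contests.
```

The equalizing property follows algebraically: the difference p_i(grand) - p_i(small) equals (1/n - 1/(n-1)) + alpha * (mean_small - mean_grand), which does not depend on i.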
Abstract:
In the context of the two-stage threshold model of decision making, with the agent’s choices determined by the interaction of three “structural variables,” we study the restrictions on behavior that arise when one or more of these variables are exogenously known. Our results supply necessary and sufficient conditions for consistency with the model for all possible states of partial knowledge, and for both single- and multivalued choice functions.
Abstract:
Time-inconsistency is an essential feature of many policy problems (Kydland and Prescott, 1977). This paper presents and compares three methods for computing Markov-perfect optimal policies in stochastic nonlinear business cycle models. The methods considered include value function iteration, generalized Euler equations, and parameterized shadow prices. In the context of a business cycle model in which a fiscal authority chooses government spending and income taxation optimally, while lacking the ability to commit, we show that the solutions obtained using value function iteration and generalized Euler equations are somewhat more accurate than that obtained using parameterized shadow prices. Among these three methods, we show that value function iteration can be applied easily, even to environments that include a risk-sensitive fiscal authority and/or inequality constraints on government spending. We show that the risk-sensitive fiscal authority lowers government spending and income taxation, reducing the disincentive households face to accumulate wealth.
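Value function iteration, the first of the three methods compared, amounts to repeatedly applying the Bellman operator on a grid until a fixed point is reached. The sketch below runs it on a toy deterministic cake-eating problem rather than the paper's fiscal-policy model; the grid, utility, and discount factor are illustrative choices.

```python
import math

def vfi(grid, beta=0.95, tol=1e-8, max_iter=2000):
    """Value function iteration for a toy cake-eating problem:
    V(w) = max_{w' <= w} log(w - w' + eps) + beta * V(w').
    The small eps keeps log() finite when all of w is carried over."""
    n = len(grid)
    V = [0.0] * n
    for _ in range(max_iter):
        V_new = []
        for i, w in enumerate(grid):
            best = -math.inf
            for j in range(i + 1):          # feasibility: w' <= w
                c = w - grid[j]             # consumption this period
                best = max(best, math.log(c + 1e-12) + beta * V[j])
            V_new.append(best)
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new                    # converged fixed point
        V = V_new
    return V

grid = [0.1 * i for i in range(1, 21)]      # cake sizes 0.1 .. 2.0
V = vfi(grid)                               # V is increasing in wealth
```

Because the Bellman operator is a contraction with modulus beta, the iteration converges from any initial guess, which is part of why the paper finds the method easy to apply even with risk-sensitivity or inequality constraints added.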
Abstract:
OBJECTIVES: Advances in biopsychosocial science have underlined the importance of taking social history and a life course perspective into consideration in primary care. For both clinical and research purposes, this study aims to develop and validate a standardised instrument measuring both material and social deprivation at an individual level. METHODS: We identified relevant potential questions regarding deprivation using a systematic review, structured interviews, focus group interviews and a think-aloud approach. Item response theory analysis was then used to reduce the length of the 38-item questionnaire and derive the deprivation in primary care questionnaire (DiPCare-Q) index, using data obtained from a random sample of 200 patients during their planned visits to an ambulatory general internal medicine clinic. Patients completed the questionnaire a second time over the phone 3 days later to enable us to assess reliability. Content validity of the DiPCare-Q was then assessed by 17 general practitioners. Psychometric properties and validity of the final instrument were investigated in a second set of patients. The DiPCare-Q was administered to a random sample of 1898 patients attending one of 47 different private primary care practices in western Switzerland, along with questions on subjective social status, education, source of income, welfare status and subjective poverty. RESULTS: Deprivation was defined in three distinct dimensions: material (eight items), social (five items) and health deprivation (three items). Item consistency was high in both the derivation set (Kuder-Richardson Formula 20 (KR20) = 0.827) and the validation set (KR20 = 0.778). The DiPCare-Q index was reliable (intraclass correlation coefficient = 0.847) and was correlated with subjective social status (r(s) = -0.539). CONCLUSION: The DiPCare-Q is a rapid, reliable and validated instrument that may prove useful for measuring both material and social deprivation in primary care.
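The KR20 statistic reported above has a closed form that is straightforward to compute from binary item responses: KR20 = (k/(k-1)) * (1 - sum(p_j * q_j) / var(total)), where p_j is the proportion answering item j positively and q_j = 1 - p_j. The sketch below is a generic implementation of that formula, not code from the study; the sample data are invented for illustration.

```python
def kr20(responses):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) item scores.
    responses: list of respondents, each a list of k item scores."""
    n = len(responses)
    k = len(responses[0])
    # Sum of item variances p_j * q_j over the k items.
    pq = 0.0
    for j in range(k):
        p = sum(r[j] for r in responses) / n
        pq += p * (1.0 - p)
    # Population variance of the respondents' total scores.
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1.0)) * (1.0 - pq / var)

# Hypothetical 4 respondents x 3 binary items.
data = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
reliability = kr20(data)  # 0.75 for this toy dataset
```

Values near the study's 0.827 and 0.778 indicate that the items within each set co-vary strongly with the total score, i.e. high internal consistency.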