Abstract:
Recent work on optimal monetary and fiscal policy in New Keynesian models suggests that it is optimal to allow steady-state debt to follow a random walk. Leith and Wren-Lewis (2012) consider the nature of the time inconsistency involved in such a policy and its implications for discretionary policy-making. We show that governments are tempted, given inflationary expectations, to use their monetary and fiscal instruments in the initial period to change the ultimate debt burden they need to service. We demonstrate that this temptation is eliminated only if, following shocks, the new steady-state debt returns to the original (efficient) debt level, even though there is no explicit debt target in the government's objective function. Analytically and in a series of numerical simulations, we show that which instrument is used to stabilize the debt depends crucially on the degree of nominal inertia and the size of the debt stock. We also show that the welfare consequences of introducing debt are negligible for precommitment policies, but can be significant for discretionary policy. Finally, we assess the credibility of commitment policy by considering a quasi-commitment policy which allows for different probabilities of reneging on past promises. This online Appendix extends the results of the paper.
Abstract:
We describe an explicit relationship between strand diagrams and piecewise-linear functions for elements of Thompson’s group F. Using this correspondence, we investigate the dynamics of elements of F, and we show that conjugacy of one-bump functions can be described by a Mather-type invariant.
Abstract:
In recent years Spain, and Catalonia in particular, has become the country with the highest number of adoptions per inhabitant in the world. This fact leads us to ask what motivations drive applicants to want to adopt. In this study we categorize the main initial motivations for international adoption expressed during the assessment processes for obtaining the Certificate of Suitability, in a Catalan sample of 331 people. The results indicate that most applicants adopt with their own needs and their difficulties in conceiving in mind, rather than with the child they wish to adopt in mind.
Abstract:
Diffusion MRI is a well established imaging modality providing a powerful way to probe the structure of the white matter non-invasively. Despite its potential, the intrinsic long scan times of these sequences have hampered their use in clinical practice. For this reason, a large variety of methods have been recently proposed to shorten the acquisition times. Among them, spherical deconvolution approaches have gained a lot of interest for their ability to reliably recover the intra-voxel fiber configuration with a relatively small number of data samples. To overcome the intrinsic instabilities of deconvolution, these methods use regularization schemes generally based on the assumption that the fiber orientation distribution (FOD) to be recovered in each voxel is sparse. The well-known Constrained Spherical Deconvolution (CSD) approach resorts to Tikhonov regularization, based on an ℓ2-norm prior, which promotes a weak version of sparsity. Also, in the last few years compressed sensing has been advocated to further accelerate the acquisitions, and ℓ1-norm minimization is generally employed as a means to promote sparsity in the recovered FODs. In this paper, we provide evidence that the use of an ℓ1-norm prior to regularize this class of problems is somewhat inconsistent with the fact that the fiber compartments all sum up to unity. To overcome this ℓ1 inconsistency while exploiting sparsity more effectively than through an ℓ2 prior, we reformulate the reconstruction problem as a constrained formulation between a data term and a sparsity prior consisting of an explicit bound on the ℓ0-norm of the FOD, i.e. on the number of fibers. The method has been tested on both synthetic and real data. Experimental results show that the proposed ℓ0 formulation significantly reduces modeling errors compared to the state-of-the-art ℓ2 and ℓ1 regularization approaches.
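The kind of ℓ0-bounded, non-negative reconstruction this abstract describes can be sketched as iterative hard thresholding on a linear forward model y = A x. Everything below is an illustrative assumption, not the authors' implementation: the dictionary A of single-fiber responses, the function name, and the choice of solver are all hypothetical.

```python
import numpy as np

def iht_fod(A, y, k, n_iter=200, step=None):
    """Minimal sketch: recover a non-negative coefficient vector x with
    at most k nonzeros (fiber compartments) from y ~ A @ x, via
    projected gradient steps with hard thresholding."""
    if step is None:
        # conservative step size from the spectral norm of A
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * (A.T @ (y - A @ x))   # gradient step on the data term
        x = np.maximum(x, 0.0)               # non-negativity of the FOD
        if np.count_nonzero(x) > k:          # enforce the explicit l0 bound
            x[np.argsort(x)[:-k]] = 0.0      # keep only the k largest lobes
    return x
```

In practice the dictionary, the sampling scheme, and the constraint handling would all be far more structured than this toy projected-gradient loop.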
Abstract:
The measurement of inter-connectedness in an economy using input-output tables is not new; however, much of the previous literature has lacked any explicit dynamic dimension. Studies have tried to estimate the degree of inter-relatedness for an economy at a given point in time using one input-output table, and some have compared different economies at a point in time, but few have looked at how inter-connectedness within an economy changes over time. The publication in 2010 of a consistent series of input-output tables for Scotland offers the researcher the opportunity to track changes in the degree of inter-connectedness over the seven-year period 1998 to 2007. The paper is in two parts. A simple measure of inter-connectedness is introduced in the first part of the paper and applied to the Scottish tables. In the second part, an extraction method is applied sector by sector to the tables in order to estimate how inter-connectedness has changed over time for each industrial sector.
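The sector-by-sector extraction idea can be illustrated in a few lines: compute the Leontief-implied gross output, then recompute it with one sector's intermediate sales and purchases zeroed out; the output loss measures that sector's inter-connectedness. The particular extraction variant, the toy two-sector data, and the function names below are illustrative assumptions, not the paper's measure.

```python
import numpy as np

def leontief_output(Z, f, x):
    """Gross output implied by final demand f, given an intermediate
    transactions matrix Z and base-year gross output x (used to form
    the technical coefficients a_ij = z_ij / x_j)."""
    A = Z / x                                   # divide each column j by x[j]
    return np.linalg.solve(np.eye(len(f)) - A, f)

def hypothetical_extraction(Z, f, x, j):
    """Total output lost when sector j's intermediate sales and
    purchases are removed: a simple extraction-method variant."""
    Z0 = Z.copy()
    Z0[j, :] = 0.0
    Z0[:, j] = 0.0
    return leontief_output(Z, f, x).sum() - leontief_output(Z0, f, x).sum()
```

With a balanced table (row sums of Z plus f equal to x), `leontief_output` reproduces the observed gross output, which is the usual sanity check before extracting sectors.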
Abstract:
I put forward a concise and intuitive formula for calculating the valuation of a good in the presence of the expectation that further, related goods will soon become available. This valuation is tractable in the sense that it does not require the explicit resolution of the consumer's lifetime problem.
Abstract:
Domestic action on climate change is increasingly important in the light of the difficulties with international agreements and requires a combination of solutions, in terms of institutions and policy instruments. One way of achieving government carbon policy goals may be the creation of an independent body to advise, set or monitor policy. This paper critically assesses the Committee on Climate Change (CCC), which was created in 2008 as an independent body to help move the UK towards a low carbon economy. We look at the motivation for its creation in terms of: information provision, advice, monitoring, or policy delegation. In particular we consider its ability to overcome a time inconsistency problem by comparing and contrasting it with another independent body, the Monetary Policy Committee of the Bank of England. In practice the Committee on Climate Change appears to be the 'inverse' of the Monetary Policy Committee, in that it advises on what the policy goal should be rather than being responsible for achieving it. The CCC incorporates both advisory and monitoring functions to inform government and achieve a credible carbon policy over a long time frame. This is a similar framework to that adopted by Stern (2006), but the CCC operates on a continuing basis. We therefore believe the CCC is best viewed as a "Rolling Stern plus" body. There are also concerns as to how binding the budgets actually are and how the budgets interact with other energy policy goals and instruments, such as Renewable Obligation Contracts and the EU Emissions Trading Scheme. The CCC could potentially be reformed to include: an explicit information provision role; consumption-based accounting of emissions; and control of a policy instrument such as a balanced-budget carbon tax.
Abstract:
This note reviews the political-scientific literature on European competition policy (ECP) in the 2000s. Based on a data set extracted from four well-known journals, and using an upfront methodology and explicit criteria, it analyzes the literature both quantitatively and qualitatively. On the quantitative side, it shows that, although a few sub-policy areas are still neglected, ECP is not the under-researched policy it used to be. On the qualitative side, the literature has greatly improved since the 1990s: Almost all articles now present a clear research question, and most advance specific theoretical claims/hypotheses. Yet, improvements can be made on research design, statistical testing, and, above all, state-of-the-art theorizing (e.g. in the game-theoretical treatment of delegation problems). Indeed, it is paradoxical that ECP specialists do not pay more attention to theoretical questions which are so central to the actual policy area they study.
Abstract:
Synthesis report: This research examines (1) the carrying and use of weapons among adolescents and (2) the roles of environmental and individual factors in juvenile violence. The data were drawn from SMASH 2002 (Swiss multicenter adolescent survey on health 2002), a study in which a representative sample of 7548 students and apprentices aged 16 to 20 living in Switzerland were surveyed. In a first study, adolescents who had carried a weapon (knife, club, brass knuckles, pistol/other firearm, spray) during the year preceding the survey were compared with those who had not carried a weapon. Then, within the subsample of weapon carriers, those who had only carried a weapon were compared with those who had used a weapon in a fight. Individual, family, school, and social factors were examined using bivariate and multivariate analyses. 13.7% of young people living in Switzerland had carried a weapon in the year preceding the survey. 6.2% of female weapon carriers and 19.9% of male weapon carriers had used the weapon in a fight. Among both boys and girls, weapon carriers were more often delinquent and victims of physical violence. Male weapon carriers were more often apprentices, sensation seekers, and tattooed; they had poor relationships with their parents, fought under the influence of substances, and engaged in risky sexual behavior. Compared with female weapon carriers, female weapon users were more often daily smokers. Boys who had used their weapon were more often born abroad, lived in an urban environment, were apprentices, had a poor school context, engaged in risky sexual behavior, and were involved in fights under the influence of substances.
Our results show that carrying a weapon is a relatively frequent behavior among adolescents living in Switzerland and that a non-negligible proportion of these weapon carriers have used a weapon in a fight. Accordingly, a discussion of weapon carrying should be included in the clinical interview as well as in prevention programs targeting adolescents. In a second study, juvenile violence was defined as present if the adolescent had committed at least one of the following four offenses during the year preceding the survey: attacking an adult, snatching or stealing something, carrying a weapon, or using a weapon in a fight. Ecological levels were tested, resulting in a three-level model for boys (individual, class, and school levels) and, owing to the low prevalence of violence among girls, a one-level (individual) model for girls. Variables were assigned to each level based on the literature. The boys' multilevel model showed that the school level (10%) and the class level (24%) accounted for more than a third of the inter-individual variance in violent behavior. The factors associated with this behavior among girls were being a victim of physical violence and sensation seeking. For boys, the factors explaining violence were risky sexual behavior, sensation seeking, being a victim of physical violence, a poor relationship with parents, being depressed, and living in a single-parent family at the individual level; violence and antisocial acts at the class level; and being an apprentice at the school level.
Class-level interventions, as well as explicit school rules concerning violence and other risk behaviors, should be priorities for the prevention of violence among adolescents. Moreover, prevention should take sex differences into account.
Abstract:
Fixed delays in neuronal interactions arise through synaptic and dendritic processing. Previous work has shown that such delays, which play an important role in shaping the dynamics of networks of large numbers of spiking neurons with continuous synaptic kinetics, can be taken into account with a rate model through the addition of an explicit, fixed delay. Here we extend this work to account for arbitrary symmetric patterns of synaptic connectivity and generic nonlinear transfer functions. Specifically, we conduct a weakly nonlinear analysis of the dynamical states arising via primary instabilities of the stationary uniform state. In this way we determine analytically how the nature and stability of these states depend on the choice of transfer function and connectivity. While this dependence is, in general, nontrivial, we make use of the smallness of the ratio of the delay in neuronal interactions to the effective time constant of integration to arrive at two general observations of physiological relevance: (1) fast oscillations are always supercritical for realistic transfer functions, and (2) traveling waves are preferred over standing waves given plausible patterns of local connectivity.
Abstract:
This paper provides an explicit cofibrant resolution of the operad encoding Batalin-Vilkovisky algebras. Thus it defines the notion of homotopy Batalin-Vilkovisky algebras with the required homotopy properties. To define this resolution we extend the theory of Koszul duality to operads and properads that are defined by quadratic and linear relations. The operad encoding Batalin-Vilkovisky algebras is shown to be Koszul in this sense. This allows us to prove a Poincaré-Birkhoff-Witt Theorem for such an operad and to give an explicit small quasi-free resolution for it. This particular resolution enables us to describe the deformation theory and homotopy theory of BV-algebras and of homotopy BV-algebras. We show that any topological conformal field theory carries a homotopy BV-algebra structure which lifts the BV-algebra structure on homology. The same result is proved for the singular chain complex of the double loop space of a topological space endowed with an action of the circle. We also prove the cyclic Deligne conjecture with this cofibrant resolution of the operad BV. We develop the general obstruction theory for algebras over the Koszul resolution of a properad and apply it to extend a conjecture of Lian-Zuckerman, showing that certain vertex algebras have an explicit homotopy BV-algebra structure.
Abstract:
Real-world objects are often endowed with features that violate Gestalt principles. In our experiment, we examined the neural correlates of binding under conflict conditions in terms of the binding-by-synchronization hypothesis. We presented an ambiguous stimulus ("diamond illusion") to 12 observers. The display consisted of four oblique gratings drifting within circular apertures. Its interpretation fluctuates between bound ("diamond") and unbound (component gratings) percepts. To model a situation in which Gestalt-driven analysis contradicts the perceptually explicit bound interpretation, we modified the original diamond (OD) stimulus by speeding up one grating. Using OD and modified diamond (MD) stimuli, we managed to dissociate the neural correlates of Gestalt-related (OD vs. MD) and perception-related (bound vs. unbound) factors. Their interaction was expected to reveal the neural networks synchronized specifically in the conflict situation. The synchronization topography of EEG was analyzed with the multivariate S-estimator technique. We found that good Gestalt (OD vs. MD) was associated with a higher posterior synchronization in the beta-gamma band. The effect of perception manifested itself as reciprocal modulations over the posterior and anterior regions (theta/beta-gamma bands). Specifically, higher posterior and lower anterior synchronization supported the bound percept, and the opposite was true for the unbound percept. The interaction showed that binding under challenging perceptual conditions is sustained by enhanced parietal synchronization. We argue that this distributed pattern of synchronization relates to the processes of multistage integration ranging from early grouping operations in the visual areas to maintaining representations in the frontal networks of sensory memory.
Abstract:
Social scientists often estimate models from correlational data, where the independent variable has not been exogenously manipulated; they also make implicit or explicit causal claims based on these models. When can these claims be made? We answer this question by first discussing design and estimation conditions under which model estimates can be interpreted, using the randomized experiment as the gold standard. We show how endogeneity--which includes omitted variables, omitted selection, simultaneity, common methods bias, and measurement error--renders estimates causally uninterpretable. Second, we present methods that allow researchers to test causal claims in situations where randomization is not possible or when causal interpretation is confounded, including fixed-effects panel, sample selection, instrumental variable, regression discontinuity, and difference-in-differences models. Third, we take stock of the methodological rigor with which causal claims are being made in a social sciences discipline by reviewing a representative sample of 110 articles on leadership published in the previous 10 years in top-tier journals. Our key finding is that researchers fail to address at least 66% and up to 90% of the design and estimation conditions that make causal claims invalid. We conclude by offering 10 suggestions on how to improve non-experimental research.
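Among the designs the abstract lists, the canonical 2x2 difference-in-differences estimator is the simplest to write down: the treated group's pre/post change net of the control group's change. The function name and the synthetic numbers in the usage note are illustrative assumptions, not data from the reviewed articles.

```python
import numpy as np

def diff_in_diff(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Canonical 2x2 difference-in-differences estimate: the change in
    the treated group's mean outcome minus the change in the control
    group's mean outcome (which absorbs the common time trend)."""
    return (np.mean(y_treat_post) - np.mean(y_treat_pre)) \
         - (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre))
```

For example, if both groups share a trend of +2 and treatment adds +3, data such as treated (1, 2, 3) → (6, 7, 8) and control (0, 1, 2) → (2, 3, 4) recover an estimate of 3. The identifying assumption, as the abstract's discussion of endogeneity suggests, is that the groups would have followed parallel trends absent treatment.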
Abstract:
The main goal of this article is to give an explicit rigid analytic uniformization of the maximal toric quotient of the Jacobian of a Shimura curve over Q at a prime dividing exactly the level. This result can be viewed as complementary to the classical theorem of Cerednik and Drinfeld which provides rigid analytic uniformizations at primes dividing the discriminant. As a corollary, we offer a proof of a conjecture formulated by M. Greenberg in his paper on Stark-Heegner points and quaternionic Shimura curves, thus making Greenberg's construction of local points on elliptic curves over Q unconditional.
Abstract:
BACKGROUND: Efforts to decrease overuse of health care may result in underuse. Overuse and underuse of colonoscopy have never been simultaneously evaluated in the same patient population. METHODS: In this prospective observational study, the appropriateness and necessity of referral for colonoscopy were evaluated by using explicit criteria developed by a standardized expert panel method. Inappropriate referrals constituted overuse. Patients with necessary colonoscopy indications who were not referred constituted underuse. Consecutive ambulatory patients with lower gastrointestinal (GI) symptoms from 22 general practices in Switzerland, a country with ready access to colonoscopy, were enrolled during a 4-week period. Follow-up data were obtained at 3 months for patients who did not undergo a necessary colonoscopy. RESULTS: Eight thousand seven hundred sixty patient visits were screened for inclusion; 651 patients (7.4%) had lower GI symptoms (mean age 56.4 years, 68% women). Of these, 78 (12%) were referred for colonoscopy. Indications for colonoscopy in 11 patients (14% of colonoscopy referrals or 1.7% of all patients with lower GI symptoms) were judged inappropriate. Among 573 patients not referred for the procedure, underuse ranged between 11% and 28% of all patients with lower GI symptoms, depending on the criteria used. CONCLUSIONS: Applying criteria from an expert panel of nationally recognized experts indicates that underuse of referral for colonoscopy exceeds overuse in primary care in Switzerland. To improve quality of care, both overuse and underuse of important procedures must be addressed.