920 results for New Keynesian model, Bayesian methods, Monetary policy, Great Inflation


Relevance:

100.00%

Publisher:

Abstract:

Measuring Job Openings: Evidence from Swedish Plant Level Data. In modern macroeconomic models, "job openings" are a key component. Thus, when taking these models to the data, we need an empirical counterpart to the theoretical concept of job openings. To achieve this, the literature relies on job vacancies measured either in survey or register data. Insofar as this measure captures the concept of job openings well, we should see a tight relationship between vacancies and subsequent hires at the micro level. To investigate this, I analyze a new data set of Swedish hires and job vacancies at the plant level covering the period 2001-2012. I find that vacancies have little power in predicting hires over and above (i) whether the number of vacancies is positive and (ii) plant size. Building on this, I propose an alternative measure of job openings in the economy. This measure (i) better predicts hiring at the plant level and (ii) provides a better-fitting aggregate matching function than the traditional vacancy measure.

Firm Level Evidence from Two Vacancy Measures. Using firm-level survey and register data for both Sweden and Denmark, we show systematic mismeasurement in both vacancy measures. While the register-based measure on the aggregate constitutes a quarter of the survey-based measure, the latter is not a superset of the former. To obtain the full set of unique vacancies in these two databases, the number of survey vacancies should be multiplied by approximately 1.2. Importantly, this adjustment factor varies over time and across firm characteristics. Our findings have implications for both the search-matching literature and policy analysis based on vacancy measures: observed changes in vacancies can be an outcome of changes in mismeasurement, and are not necessarily changes in the actual number of vacancies.

Swedish Unemployment Dynamics. We study the contribution of different labor market flows to business cycle variations in unemployment in the context of a dual labor market. To this end, we develop a decomposition method that distinguishes between permanent and temporary employment. We also allow for the slow convergence to steady state that is characteristic of European labor markets. We apply the method to a new Swedish data set covering the period 1987-2012 and show that the relative contributions of inflows to and outflows from unemployment are roughly 60/30. The remaining 10% is due to flows not involving unemployment. Even though temporary contracts cover only 9-11% of the working-age population, variations in flows involving temporary contracts account for 44% of the variation in unemployment. We also show that the importance of flows involving temporary contracts is likely to be understated if one does not account for non-steady-state dynamics.

The New Keynesian Transmission Mechanism: A Heterogeneous-Agent Perspective. We argue that a two-agent version of the standard New Keynesian model, in which a "worker" receives only labor income and a "capitalist" only profit income, offers insights about how income inequality affects the monetary transmission mechanism. Under rigid prices, monetary policy affects the distribution of consumption, but it has no effect on output, as workers choose not to change their hours worked in response to wage movements. In the corresponding representative-agent model, in contrast, hours do rise after a monetary policy loosening due to a wealth effect on labor supply: profits fall, thus reducing the representative worker's income. If wages are rigid too, however, the monetary transmission mechanism is active and resembles that in the corresponding representative-agent model. Here, workers are not on their labor supply curve and hence respond passively to demand, and profits are procyclical.
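A common way to make the aggregate comparison concrete is a Cobb-Douglas matching function estimated in logs; the specification below is a standard illustrative form (the exact functional form used in the thesis is not given in the abstract), with V_t standing for either the traditional vacancy series or the proposed job-openings measure:

```latex
% Illustrative Cobb--Douglas aggregate matching function (assumed standard form, not the thesis's exact specification)
% H_t: aggregate hires, U_t: unemployment, V_t: vacancies or the alternative job-openings measure
H_t = \mu\, U_t^{\alpha}\, V_t^{1-\alpha}
\quad\Longleftrightarrow\quad
\log H_t = \log\mu + \alpha \log U_t + (1-\alpha)\log V_t .
```

Under this reading, a "better fitting aggregate matching function" means that replacing the traditional vacancy series with the alternative job-openings measure improves the fit of this relation.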

Relevance:

100.00%

Publisher:

Abstract:

National guidance and clinical guidelines have recommended multidisciplinary teams (MDTs) for cancer services in order to bring together specialists in the relevant disciplines, ensure that clinical decisions are fully informed, and coordinate care effectively. However, the effectiveness of cancer teams had not previously been evaluated systematically. A random sample of 72 breast cancer teams in England was studied (548 members in six core disciplines), stratified by region and caseload. Information about team constitution, processes, effectiveness, clinical performance, and members' mental well-being was gathered using appropriate instruments. Two input variables, team workload (P=0.009) and the proportion of breast care nurses (P=0.003), positively predicted overall clinical performance in multivariate analysis using a two-stage regression model. There were significant correlations between individual team inputs, team composition variables, and clinical performance. Some disciplines consistently perceived their team's effectiveness differently from the mean. Teams with shared leadership of their clinical decision-making were the most effective. The mental well-being of team members appeared significantly better than in previous studies of cancer clinicians, the NHS, and the general population. This study established that team composition, working methods, and workloads are related to measures of effectiveness, including the quality of clinical care. © 2003 Cancer Research UK.

Relevance:

100.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, even though uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis, on the grounds that "n = all", is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
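As background for the tensor-factorization side of this comparison, a generic PARAFAC-style latent class factorization of the probability mass function of p categorical variables can be written as follows (a standard form from this literature, not necessarily the exact parameterization of Chapter 2):

```latex
% Generic latent class (PARAFAC) factorization of a multivariate categorical pmf
% y = (y_1, ..., y_p), with y_j taking values in {1, ..., d_j}
P(y_1 = c_1, \ldots, y_p = c_p) = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},
\qquad \nu_h \ge 0, \quad \sum_{h=1}^{k} \nu_h = 1, \quad \sum_{c=1}^{d_j} \lambda^{(j)}_{h c} = 1 .
```

The smallest k admitting such a representation is the nonnegative rank of the associated probability tensor, which is the quantity related in Chapter 2 to the support of a log-linear model.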

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
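For orientation, one standard notion of a KL-optimal Gaussian approximation is moment matching: among Gaussians q, the divergence KL(π ‖ q) from a posterior π is minimized by matching the posterior mean and covariance (a general fact stated here as background; the abstract does not specify whether this is exactly the criterion optimized in Chapter 4):

```latex
% Moment matching gives the Gaussian closest to \pi in KL(\pi \| q) (standard fact, shown as background)
\hat q = \arg\min_{q = \mathcal{N}(\mu, \Sigma)} \mathrm{KL}\left(\pi \,\middle\|\, q\right)
       = \mathcal{N}\big(\mathbb{E}_{\pi}[\theta],\; \mathrm{Cov}_{\pi}(\theta)\big).
```

The chapter's contribution is then to bound the divergence between the exact posterior and the optimal Gaussian approximation for log-linear models with Diaconis--Ylvisaker priors, with convergence rates and finite-sample bounds.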

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
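The starting point is de Haan's spectral representation of a (simple) max-stable process, quoted here in its standard form; endowing the support points with velocities and lifetimes is the extension introduced in Chapter 5:

```latex
% de Haan's spectral representation of a simple max-stable process (standard form)
% {\zeta_i}: points of a Poisson process on (0, \infty) with intensity \zeta^{-2} d\zeta
% {W_i}: i.i.d. copies of a nonnegative process with E[W(s)] = 1
X(s) = \max_{i \ge 1} \zeta_i\, W_i(s), \qquad s \in \mathcal{S}.
```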

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC). MCMC is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
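To make the idea of an approximating kernel concrete, the sketch below runs a random-walk Metropolis step whose acceptance ratio is computed on a random subset of the data (a generic illustration of kernel approximation via data subsets; the model, names, and tuning values are hypothetical, and this is not the specific construction analyzed in Chapter 6):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y_i ~ N(theta, 1), with a flat prior on theta (prior term omitted for brevity).
y = rng.normal(loc=1.5, scale=1.0, size=100_000)

def loglik_subset(theta, subset):
    # Subset log-likelihood rescaled to the full data size (a noisy stand-in for the exact log-likelihood).
    return len(y) / len(subset) * np.sum(-0.5 * (y[subset] - theta) ** 2)

def approx_mh(n_iter=2_000, subset_size=1_000, step=0.05):
    theta = 0.0
    samples = np.empty(n_iter)
    for t in range(n_iter):
        subset = rng.choice(len(y), size=subset_size, replace=False)
        proposal = theta + step * rng.normal()
        # Using the same subset for current and proposed values keeps the step cheap,
        # but the resulting transition kernel only approximates the exact Metropolis-Hastings kernel.
        log_alpha = loglik_subset(proposal, subset) - loglik_subset(theta, subset)
        if np.log(rng.uniform()) < log_alpha:
            theta = proposal
        samples[t] = theta
    return samples

draws = approx_mh()
print(draws[1_000:].mean())  # crude posterior mean estimate; should sit near the sample mean of y
```

Chapter 6's framework addresses how much error of this kind should be tolerated for a given loss function and computational budget.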

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
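The truncated-normal data augmentation scheme mentioned above is the Albert-Chib sampler for probit regression; a minimal generic implementation is sketched below (the simulated rare-event data and prior are illustrative assumptions, and this is not the quantitative advertising application of Chapter 7):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# Toy rare-event data: a strongly negative intercept makes successes scarce (on the order of 1% of observations).
n, p = 5_000, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([-2.5, 0.5, -0.3])
ybin = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def probit_da_gibbs(n_iter=500, prior_var=100.0):
    """Albert-Chib data augmentation Gibbs sampler for probit regression with a N(0, prior_var * I) prior."""
    beta = np.zeros(p)
    V = np.linalg.inv(X.T @ X + np.eye(p) / prior_var)  # full-conditional covariance of beta (fixed)
    L = np.linalg.cholesky(V)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # Step 1: sample latent utilities z_i ~ N(x_i' beta, 1), truncated to match the observed outcome.
        mu = X @ beta
        lower = np.where(ybin == 1.0, -mu, -np.inf)  # z_i > 0 when y_i = 1
        upper = np.where(ybin == 1.0, np.inf, -mu)   # z_i <= 0 when y_i = 0
        z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
        # Step 2: sample beta from its Gaussian full conditional given the latent utilities.
        beta = V @ (X.T @ z) + L @ rng.normal(size=p)
        draws[t] = beta
    return draws

draws = probit_da_gibbs()
print(draws[-250:].mean(axis=0))  # rough posterior means; autocorrelation grows as successes become rarer
```

As the abstract notes, it is precisely this large-sample, few-successes regime in which such samplers mix slowly, with a spectral gap shrinking at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor.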

Relevance:

100.00%

Publisher:

Abstract:

This dissertation examines the drivers and implications of international capital flows. The overarching motivation is the observation that countries not at the centre of global financial markets are subject to considerable spillovers from centre countries, notably from their monetary policy. I present new empirical evidence on the determinants of the observed patterns of international capital flows and monetary policy spillovers, and study their effect on both financial markets and the real economy. In Chapter 2 I provide evidence on the determinants of a puzzling negative correlation observed between productivity growth and net capital inflows to developing and emerging market economies (EMEs) since 1980. By disaggregating net capital inflows into their gross components, I show that this negative correlation is explained by capital outflows related to purchases of very liquid assets from the fastest growing countries. My results suggest that a desire for international portfolio diversification in liquid assets by fast growing countries is driving much of the original puzzle. In the remainder of my dissertation I pivot to study the foreign characteristics that drive international capital flows and monetary policy spillovers, with a particular focus on the role of unconventional monetary policy in the United States (U.S.). In Chapter 3 I show that a significant portion of the heterogeneity in EMEs' asset price adjustment following the quantitative easing operations by the Federal Reserve (the Fed) during 2008-2014 can be explained by the degree of bilateral capital market frictions between these countries and the U.S. This is true even after accounting for capital controls, exchange rate regimes, and domestic monetary policies. Chapter 4, co-authored with Michal Ksawery Popiel, studies unconventional monetary policy in a small open economy, looking specifically at the case of Canada since the global financial crisis. We quantify the effect Canadian unconventional monetary policy shocks had on the real economy, while carefully controlling for and quantifying spillovers from U.S. unconventional monetary policy. Our results indicate that the Bank of Canada's unconventional monetary policy increased Canadian output significantly from 2009-2010, but that spillovers from the Fed's policy were even more important for increasing Canadian output after 2008.
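For reference, the gross/net disaggregation used in Chapter 2 rests on the standard balance-of-payments identity (background accounting, not a result of the dissertation):

```latex
% Standard balance-of-payments accounting identity (background, not a result of the dissertation)
\text{Net capital inflows}_t
  = \underbrace{\text{Gross inflows}_t}_{\text{net foreign purchases of domestic assets}}
  - \underbrace{\text{Gross outflows}_t}_{\text{net domestic purchases of foreign assets}} .
```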

Relevance:

100.00%

Publisher:

Abstract:

This paper provides a new reading of a classical economic relation: the short-run Phillips curve. Our point is that, when dealing with inflation and unemployment, policy-making can be understood as a multicriteria decision-making problem. Hence, we use so-called multiobjective programming in connection with a computable general equilibrium (CGE) model to determine the combinations of policy instruments that provide efficient combinations of inflation and unemployment. This approach results in an alternative version of the Phillips curve, labelled the efficient Phillips curve. Our aim is to present an application of CGE models to a new area of research that can be especially useful when addressing policy exercises with real data. We apply our methodological proposal to a particular regional economy, Andalusia, in the south of Spain. This tool can provide some keys for policy advice and policy implementation in the fight against unemployment and inflation.
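Schematically, the exercise can be written as a bi-objective program over the policy instruments, solved subject to the CGE equilibrium conditions; this is an illustrative formulation, since the paper's exact instruments and constraints are not listed in the abstract:

```latex
% Illustrative bi-objective program behind the "efficient Phillips curve" (assumed form)
\min_{x \in \mathcal{X}} \; \big( \pi(x),\; u(x) \big)
\qquad \text{subject to the CGE equilibrium conditions } g(x, y(x)) = 0,
```

where x collects the policy instruments, π(x) and u(x) are the inflation and unemployment rates implied by the resulting equilibrium, and the set of Pareto-efficient solutions traces out the efficient Phillips curve.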

Relevance:

100.00%

Publisher:

Abstract:

This paper confirms the importance of financial system behaviour for the credit channel of monetary policy across the entire European Union (EU). It uses panel fixed-effect estimations and quarterly data for 26 EU countries for the period from Q1 1999 to Q3 2006 in an adaptation of the Bernanke and Blinder (1988) model. The findings also reveal the high degree of foreign dependence and indebtedness of EU banking institutions and their similar reactions to the macroeconomic and monetary policy environments.
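As a sketch of what a panel fixed-effects credit-channel regression in this spirit looks like (an illustrative specification; the variables and functional form below are assumptions, not the paper's exact equation):

```latex
% Illustrative panel fixed-effects specification (assumed form, not the paper's exact equation)
\Delta \log L_{it} = \alpha_i + \beta\, \Delta r_t + \gamma' Z_{it} + \varepsilon_{it},
```

where L_{it} is bank credit in country i and quarter t, α_i is a country fixed effect, Δr_t is the change in the monetary policy rate, and Z_{it} collects macroeconomic and banking-sector controls.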

Relevance:

100.00%

Publisher:

Abstract:

Background Context: Percutaneous vertebroplasty (PVP) is a minimally invasive surgical procedure frequently performed in humans who need surgical treatment of vertebral fractures. PVP involves cement injection into the vertebral body, thereby providing rapid and significant pain relief.

Purpose: The testing of novel biomaterials depends on suitable animal models. The aim of this study was to develop a reproducible and safe model of PVP in sheep.

Study Design: Ex vivo and in vivo large animal model study (Merino sheep).

Methods: Ex vivo vertebroplasty was performed through a bilateral modified parapedicular access in 24 ovine lumbar hemivertebrae, divided into four groups (n=6). Cerament (Bone Support, Lund, Sweden) was the control material. In the experimental group a novel composite, Spine-Ghost, was tested, consisting of an alpha-calcium sulfate matrix enriched with micrometric particles of mesoporous bioactive glass. All vertebrae were assessed by micro-computed tomography (micro-CT) and underwent mechanical testing. For the in vivo study, 16 sheep were randomly allocated into control and experimental groups (n=8) and underwent PVP using the same bone cements. All vertebrae were assessed postmortem by micro-CT, histology, and reverse transcription-polymerase chain reaction (RT-PCR). This work was supported by the European Commission under the 7th Framework Programme for collaborative projects (600,000–650,000 USD).

Results: In the ex vivo model, the average defect volume was 1,275.46±219.29 mm³. Adequate defect filling with cement was observed. No mechanical failure was observed under loads higher than physiological. In the in vivo study, cardiorespiratory distress was observed in two animals, and one sheep presented mild neurologic deficits in the hind limbs before recovering.

Conclusions: The PVP model is considered suitable for preclinical in vivo studies, mimicking clinical application. All sheep recovered and completed a 6-month implantation period. There was no evidence of cement leakage into the vertebral foramen on postmortem examination.

Relevance:

100.00%

Publisher:

Abstract:

Marfan syndrome (MFS) is an autosomal dominant disease of connective tissue caused by mutations in the fibrillin-1 encoding gene FBN1. Patients present with cardiovascular, ocular and skeletal manifestations, and although the disease is fully penetrant, MFS is characterized by a wide clinical variability both within and between families. Here we describe a new mouse model of MFS that recapitulates the clinical heterogeneity of the syndrome in humans. Heterozygotes for the mutant Fbn1 allele mgΔloxPneo, carrying the same internal deletion of exons 19-24 as the mgΔ mouse model, present defective microfibrillar deposition, emphysema, deterioration of the aortic wall and kyphosis. However, the onset of clinical phenotypes is earlier in the 129/Sv than in the C57BL/6 background, indicating the existence of genetic modifiers of MFS between these two mouse strains. In addition, we characterized a wide clinical variability within the 129/Sv congenic heterozygotes, suggesting the involvement of epigenetic factors in disease severity. Finally, we show a strong negative correlation between overall levels of Fbn1 expression and the severity of the phenotypes, corroborating the suggested protective role of normal fibrillin-1 in MFS pathogenesis and supporting the development of therapies based on increasing Fbn1 expression.

Relevance:

100.00%

Publisher:

Abstract:

Reconciliation can be divided into stages, each stage representing the performance of a mining operation, such as long-term estimation, short-term estimation, planning, mining and mineral processing. The gold industry includes another stage, the budget, when the company informs the financial market of its annual production forecast. The division of reconciliation into stages increases the reliability of the annual budget informed by the mining companies, while also detecting and correcting the critical steps responsible for the overall estimation error through the optimization of sampling protocols and equipment. This paper develops and validates a new reconciliation model for the gold industry, which is based on correct sampling practices and the subdivision of reconciliation into stages, aiming for better grade estimates and more efficient control of the mining industry's processes, from resource estimation to final production.

Relevance:

100.00%

Publisher:

Abstract:

We present a new integrable model for correlated electrons which is based on so(5) symmetry. By using an eta-pairing realization we construct eigenstates of the Hamiltonian with off-diagonal long-range order. It is also shown that these states lie in the ground state sector. We exactly solve the model on a one-dimensional lattice by the Bethe ansatz.
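For orientation, the eta-pairing construction in its standard (Yang-type) lattice form is recalled below; whether the so(5) model uses exactly this realization is not spelled out in the abstract:

```latex
% Standard eta-pairing operators on a lattice of L sites (Yang-type construction, shown as background)
\eta^{\dagger} = \sum_{j=1}^{L} (-1)^{j}\, c^{\dagger}_{j\uparrow} c^{\dagger}_{j\downarrow},
\qquad
|\Psi_N\rangle = \big(\eta^{\dagger}\big)^{N} |0\rangle ,
```

states of this form exhibit off-diagonal long-range order in the pair-pair correlation function, which is the property attributed to the constructed eigenstates in the abstract.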

Relevance:

100.00%

Publisher:

Abstract:

The origin of M32, the closest compact elliptical galaxy (cE), is a long-standing puzzle of galaxy formation in the Local Group. Our N-body/smoothed particle hydrodynamics simulations suggest a new scenario in which the strong tidal field of M31 can transform a spiral galaxy into a compact elliptical galaxy. As a low-luminosity spiral galaxy plunges into the central region of M31, most of the outer stellar and gaseous components of its disk are dramatically stripped as a result of M31's tidal field. The central bulge component, on the other hand, is only weakly influenced by the tidal field, owing to its compact configuration, and retains its morphology. M31's strong tidal field also induces rapid gas transfer to the central region, triggers a nuclear starburst, and consequently forms the central high-density and more metal-rich stellar populations with relatively young ages. Thus, in this scenario, M32 was previously the bulge of a spiral galaxy that tidally interacted with M31 several gigayears ago. Furthermore, we suggest that cEs like M32 are rare as a result of both the rather narrow parameter space of tidal interactions that morphologically transform spiral galaxies into cEs and the very short timescale (less than a few times 10^9 yr) over which cEs are swallowed by their giant host galaxies (via dynamical friction) after their formation.

Relevance:

100.00%

Publisher:

Abstract:

To develop an experimental model in rabbits for analysing the efficiency of extracorporeal shock wave therapy (ESWT) for Peyronie's disease, we used 15 adult male rabbits divided into three equal groups. In group 1 (no penile ESWT), rabbits had three sessions of ESWT with 2000 shocks each (15 kV), but a rubber mat was placed between the shock head and the rabbit to protect the penis; the rabbits were killed 7 days after the last session of ESWT. In group 2, the rabbits had three sessions of ESWT using the same parameters and were killed immediately after the last session to analyse the penis. In group 3, the rabbits had three sessions of ESWT as before but were killed 7 days after the last session, and the penile tissue was analysed macroscopically and histologically. The results showed clearly that the model was efficient, creating a situation similar to that of applying ESWT to the human penis. All of the rabbits in groups 2 and 3 had haematomas and diffuse petechiae after ESWT, and only four had urethral and penile bleeding. Almost all macroscopic changes disappeared after 48 h, and after 7 days only one rabbit in group 3 had a haematoma on the dorsal penile surface. The histology (assessed using haematoxylin and eosin staining) of the cavernous body of the penis showed unchanged histology in group 1; in group 2, a dilated and congested vascular space in the cavernous body, with extensive interstitial bleeding in the dermis; and in group 3, an increase in interstitial fibrous tissue in the cavernous septum, with deposition of collagen fibres and thickening of the tunica albuginea. The present model was efficient in producing tissue injury in the normal penis when treated with ESWT, suggesting that this promising model should be considered for use in future studies of Peyronie's disease.

Relevance:

100.00%

Publisher:

Abstract:

We assessed a new experimental model of isolated right ventricular (RV) failure, achieved by means of intramyocardial injection of ethanol. RV dysfunction was induced in 13 mongrel dogs via multiple injections of 96% ethanol (total dose 1 mL/kg) over the inlet and trabecular RV free walls. Hemodynamic and metabolic parameters were evaluated at baseline, after ethanol injection, and on the 14th postoperative day (POD). Echocardiographic parameters were evaluated at baseline, on the sixth POD, and on the 13th POD. The animals were then euthanized for histopathological analysis of the hearts. There was a 15.4% mortality rate. We noticed a decrease in pulmonary blood flow immediately after RV failure (P = 0.0018), as well as during reoperation on the 14th POD (P = 0.002). The induced RV dysfunction caused an increase in venous lactate levels immediately after ethanol injection and on the 14th POD (P < 0.0003). The echocardiogram revealed a decrease in the RV ejection fraction on the sixth and 13th PODs (P = 0.0001). There was an increased RV end-diastolic volume on the sixth (P = 0.0001) and 13th PODs (P = 0.0084). The right ventricle showed a 74% ± 0.06% transmural infarction area, with necrotic lesions aged 14 days. Intramyocardial ethanol injection has allowed the creation of a reproducible and inexpensive model of RV failure. The hemodynamic, metabolic, and echocardiographic parameters assessed at different protocol times are compatible with severe RV failure. This model may be useful in understanding the pathophysiology of isolated right-sided heart failure, as well as in the assessment of ventricular assist devices.