920 results for New Keynesian model, Bayesian methods, Monetary policy, Great Inflation


Relevance: 100.00%

Publisher:

Abstract:

Central banks in the developed world are being misled into fighting the perceived dangers of a ‘deflationary spiral’ because they are looking at only one indicator: consumer prices. This Policy Brief finds that while consumer prices are flat, broader price indices do not show any sign of impending deflation: the GDP deflator is increasing in the US, Japan and the euro area by about 1.2-1.5%. Nor is the real economy sending any deflationary signals: unemployment is at record lows in the US and Japan, and is declining in the euro area, while GDP growth is at, or above, potential. Thus, the overall macroeconomic situation does not give any indication of an imminent deflationary spiral. In today’s high-debt environment, the authors argue that central banks should be looking at the GDP deflator and the growth of nominal GDP, instead of CPI inflation. Nominal GDP growth, as forecast by the major official institutions, remains robust and is in excess of nominal interest rates. They conclude that if the ECB were to set the interest rate according to the standard rules of thumb for monetary policy, which take into account both the real economy and the developments of broader price indicators, it would start normalising its policy now, instead of pondering additional measures to fight a deflation that does not exist. In short, economic conditions are slowly normalising; so should monetary policy.
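The "standard rules of thumb" referred to here are Taylor-type rules. A minimal sketch, assuming a conventional textbook parameterization (not the authors' calibration) and using GDP-deflator inflation plus an output gap, illustrates the kind of calculation involved; the example numbers are purely hypothetical:

```python
def taylor_rule_rate(inflation, output_gap, neutral_real_rate=0.5,
                     inflation_target=2.0, phi_pi=1.5, phi_y=0.5):
    """Taylor-type policy rate (percent). `inflation` is measured with a broad
    price index such as the GDP deflator; `output_gap` is in percent of potential
    GDP. Coefficients are the conventional textbook values, not the authors'."""
    return (neutral_real_rate + inflation
            + phi_pi * (inflation - inflation_target)
            + phi_y * output_gap)

# Illustrative numbers: deflator inflation of ~1.3% and output roughly at potential
# imply a positive policy rate rather than further easing.
print(taylor_rule_rate(inflation=1.3, output_gap=0.0))  # 0.75 (percent)
```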

Relevance: 100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 100.00%

Publisher:

Abstract:

Two probabilistic interpretations of the n-tuple recognition method are put forward in order to allow this technique to be analysed with the same Bayesian methods used in connection with other neural network models. Elementary demonstrations are then given of the use of maximum likelihood and maximum entropy methods for tuning the model parameters and assisting their interpretation. One of the models can be used to illustrate the significance of overlapping n-tuple samples with respect to correlations in the patterns.
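As a rough illustration of a probabilistic reading of the n-tuple method, the sketch below scores classes by Laplace-smoothed per-tuple likelihoods (a maximum-likelihood estimate with a flat prior). The tuple sampling, smoothing choice and toy data are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_tuples(n_features, n_tuples=20, n=4):
    """Randomly sample n-tuples of bit positions (overlap between tuples allowed)."""
    return [rng.choice(n_features, size=n, replace=False) for _ in range(n_tuples)]

def tuple_address(x, idx):
    """Interpret the n sampled bits as an integer address."""
    return int(np.dot(x[idx], 1 << np.arange(len(idx))))

def train(X, y, tuples, n=4, alpha=1.0):
    """Laplace-smoothed per-class likelihoods of each tuple address."""
    counts = {c: np.full((len(tuples), 2 ** n), alpha) for c in np.unique(y)}
    for x, label in zip(X, y):
        for t, idx in enumerate(tuples):
            counts[label][t, tuple_address(x, idx)] += 1
    return {c: m / m.sum(axis=1, keepdims=True) for c, m in counts.items()}

def predict(x, probs, tuples):
    """Score each class by the sum of log tuple-likelihoods."""
    scores = {c: sum(np.log(p[t, tuple_address(x, idx)])
                     for t, idx in enumerate(tuples))
              for c, p in probs.items()}
    return max(scores, key=scores.get)

# Toy binary patterns: class 0 is mostly zeros, class 1 is mostly ones.
X = np.vstack([rng.random((50, 64)) < 0.2, rng.random((50, 64)) < 0.8]).astype(int)
y = np.array([0] * 50 + [1] * 50)
tuples = make_tuples(64)
probs = train(X, y, tuples)
print(predict((rng.random(64) < 0.75).astype(int), probs, tuples))  # likely 1
```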

Relevance: 100.00%

Publisher:

Abstract:

Following adaptation to an oriented (1-d) signal in central vision, the orientation of subsequently viewed test signals may appear repelled away from or attracted towards the adapting orientation. Small angular differences between the adaptor and test yield 'repulsive' shifts, while large angular differences yield 'attractive' shifts. In peripheral vision, however, both small and large angular differences yield repulsive shifts. To account for these tilt after-effects (TAEs), a cascaded model of orientation estimation that is optimized using hierarchical Bayesian methods is proposed. The model accounts for orientation bias through adaptation-induced losses in information that arise because of signal uncertainties and neural constraints placed upon the propagation of visual information. Repulsive (direct) TAEs arise at early stages of visual processing from adaptation of orientation-selective units with peak sensitivity at the orientation of the adaptor (θ). Attractive (indirect) TAEs result from adaptation of second-stage units with peak sensitivity at θ and θ + 90°, which arise from an efficient stage of linear compression that pools across the responses of the first-stage orientation-selective units. A spatial orientation vector is estimated from the transformed oriented unit responses. The change from attractive to repulsive TAEs in peripheral vision can be explained by the differing harmonic biases resulting from constraints on signal power (in central vision) versus signal uncertainties in orientation (in peripheral vision). The proposed model is consistent with recent work by computational neuroscientists in supposing that visual bias reflects the adjustment of a rational system in the light of uncertain signals and system constraints.
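As a loose illustration of the first-stage mechanism only (not the paper's cascaded hierarchical-Bayes model), a population-vector sketch in which adaptation attenuates the gain of units tuned near the adaptor reproduces the repulsive shift described above; all tuning widths and gain values here are assumptions:

```python
import numpy as np

prefs = np.linspace(-90, 90, 37)   # preferred orientations of first-stage units (deg)
kappa = 20.0                        # tuning width (deg)

def responses(stim, adaptor=None, max_suppression=0.5, adapt_width=20.0):
    """Orientation-tuned responses; adaptation reduces the gain of units tuned
    near the adaptor (illustrative gain model)."""
    r = np.exp(-0.5 * ((prefs - stim) / kappa) ** 2)
    if adaptor is not None:
        gain = 1 - max_suppression * np.exp(-0.5 * ((prefs - adaptor) / adapt_width) ** 2)
        r = r * gain
    return r

def decode(r):
    """Population-vector estimate on the 180-deg orientation circle (double angles)."""
    angles = np.deg2rad(2 * prefs)
    est = np.arctan2((r * np.sin(angles)).sum(), (r * np.cos(angles)).sum())
    return np.rad2deg(est) / 2

test = 10.0                                   # test orientation, 10 deg from a 0-deg adaptor
print(decode(responses(test)))                # ~10 deg without adaptation
print(decode(responses(test, adaptor=0.0)))   # shifted away from 0 deg: repulsive TAE
```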

Relevance: 100.00%

Publisher:

Abstract:

Conventional feed-forward neural networks have used the sum-of-squares cost function for training. A new cost function is presented here with a description length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic with a rough interpretation as the number of data points fit by the model. Rather than seeking optimal descriptions, the cost function forms minimum descriptions in a naive way for computational convenience; it is therefore called the Naive Description Length cost function. Finding minimum description models is shown to be closely related to the identification of clusters in the data. As a consequence, the minimum of this cost function approximates the most probable mode of the data, whereas the sum-of-squares cost function approximates the mean. The new cost function is shown to provide information about the structure of the data by inspecting the dependence of the error on the amount of regularisation. This structure provides a method of selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems, such as a simple inverse kinematics problem, as well as on a number of classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time series problems. Description length principles are used in a similar fashion to derive a regulariser to control network complexity.
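To see why mode-seeking matters for multi-valued mappings such as inverse kinematics, a small sketch on toy data (ordinary least squares only; the NDL cost itself is not reproduced here) shows the sum-of-squares fit landing between the two valid solution branches, a value that is never a correct answer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multi-valued data: for each x there are two valid targets, y = +sqrt(x) and y = -sqrt(x).
x = rng.uniform(0.5, 2.0, 400)
branch = rng.integers(0, 2, 400) * 2 - 1                 # +1 or -1
y = branch * np.sqrt(x) + 0.05 * rng.normal(size=400)

# A least-squares (sum-of-squares) linear fit approximates the conditional mean,
# which here is ~0 and lies between the two branches.
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("least-squares prediction at x=1:", coef @ [1.0, 1.0])   # near 0

# Either mode (about +1 or -1) would be a usable prediction; the mean is not.
```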

Relevance: 100.00%

Publisher:

Abstract:

This book examines the international development policies of five East Central European new EU member states: the Czech Republic, Hungary, Poland, Slovakia and Slovenia. These countries turned from aid recipients into donors after the turn of the millennium, in the run-up to EU accession in 2004. The book explains the evolution of foreign aid policies in the region since EU accession and their current state, and the reasons why these deviate from many of the internationally agreed best practices in development cooperation. It argues that after the turn of the millennium a 'Global Consensus' emerged on how to make foreign aid more effective for development. A comparison between the elements of the Global Consensus and the performance of the five countries reveals that while they have generally implemented few of these recommendations, there are also emerging differences between the countries, with the Czech Republic and Slovenia clearly aspiring to become globally responsible donors. Building on the literatures on foreign policy analysis, international socialization and interest group influence, the book develops a model of foreign aid policy making in order to explain the general reluctance of the five countries to implement international best practices, as well as the differences in their relative performance.

Relevance: 100.00%

Publisher:

Abstract:

Experimental methods of policy evaluation are well-established in social policy and development economics but are rare in industrial and innovation policy. In this paper, we consider the arguments for applying experimental methods to industrial policy measures, and propose an experimental policy evaluation approach (which we call RCT+). This approach combines the randomised assignment of firms to treatment and control groups with a longitudinal data collection strategy incorporating quantitative and qualitative data (so-called mixed methods). The RCT+ approach is designed to provide a causative rather than purely summative evaluation, i.e. to assess both ‘whether’ and ‘how’ programme outcomes are achieved. In this paper, we assess the RCT+ approach through an evaluation of Creative Credits – a UK business-to-business innovation voucher initiative intended to promote new innovation partnerships between SMEs and creative service providers. The results suggest the potential value of the RCT+ approach to industrial policy evaluation, and the benefits of mixed methods and longitudinal data collection.
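A minimal sketch of the quantitative backbone of such a design: random assignment of firms to treatment and control, followed by a simple difference-in-means treatment-effect estimate on a later survey wave. The firm pool, outcome variable and effect size are hypothetical, not from the Creative Credits evaluation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical applicant pool of 200 firms; half are randomly assigned a voucher.
n_firms = 200
treated = np.zeros(n_firms, dtype=bool)
treated[rng.choice(n_firms, size=n_firms // 2, replace=False)] = True

# Simulated binary outcome from a later survey wave (e.g. "formed a new innovation
# partnership"), with an assumed true effect of +0.15 on the probability.
outcome = rng.random(n_firms) < np.where(treated, 0.45, 0.30)

# Difference in means and a rough 95% interval for the estimated effect.
p_t, p_c = outcome[treated].mean(), outcome[~treated].mean()
se = np.sqrt(p_t * (1 - p_t) / treated.sum() + p_c * (1 - p_c) / (~treated).sum())
print(f"estimated effect: {p_t - p_c:.3f} +/- {1.96 * se:.3f}")
```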

Relevance: 100.00%

Publisher:

Abstract:

As one of the most popular deep learning models, the convolutional neural network (CNN) has achieved huge success in image information extraction. Traditionally, a CNN is trained by supervised learning with labeled data and used as a classifier by adding a classification layer at the end. Its capability of extracting image features is largely limited by the difficulty of setting up a large training dataset. In this paper, we propose a new unsupervised learning CNN model, which uses a so-called convolutional sparse auto-encoder (CSAE) algorithm to pre-train the CNN. Instead of using labeled natural images for CNN training, the CSAE algorithm can be used to train the CNN with unlabeled artificial images, which enables easy expansion of training data and unsupervised learning. The CSAE algorithm is especially designed for extracting complex features from specific objects such as Chinese characters. After the features of artificial images are extracted by the CSAE algorithm, the learned parameters are used to initialize the first convolutional layer of the CNN, and the CNN model is then fine-tuned on scene image patches with a linear classifier. The new CNN model is applied to Chinese scene text detection and is evaluated on a multilingual image dataset that labels Chinese, English and numeral texts separately. More than a 10% detection precision gain is observed over two CNN models.
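A minimal sketch of the pre-training idea, assuming a single-channel convolutional auto-encoder with an L1 sparsity penalty and a weight copy into the first convolutional layer; the layer sizes, penalty and training loop are illustrative assumptions, not the paper's exact CSAE formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvSparseAE(nn.Module):
    """Illustrative sparse convolutional auto-encoder for unsupervised pre-training."""
    def __init__(self, n_filters=32, kernel=5):
        super().__init__()
        self.encoder = nn.Conv2d(1, n_filters, kernel, padding=kernel // 2)
        self.decoder = nn.ConvTranspose2d(n_filters, 1, kernel, padding=kernel // 2)

    def forward(self, x):
        code = torch.relu(self.encoder(x))
        return self.decoder(code), code

ae = ConvSparseAE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

# Unlabeled "artificial" patches (random here; rendered character images in practice).
patches = torch.rand(256, 1, 32, 32)

for step in range(100):
    recon, code = ae(patches)
    loss = F.mse_loss(recon, patches) + 1e-3 * code.abs().mean()  # reconstruction + sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

# Copy the learned encoder weights into the first conv layer of a CNN classifier,
# which is then fine-tuned on labeled scene-text patches.
cnn_first_conv = nn.Conv2d(1, 32, 5, padding=2)
with torch.no_grad():
    cnn_first_conv.weight.copy_(ae.encoder.weight)
    cnn_first_conv.bias.copy_(ae.encoder.bias)
```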

Relevance: 100.00%

Publisher:

Abstract:

In a paper on the effects of the global financial crisis in Central and Eastern Europe (CEE), the author reacts to a paper by Åslund (2011), published in the same issue of Eurasian Geography and Economics, on the influence of exchange rate policies on the region’s recovery. The author argues that post-crisis corrections in current account deficits in CEE countries do not in themselves signal a return to steady economic growth. Disagreeing with Åslund over the role of loose monetary policy in fostering the region’s economic problems, he outlines a number of competitiveness problems that remain to be addressed in the 10 new EU member states of CEE, along with improvements in framework conditions needed to support future macroeconomic growth.

Relevance: 100.00%

Publisher:

Abstract:

As more and more transition countries join the eurozone, it is reasonable to investigate which monetary policy might be most successful for countries prior to the introduction of the euro. One possible alternative is inflation targeting, which has found application in numerous economies over the last two decades, including the Visegrád Countries. In this paper I introduce some important aspects of the monetary policy of the Visegrád Countries together with an empirical examination of it, provide an overview of previous empirical findings, and draw some comparisons among the new EU member states, with recommendations for pre-accession countries such as Croatia.

Relevance: 100.00%

Publisher:

Abstract:

Stereotype threat (Steele & Aronson, 1995) refers to the risk of confirming a negative stereotype about one’s group in a particular performance domain. The theory assumes that performance in the stereotyped domain is most negatively affected when individuals are more highly identified with the domain in question. As federal law has increased the importance of standardized testing at the elementary level, it can be reasonably hypothesized that the standardized test performance of African American children will be depressed when they are aware of negative societal stereotypes about the academic competence of African Americans. This sequential mixed-methods study investigated whether the standardized testing experiences of African American children in an urban elementary school are related to their level of stereotype awareness. The quantitative phase utilized data from 198 African American children at an urban elementary school. Both ex-post facto and experimental designs were employed. Experimental conditions were diagnostic and non-diagnostic testing experiences. The qualitative phase utilized data from a series of six focus group interviews conducted with a purposefully selected group of 4 African American children. The interview data were supplemented with data from 30 hours of classroom observations. Quantitative findings indicated that the stereotype threat condition evoked by diagnostic testing depresses the reading test performance of stereotype-aware African American children (F[1, 194] = 2.21, p < .01). This was particularly true of students who are most highly domain-identified with reading (F[1, 91] = 19.18, p < .01). Moreover, findings indicated that only stereotype-aware African American children who were highly domain-identified were more likely to experience anxiety in the diagnostic condition (F[1, 91] = 5.97, p < .025). Qualitative findings revealed 4 themes regarding how African American children perceive and experience the factors related to stereotype threat: (1) a narrow perception of education as strictly test preparation, (2) feelings of stress and anxiety related to the state test, (3) concern with what “others” think (racial salience), and (4) stereotypes. A new conceptual model for stereotype threat is presented, and future directions including implications for practice and policy are discussed.

Relevance: 100.00%

Publisher:

Abstract:

Abstract

The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is key to precision radiation therapy. In recent years, considerable clinical and research effort has been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI’s clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that capture richer functional heterogeneity information.

This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for radiotherapy assessment. The study is therefore naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors and proposes several improvements to it; the second part explores the potential value of image heterogeneity analysis and multiple-PK-model combination for therapeutic response assessment, and develops several novel DCE-MRI data analysis methods.

I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm builds on the recently developed compressed sensing (CS) theory: by utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and the simulated accelerated k-space acquisition was generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated, one from the undersampled data and one from the fully sampled data. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of the PK maps generated from the undersampled data in reference to the PK maps generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
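As a rough illustration of the retrospective undersampling setup only (a random Cartesian line mask and a zero-filled reconstruction; the TGV-regularized iterative reconstruction is not reproduced here), assuming a fully sampled 2-D slice as input:

```python
import numpy as np

rng = np.random.default_rng(0)

def undersample_zero_filled(image, acceleration=4, center_fraction=0.08):
    """Retrospectively undersample the k-space of a fully sampled 2-D image with a
    random Cartesian line mask (fully sampled low-frequency center) and return the
    zero-filled reconstruction. Illustrative only; the mask design and the iterative
    TGV reconstruction in the study are more sophisticated."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ny = image.shape[0]
    mask = np.zeros(ny, dtype=bool)
    n_center = max(1, int(center_fraction * ny))
    mask[ny // 2 - n_center // 2: ny // 2 + n_center // 2 + 1] = True   # keep center lines
    mask[rng.choice(ny, size=ny // acceleration, replace=False)] = True  # random extra lines
    kspace_under = kspace * mask[:, None]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_under)))

phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0
recon = undersample_zero_filled(phantom)
print("RMSE of zero-filled recon:", np.sqrt(np.mean((recon - phantom) ** 2)))
```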

Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove potential noise effects in the data, and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current calculation methods at clinically relevant noise levels; at high temporal resolutions, the calculation efficiency of the new method was superior to current methods by roughly two orders of magnitude (10^2). In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that the new method can be used for accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
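For context, the standard Tofts model, C_t(t) = Ktrans * integral from 0 to t of C_p(tau) * exp(-kep * (t - tau)) dtau, can be rearranged into a linear system in the running integrals of C_p and C_t and solved by least squares. The sketch below shows that well-known linearized fit (Murase-style), not the thesis's derivative-based, KZ-filtered method; the arterial input function and parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def fit_tofts_linear(t, c_plasma, c_tissue):
    """Linear least-squares fit of the standard Tofts model via
    C_t(t) = Ktrans * int(C_p) - kep * int(C_t). Returns (Ktrans, kep);
    units follow the inputs (e.g. 1/min if t is in minutes)."""
    int_cp = cumulative_trapezoid(c_plasma, t, initial=0.0)
    int_ct = cumulative_trapezoid(c_tissue, t, initial=0.0)
    A = np.column_stack([int_cp, -int_ct])
    (ktrans, kep), *_ = np.linalg.lstsq(A, c_tissue, rcond=None)
    return ktrans, kep

# Synthetic check with assumed values: Ktrans = 0.25 /min, kep = 0.625 /min,
# and a simple bi-exponential plasma curve (parameters illustrative).
t = np.linspace(0, 5, 300)                            # minutes
cp = 5.0 * (np.exp(-0.6 * t) - np.exp(-3.0 * t))
ktrans_true, kep_true = 0.25, 0.625
ct = np.zeros_like(t)
dt = t[1] - t[0]
for i in range(1, len(t)):                            # forward-Euler solution of dCt/dt
    ct[i] = ct[i - 1] + dt * (ktrans_true * cp[i - 1] - kep_true * ct[i - 1])

print(fit_tofts_linear(t, cp, ct))                    # ~ (0.25, 0.625)
```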

II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodology developments along two approaches. The first is to develop model-free analysis methods for DCE-MRI functional heterogeneity evaluation. This approach is motivated by the rationale that radiotherapy-induced functional change can be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple treatment fractions, with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when the Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It is intended to address the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM had an overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the small-animal experiment mentioned before, the selected parameters from dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When dynamic FSD parameters were used, the treatment/control group classification after the first treatment fraction was improved relative to that obtained with conventional PK statistics. These results suggest the promise of this novel method for capturing early therapeutic response.
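As a rough illustration of the classic fractal-dimension idea underlying the pilot analysis (plain box counting on a binarized parameter map; the generalized Rényi dimensions and the novel GLLPM/FSD measures are not reproduced here), with a purely hypothetical "parameter map":

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary 2-D mask,
    e.g. a thresholded PK rate-constant map. Illustrative only."""
    counts = []
    for s in box_sizes:
        n = 0
        for i in range(0, mask.shape[0], s):
            for j in range(0, mask.shape[1], s):
                if mask[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # Slope of log(count) vs log(1 / box size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(3)
pk_map = rng.random((128, 128))                       # stand-in for a PK rate-constant map
print(box_counting_dimension(pk_map > np.quantile(pk_map, 0.75)))
```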

The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version is widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and from the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean PK parameter values. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, novel biomarkers were designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker was able to reflect significant treatment/control differences at both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.

In summary, this study addressed two problems of DCE-MRI application in radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.

Relevance: 100.00%

Publisher:

Abstract:

An abstract of a thesis devoted to using helix-coil models to study unfolded states.

Research on polypeptide unfolded states has received much more attention in the last decade or so than it has in the past. Unfolded states are thought to be implicated in various misfolding diseases and likely play crucial roles in protein folding equilibria and folding rates. Structural characterization of unfolded states has proven to be much more difficult than the now well-established practice of determining the structures of folded proteins. This is largely because many core assumptions underlying folded structure determination methods are invalid for unfolded states, which has led to a dearth of knowledge concerning the nature of unfolded state conformational distributions. While many aspects of unfolded state structure are not well known, there does exist a significant body of work, stretching back half a century, focused on the structural characterization of marginally stable polypeptide systems. This body of work represents an extensive collection of experimental data and biophysical models associated with describing helix-coil equilibria in polypeptide systems. Much of the work on unfolded states in the last decade has not been devoted specifically to improving our understanding of helix-coil equilibria, which is arguably the most well characterized of the various conformational equilibria that likely contribute to unfolded state conformational distributions. This thesis seeks to provide a deeper investigation of helix-coil equilibria using modern statistical data analysis and biophysical modeling techniques. The studies contained within seek to provide deeper insights and new perspectives on what we presumably know very well about protein unfolded states.

Chapter 1 gives an overview of recent and historical work on studying protein unfolded states. The study of helix-coil equilibria is placed in the context of the general field of unfolded state research, and the basics of helix-coil models are introduced.

Chapter 2 introduces the newest incarnation of a sophisticated helix-coil model. State-of-the-art statistical techniques are employed to estimate the energies of various physical interactions that influence helix-coil equilibria. A new Bayesian model selection approach is utilized to test many long-standing hypotheses concerning the physical nature of the helix-coil transition. Some assumptions made in previous models are shown to be invalid, and the new model exhibits greatly improved predictive performance relative to its predecessor.

Chapter 3 introduces a new statistical model that can be used to interpret amide exchange measurements. As amide exchange can serve as a probe for residue-specific properties of helix-coil ensembles, the new model provides a novel and robust method to use these measurements to characterize helix-coil ensembles experimentally and to test the position-specific predictions of helix-coil models. The statistical model is shown to perform considerably better than the most commonly used method for interpreting amide exchange data. The estimates obtained from amide exchange measurements on an example helical peptide also show a remarkable consistency with the predictions of the helix-coil model.

Chapter 4 involves a study of helix-coil ensembles through the enumeration of helix-coil configurations. Aside from providing new insights into helix-coil ensembles, this chapter also introduces a new method by which helix-coil models can be extended to calculate new types of observables. Future work on this approach could potentially allow helix-coil models to move into domains that were previously inaccessible and reserved for the other types of unfolded state models introduced in Chapter 1.
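For orientation, the simplest member of the helix-coil model family is the classic Zimm-Bragg transfer-matrix model. A minimal sketch of the textbook formulation follows (parameter values are purely illustrative; this is not the thesis's more sophisticated model or its Bayesian parameter estimates):

```python
import numpy as np

def zimm_bragg_helicity(n_residues, s, sigma, ds=1e-6):
    """Fractional helicity from the Zimm-Bragg transfer-matrix partition function.
    s: helix propagation weight, sigma: nucleation parameter (textbook formulation;
    the parameter values used below are illustrative)."""
    def log_Z(s_val):
        M = np.array([[s_val, 1.0],
                      [sigma * s_val, 1.0]])          # rows: previous residue state (h, c)
        start = np.array([0.0, 1.0])                  # chain effectively starts in the coil state
        Z = start @ np.linalg.matrix_power(M, n_residues) @ np.ones(2)
        return np.log(Z)
    # Mean helix content: theta = (1/N) * d ln Z / d ln s, by central difference.
    dlnZ = (log_Z(s + ds) - log_Z(s - ds)) / (np.log(s + ds) - np.log(s - ds))
    return dlnZ / n_residues

# Helicity rises sigmoidally with the propagation weight s for a 30-residue chain.
for s in (0.8, 1.0, 1.2, 1.5):
    print(s, round(zimm_bragg_helicity(30, s, sigma=1e-3), 3))
```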

Relevance: 100.00%

Publisher:

Abstract:

Bayesian methods offer a flexible and convenient probabilistic learning framework to extract interpretable knowledge from complex and structured data. Such methods can characterize dependencies among multiple levels of hidden variables and share statistical strength across heterogeneous sources. In the first part of this dissertation, we develop two dependent variational inference methods for full posterior approximation in non-conjugate Bayesian models through hierarchical mixture- and copula-based variational proposals, respectively. The proposed methods move beyond the widely used factorized approximation to the posterior and provide generic applicability to a broad class of probabilistic models with minimal model-specific derivations. In the second part of this dissertation, we design probabilistic graphical models to accommodate multimodal data, describe dynamical behaviors and account for task heterogeneity. In particular, the sparse latent factor model is able to reveal common low-dimensional structures from high-dimensional data. We demonstrate the effectiveness of the proposed statistical learning methods on both synthetic and real-world data.
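As a point of reference for what the dissertation's hierarchical mixture- and copula-based proposals move beyond, the sketch below shows the widely used factorized (mean-field) Gaussian approximation, with the ELBO estimated by Monte Carlo on a toy non-conjugate model (Bayesian logistic regression). The data, variational parameters and model are illustrative assumptions, not taken from the dissertation:

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)

# Toy non-conjugate model: Bayesian logistic regression with a N(0, 1) prior on w.
X = rng.normal(size=(100, 2))
y = (rng.random(100) < expit(X @ np.array([1.5, -1.0]))).astype(float)

def elbo(mu, log_sigma, n_samples=500):
    """Monte Carlo ELBO for a factorized Gaussian q(w) = N(mu, diag(sigma^2)),
    using the reparameterization w = mu + sigma * eps."""
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=(n_samples, mu.size))
    w = mu + sigma * eps                                   # (n_samples, 2)
    logits = w @ X.T                                       # (n_samples, 100)
    log_lik = (y * np.log(expit(logits) + 1e-12)
               + (1 - y) * np.log(1 - expit(logits) + 1e-12)).sum(axis=1)
    log_prior = -0.5 * (w ** 2).sum(axis=1) - w.shape[1] / 2 * np.log(2 * np.pi)
    log_q = (-0.5 * ((w - mu) / sigma) ** 2 - np.log(sigma)
             - 0.5 * np.log(2 * np.pi)).sum(axis=1)
    return (log_lik + log_prior - log_q).mean()

# A crude comparison of two candidate factorized approximations.
print(elbo(np.zeros(2), np.log(np.ones(2))))
print(elbo(np.array([1.5, -1.0]), np.log(0.3 * np.ones(2))))   # closer to the posterior
```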