920 results for New Keynesian model, Bayesian methods, Monetary policy, Great Inflation


Relevance:

100.00%

Publisher:

Abstract:

We investigate the theoretical conditions for effectiveness of government consumption expenditure expansions using US, Euro area and UK data. Fiscal expansions taking place when monetary policy is accommodative lead to large output multipliers in normal times. The 2009-2010 packages need not produce significant output multipliers, may have moderate debt effects, and only generate temporary inflation. Expenditure expansions accompanied by deficit/debt consolidation schemes may lead to short run output gains, but their success depends on how monetary policy and expectations behave. Trade openness and the cyclicality of the labor wedge explain cross-country differences in the magnitude of the multipliers.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose a simple and general model for computing the Ramsey optimal inflation tax, which includes several models from the previous literature as special cases. We show that it cannot be claimed that the Friedman rule is always optimal (or always non-optimal) on theoretical grounds. Whether the Friedman rule is optimal depends on conditions related to the shape of various relevant functions. One contribution of this paper is to relate these conditions to measurable variables such as the interest rate or the consumption elasticity of money demand. We find that it tends to be optimal to tax money when there are economies of scale in the demand for money (the scale elasticity is smaller than one) and/or when money is required for the payment of consumption or wage taxes. We find that it tends to be optimal to tax money more heavily when the interest elasticity of money demand is small. We present empirical evidence on the parameters that determine the optimal inflation tax. Calibrating the model to a variety of empirical studies yields an optimal nominal interest rate of less than 1% per year, although that finding is sensitive to the calibration.
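To make the two elasticities the abstract refers to concrete, a minimal log-log money demand specification (an illustration only, not the paper's actual model) can be written as:

```latex
% Illustrative money demand specification; m = real balances, c = consumption,
% R = nominal interest rate, \eta = scale (consumption) elasticity,
% \epsilon = interest elasticity of money demand.
\ln m_t = \ln A + \eta \,\ln c_t - \epsilon \,\ln R_t
```

In these terms, the abstract's findings read: taxing money (keeping R above the Friedman-rule level of zero) tends to be optimal when the scale elasticity is below one, and the optimal tax tends to be heavier when the interest elasticity is small.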

Relevance:

100.00%

Publisher:

Abstract:

The interpretation of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most importantly, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), with a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros, based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor represented the structure of the WISC-IV better than the 4-factor structure and the higher order models did. Because the direct hierarchical CHC model was more adequate, we conclude that the general factor should be considered a breadth factor rather than a superordinate factor. Because we could estimate the influence of each latent variable on the 15 subtest scores, BSEM improved both the understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores.
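As a rough sketch of the "approximate zero" idea described above, the following PyMC model gives main loadings diffuse priors while cross-loadings get zero-mean, small-variance priors instead of being fixed to zero. The two-factor, six-item structure and the data are invented placeholders, not the WISC-IV analysis itself, and identification constraints are omitted for brevity.

```python
# Sketch of BSEM-style "approximate zero" cross-loadings (illustrative only).
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_obs, n_items, n_factors = 100, 6, 2
y = rng.normal(size=(n_obs, n_items))      # placeholder standardized subtest scores
main = np.array([0, 0, 0, 1, 1, 1])        # hypothesized main factor of each item

with pm.Model() as bsem:
    eta = pm.Normal("eta", 0.0, 1.0, shape=(n_obs, n_factors))   # latent factor scores
    lam_main = pm.Normal("lam_main", 0.0, 1.0, shape=n_items)    # target loadings: diffuse prior
    lam_cross = pm.Normal("lam_cross", 0.0, 0.1, shape=n_items)  # cross-loadings: "approximate zero"
    sigma = pm.HalfNormal("sigma", 1.0, shape=n_items)           # residual standard deviations

    # Each item loads on its own factor and, weakly (prior-shrunk), on the other one.
    mu = eta[:, main] * lam_main + eta[:, 1 - main] * lam_cross
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)

    idata = pm.sample(500, tune=500, chains=2, progressbar=False)
```

The only substantive difference from a classical confirmatory specification is the prior scale on lam_cross: a small value (here 0.1) rather than either a diffuse prior or a hard zero.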

Relevance:

100.00%

Publisher:

Abstract:

This paper analyzes the transmission mechanisms of monetary policy in a general equilibrium model of securities markets and banking with asymmetric information. Banks' optimal asset/liability policy is such that in equilibrium capital adequacy constraints are always binding. Asymmetric information about banks' net worth adds a cost to outside equity capital, which limits the extent to which banks can relax their capital constraint. In this context monetary policy does not affect bank lending through changes in bank liquidity. Rather, it has the effect of changing the aggregate composition of financing by firms. The model also produces multiple equilibria, one of which displays all the features of a "credit crunch". Thus, monetary policy can also have large effects when it induces a shift from one equilibrium to the other.

Relevance:

100.00%

Publisher:

Abstract:

Was the German slump inevitable? This paper argues that - despite the speed and depth of Germany's deflation in the early 1930s - fear of inflation is evident in the bond, foreign exchange, and commodity markets at certain critical junctures of the Great Depression. Therefore, policy options were more limited than many subsequent critics of Brüning's policies have been prepared to admit. Using a rational expectations framework, we find strong evidence from the bond market to suggest fear of inflation. Futures prices also reveal that market participants were betting on price increases. These findings are discussed in the context of reparations and related to the need for a regime shift to overcome the crisis.

Relevance:

100.00%

Publisher:

Abstract:

The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods are governed by the same equations over a large range of frequencies, thus allowing processes to be studied in an analogous manner at scales ranging from a few meters below the surface down to several hundreds of kilometers in depth. Unfortunately, they suffer from a significant loss of resolution with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints on the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically plausible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
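As an illustration of the pixel-based MCMC inversion idea (not the plane-wave EM forward model itself), the following sketch runs a random-walk Metropolis sampler on a toy linear problem with a smoothness-regularization prior; the forward operator, grid size, and noise level are invented.

```python
# Toy pixel-based MCMC inversion with a smoothness prior (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_data = 30, 20
G = rng.normal(size=(n_data, n_pix)) / n_pix            # toy linear forward operator
m_true = np.where(np.arange(n_pix) < 15, 1.0, 2.0)      # "true" model with a sharp step
sigma_d = 0.05
d_obs = G @ m_true + rng.normal(0.0, sigma_d, n_data)   # noisy synthetic data

def log_post(m, lam=10.0):
    """Gaussian likelihood plus a smoothness (regularization) prior on the pixels."""
    misfit = d_obs - G @ m
    log_like = -0.5 * np.sum((misfit / sigma_d) ** 2)
    log_prior = -0.5 * lam * np.sum(np.diff(m) ** 2)    # penalizes rough models
    return log_like + log_prior

m, step = np.ones(n_pix), 0.05                          # starting model and proposal scale
lp, samples = log_post(m), []
for it in range(20_000):
    prop = m + rng.normal(0.0, step, n_pix)             # random-walk proposal on all pixels
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:            # Metropolis accept/reject
        m, lp = prop, lp_prop
    if it > 5_000 and it % 10 == 0:                     # keep thinned post-burn-in samples
        samples.append(m.copy())

post = np.array(samples)
print("posterior mean of pixel 0:", round(post[:, 0].mean(), 3))
```

The regularization weight lam plays the role the abstract assigns to the model-structure prior: it speeds convergence and smooths the posterior, at the cost of narrower uncertainty estimates.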

Relevance:

100.00%

Publisher:

Abstract:

Background: Molecular tools may help to uncover closely related and still diverging species from a wide variety of taxa and provide insight into the mechanisms, pace and geography of marine speciation. There is some controversy over the phylogeography and speciation modes of species groups with an Eastern Atlantic-Western Indian Ocean distribution, with previous studies suggesting that older events (Miocene) and/or more recent (Pleistocene) oceanographic processes could have influenced the phylogeny of marine taxa. The spiny lobster genus Palinurus allows testing among speciation hypotheses, since it has a particular distribution with two groups of three species each in the Northeastern Atlantic (P. elephas, P. mauritanicus and P. charlestoni) and the Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi, P. delagoae and P. barbarae). In the present study, we obtain a more complete understanding of the phylogenetic relationships among these species through a combined dataset with both nuclear and mitochondrial markers, by testing alternative hypotheses on both the mutation rate and tree topology under the recently developed approximate Bayesian computation (ABC) methods. Results: Our analyses support a North-to-South speciation pattern in Palinurus, with all the South African species forming a monophyletic clade nested within the Northern Hemisphere species. Coalescent-based ABC methods allowed us to reject the previously proposed hypothesis of a Middle Miocene speciation event related to the closure of the Tethyan Seaway. Instead, divergence times obtained for Palinurus species using the combined mtDNA-microsatellite dataset and standard mutation rates for mtDNA agree with known glaciation-related processes occurring during the last 2 million years. Conclusion: The Palinurus speciation pattern is a typical example of a series of rapid speciation events occurring within a group, with very short branches separating different species. Our results support the hypothesis that recent climate change-related oceanographic processes have influenced the phylogeny of marine taxa, with most Palinurus species originating during the last two million years. The present study highlights the value of new coalescent-based statistical methods such as ABC for testing different speciation hypotheses using molecular data.
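A minimal rejection-ABC sketch of the kind of coalescent-based hypothesis testing mentioned above, using a deliberately simplified toy simulator (mean pairwise divergence as the summary statistic, a single divergence-time parameter, and an assumed mutation rate); all numbers are invented and are not the Palinurus estimates.

```python
# Toy rejection-ABC: keep prior draws whose simulated summary statistic lands
# within eps of the observed value (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
mu = 1e-8                      # assumed per-site, per-year mutation rate (illustrative)
obs_divergence = 0.04          # "observed" mean pairwise divergence (illustrative)

n_sims, eps = 200_000, 0.001
tau_prior = rng.uniform(0.0, 20e6, n_sims)        # flat prior on divergence time: 0-20 My
sims = rng.normal(2.0 * mu * tau_prior, 0.002)    # toy simulator: 2*mu*tau plus noise
accepted = tau_prior[np.abs(sims - obs_divergence) < eps]

print(f"accepted {accepted.size} of {n_sims} draws; "
      f"posterior mean divergence time ~ {accepted.mean() / 1e6:.2f} My")
```

Competing speciation scenarios can be compared in the same way by simulating each under its own prior and counting how often each scenario produces summaries close to the observed data.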

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Coarctation of the aorta is a common congenital heart malformation. The mode of diagnosis has changed from clinical examination to almost exclusively echocardiography and MRI. We aimed to find a new echocardiographic index, based on simple and reliable morphologic measurements, to facilitate the diagnosis of aortic coarctation in the newborn. We repeated the same procedure in older children to validate this new index. Material and Methods: We reviewed the echocardiographic studies of 47 neonates with a diagnosis of coarctation who underwent cardiac surgery between January 1997 and February 2003 and compared them with a matched control group. We measured 12 different sites of the aorta, aortic arch and the great vessels on the echocardiographic recordings. In a second phase we reviewed 23 infants with the same measurements and compared them with a matched control group. Results: 47 neonates with coarctation were analysed: age 11.8 ± 10 days, weight 3.0 ± 0.6 kg, body surface area 0.20 ± 0.02 m2. The control group consisted of 16 newborns aged 15.8 ± 10 days, weight 3.2 ± 0.9 kg and body surface area 0.20 ± 0.04 m2. A significant difference was noted in many morphologic measurements between the two groups, the most significant being the distance between the left carotid artery and the left subclavian artery (coarctation vs control: 7.3 ± 3 mm vs 2.4 ± 0.8 mm, p < 0.0001). We then defined a new index, the carotid-subclavian arteries index (CSI), as the diameter of the distal transverse aortic arch divided by the distance from the left carotid artery to the left subclavian artery, which was also significantly different (coarctation vs control: 0.76 ± 0.86 vs 2.95 ± 1.24, p < 0.0001). With a cutoff value of 1.5, the sensitivity of this index for aortic coarctation was 98% and the specificity 92%. In an older group of infants with coarctation (16 patients) we applied the same principle and found, for a cut-off value of 1.5, a sensitivity of 95% and a specificity of 100%. Conclusions: The CSI allows newborns and infants to be evaluated for aortic coarctation with simple morphologic measurements that do not depend on left ventricular function or the presence of a patent ductus arteriosus. Further aggressive evaluation of patients with a CSI < 1.5 is indicated.
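A small worked sketch of the index: CSI is the diameter of the distal transverse aortic arch divided by the left carotid-to-left subclavian distance, and, given the reported group means (0.76 in coarctation vs 2.95 in controls), values below the 1.5 cutoff point toward coarctation. The measurements below are hypothetical, chosen only to reproduce values near those group means.

```python
# Carotid-subclavian arteries index (CSI) sketch with hypothetical measurements.
def csi(distal_arch_diameter_mm: float, lca_to_lsa_distance_mm: float) -> float:
    return distal_arch_diameter_mm / lca_to_lsa_distance_mm

CUTOFF = 1.5

for arch_mm, dist_mm in [(5.5, 7.3), (7.1, 2.4)]:   # hypothetical coarctation vs control case
    value = csi(arch_mm, dist_mm)
    flag = "suggestive of coarctation" if value < CUTOFF else "not suggestive"
    print(f"arch {arch_mm} mm / distance {dist_mm} mm -> CSI = {value:.2f} ({flag})")
```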

Relevance:

100.00%

Publisher:

Abstract:

We present the most comprehensive comparison to date of the predictive benefit of genetics in addition to currently used clinical variables, using genotype data for 33 single-nucleotide polymorphisms (SNPs) in 1,547 Caucasian men from the placebo arm of the REduction by DUtasteride of prostate Cancer Events (REDUCE®) trial. Moreover, we conducted a detailed comparison of three techniques for incorporating genetics into clinical risk prediction. The first method was a standard logistic regression model, which included separate terms for the clinical covariates and for each of the genetic markers. This approach ignores a substantial amount of external information concerning effect sizes for these Genome Wide Association Study (GWAS)-replicated SNPs. The second and third methods investigated two possible approaches to incorporating meta-analysed external SNP effect estimates - one via a weighted PCa 'risk' score based solely on the meta-analysis estimates, and the other incorporating both the current and prior data via informative priors in a Bayesian logistic regression model. All methods demonstrated a slight improvement in predictive performance upon incorporation of genetics. The two methods that incorporated external information showed the greatest increase in receiver-operating-characteristic AUC, from 0.61 to 0.64. The value of our methods comparison is likely to lie in the observed similarities, rather than differences, in performance between three approaches with very different resource requirements. The two methods that included external information performed best, but only marginally, despite substantial differences in complexity.
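A minimal sketch of the second approach described above: a weighted risk score built from externally meta-analysed SNP effects, later entered alongside the clinical covariates. The SNP names, effect sizes and genotypes are invented placeholders, not REDUCE trial values.

```python
# Weighted genetic risk score: risk-allele counts (0/1/2) weighted by external
# per-allele log odds ratios (illustrative placeholders only).
external_log_or = {
    "rs_hypothetical_1": 0.18,   # per-risk-allele log odds ratios that would come
    "rs_hypothetical_2": 0.11,   # from GWAS meta-analyses in the real application
    "rs_hypothetical_3": 0.07,
}

def risk_score(genotypes: dict) -> float:
    """Weighted sum of risk-allele counts, weights = external log odds ratios."""
    return sum(w * genotypes.get(snp, 0) for snp, w in external_log_or.items())

patient = {"rs_hypothetical_1": 2, "rs_hypothetical_2": 0, "rs_hypothetical_3": 1}
print(f"genetic risk score = {risk_score(patient):.2f}")

# The score would then enter a logistic model next to the clinical covariates:
#   logit P(PCa) = b0 + b_clinical' x_clinical + b_g * score
# In the fully Bayesian variant, the external estimates instead serve as
# informative priors on per-SNP coefficients in a Bayesian logistic regression.
```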

Relevance:

100.00%

Publisher:

Abstract:

Over the last few decades, technological development has greatly changed the operating environment of companies. Numerous theories and methods have become available with which company performance can be measured, developed and compared, and the role of different management methods and practices has accordingly become more prominent in corporate management systems. Company performance has also been studied in many different ways in recent years, with the aim of establishing comparability between global players. This study approaches performance from an entirely new angle and examines the results from the perspectives of benchlearning, competence, and knowledge management. The essential contribution has been to create a new perspective on earlier studies, which have concentrated on comparing companies' operational and business performance and how well they have been able to exploit best practices in their operations. The aim of the theoretical part of the study is to clarify the overall model of strategic management. The study began by analysing earlier research on company performance. This revealed that competence management and development have received rather little attention in research on the development of company performance. A theoretical framework was therefore created through which personnel competence was linked to the process of strategy implementation. The results and conclusions of this study are based on data on company performance collected in the Made in Finland study in 1993 and 2003, drawing on interviews with 23 companies. All results considered relevant to competence management and development were selected from the data. The performance of the studied companies clearly improved over the ten-year follow-up period: work flexibility and employee commitment increased, benchlearning was exploited, and production processes were developed so that the time-to-market of new products was shortened. Notably, however, investments in training did not increase; in some of the studied companies they even decreased. The results are highly contradictory with respect to the stated objectives: greater operational efficiency is expected in all areas, yet companies are not prepared to invest in competence, and management does not regard competence development as important for achieving the objectives. Management sees the availability of qualified personnel as the biggest obstacle to reaching the objectives. Although many of the companies use the balanced scorecard as a strategic management tool, not all of its dimensions, nor the importance of their balance, have been clearly understood.

Relevance:

100.00%

Publisher:

Abstract:

To succeed in competitive markets, a company must know the cost structure of its own production. To succeed, it must also offer customers individualized products at a competitive price. This creates challenges for production, but also for product costing: traditional product costing methods can no longer account for the differences in manufacturing across a wide product range. The purpose of this Master's thesis is to assess how well the product costing practices of Fenestra Oy suit the present situation. As a point of comparison, an alternative model for calculating product costs is defined in the thesis. The practical applicability of the new costing model is examined with example calculations, whose results are compared with those of Fenestra Oy's current costing. The results produced by the model deviate from Fenestra Oy's current costing in the anticipated way: with traditional methods, too much cost is allocated to products that are simple to manufacture and too little to complex ones. Within the scope of the thesis, the costing model developed is fairly simple but indicative. As further development of the costing, more detailed data should be collected from production, including an accurate analysis of material waste.
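A small numeric illustration of the distortion described above, comparing a traditional volume-based overhead allocation with an activity-based one; the figures are invented, not Fenestra Oy data.

```python
# Invented two-product example: volume-based vs activity-based overhead allocation.
overhead = 100_000.0                                    # total overhead to allocate
products = {
    "simple":  {"units": 9_000, "setups": 10},          # simple, high-volume product
    "complex": {"units": 1_000, "setups": 40},          # complex, low-volume product
}

total_units = sum(p["units"] for p in products.values())
total_setups = sum(p["setups"] for p in products.values())

for name, p in products.items():
    volume_based = overhead * p["units"] / total_units      # traditional: by production volume
    activity_based = overhead * p["setups"] / total_setups  # ABC-style: by the complexity driver
    print(f"{name:8s} overhead per unit: traditional {volume_based / p['units']:6.2f}  "
          f"activity-based {activity_based / p['units']:6.2f}")
```

With these numbers the simple product carries 10.00 per unit under the volume-based scheme but only 2.22 under the activity-based one, while the complex product moves from 10.00 to 80.00 per unit, which is the direction of error the thesis describes.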

Relevance:

100.00%

Publisher:

Abstract:

I reconsider the short-term effects of fiscal policy when both government spending and taxes are allowed to respond to the level of public debt. I embed the long-term government budget constraint in a VAR, and apply this common trends model to US quarterly data. The results overturn some widely held beliefs on fiscal policy effects. The main finding is that expansionary fiscal policy has contractionary effects on output and inflation. Ricardian effects may dominate when fiscal expansions are expected to be adjusted by future tax rises or spending cuts. The evidence supports RBC models with distortionary taxation. We can discard some alternative interpretations that are based on monetary policy reactions or supply-side effects.
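As a rough sketch of the long-term budget constraint being embedded in the VAR (the notation and the constant-interest-rate simplification are mine, not necessarily the paper's exact formulation):

```latex
% Flow government budget constraint: b = real debt, g = primary spending,
% \tau = taxes, r = constant real interest rate.
b_t = (1+r)\, b_{t-1} + g_t - \tau_t
% Iterating forward and imposing \lim_{T\to\infty}(1+r)^{-T} b_{t+T} = 0:
b_{t-1} = \sum_{j=0}^{\infty} (1+r)^{-(j+1)} \bigl(\tau_{t+j} - g_{t+j}\bigr)
```

In the VAR, this requirement that outstanding debt be backed by future primary surpluses acts as the long-run (common-trends) restriction tying spending, taxes, and debt together.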

Relevance:

100.00%

Publisher:

Abstract:

The legislative reforms in university matters enacted in recent years, beyond the controversies they have provoked, offer universities the possibility of developing a new model in line with the European environment, focusing on quality aims and adapting to current socioeconomic challenges. A new educational model centered on the student, on the development of specific and transversal competences, on the improvement of employability and access to the labor market, and on the attraction and retention of talent is an indispensable condition for effective social mobility and for the homogeneous development of a more responsible and sustainable socioeconomic and productive model.

Relevance:

100.00%

Publisher:

Abstract:

The collection contains statistical publications, annual reports and journals published by euro area central banks and other foreign central banks on monetary policy, financial markets, financial supervision, currency supply and banking. The publications contain statistical data on the countries in question and reviews of the economic situation and monetary policy measures. The statistics are usually in the country's own currency. There are approximately 3,890 titles, dating from the 19th century onwards. The publications in the collection are mostly in English, but there are also publications in the national languages. The central bank collection grows through printed annual reports, statistics and reports. Some of the statistical publications and journals are available as online publications on the central banks' websites. The oldest material in the collection is legislation of the central banks of Sweden and England from the mid-19th century. From the euro area, the collection includes publications of the central banks of the Netherlands, Belgium, Spain, Ireland, Italy, Austria, Greece, Cyprus, Luxembourg, Malta, Portugal, France, Germany, Slovakia, Slovenia and Estonia. Publications of the Bank of Finland are treated as a separate collection. The collection also includes publications of the central banks of Argentina, Australia, Brazil, Bulgaria, India, Iceland, the United Kingdom, Israel, Japan, the former Yugoslavia, Canada, China, Latvia, Lithuania, Poland, Romania, Sweden, Switzerland, Denmark, the Czech Republic, the former Czechoslovakia, Turkey, Ukraine, Hungary, New Zealand, Russia and the United States. The printed collection is kept in the storage collection and is available on request for use and copying.

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this research is to construct a clear anticipatory communicative decision-making process and to implement a Bayesian application that can be used as an anticipatory communicative decision-making support system. The study is a decision-oriented and constructive research project, and it includes examples of simulated situations. As a basis for further methodological discussion about different approaches to management research, a decision-oriented approach is used: it is based on mathematics and logic and is intended to develop problem-solving methods. The approach is theoretical and characteristic of normative management science research. The approach of this study is also constructive; an essential part of the constructive approach is to tie the problem to its solution with theoretical knowledge. Firstly, the basic definitions and behaviours of anticipatory management and managerial communication are provided. These descriptions include discussions of the research environment and the management processes that were formed, and they define and explain the background to the further research. Secondly, managerial communication and anticipatory decision-making are examined on the basis of preparation, problem solving, and solution search, which are also related to risk management analysis. After that, a solution for the decision-making support application is formed using four different Bayesian methods: the Bayesian network, the influence diagram, the qualitative probabilistic network, and the time-critical dynamic network. The purpose of the discussion is not to compare different theories but to explain the theories that are being implemented. Finally, an application of Bayesian networks to the research problem is presented; the usefulness of the prepared model in examining a problem is shown and the results of the research are presented. The theoretical contribution includes definitions and a model of anticipatory decision-making; the main theoretical contribution has been to develop a process for anticipatory decision-making that combines management with communication, problem solving, and the improvement of knowledge. The practical contribution includes a Bayesian Decision Support Model based on Bayesian influence diagrams. The main contributions of this research are two developed processes: one for anticipatory decision-making, and the other for producing a model of a Bayesian network for anticipatory decision-making. In summary, this research contributes to decision-making support by being one of the few publicly available academic descriptions of an anticipatory decision support system, by presenting a Bayesian model grounded in firm theoretical discussion, by publishing algorithms suitable for decision-making support, and by defining the idea of anticipatory decision-making for a parallel version. Finally, based on the results of the research, an analysis of anticipatory management for planned decision-making is presented; it builds on observation of the environment, analysis of weak signals, and alternatives for creative problem solving and communication.
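As a minimal sketch of the Bayesian-network/influence-diagram idea behind such a decision support model, the following example updates an uncertain state from a weak signal and then picks the action with the highest expected utility. The structure, probabilities and utilities are invented for illustration, not the thesis's model.

```python
# Invented influence-diagram-style calculation: Bayes update on a weak signal,
# then expected-utility comparison of two actions.
p_threat = 0.10                                   # prior P(threat)
p_signal = {True: 0.70, False: 0.20}              # P(weak signal observed | threat state)

def posterior_threat(signal_observed: bool) -> float:
    """Bayes' rule: P(threat | signal evidence)."""
    like_t = p_signal[True] if signal_observed else 1 - p_signal[True]
    like_n = p_signal[False] if signal_observed else 1 - p_signal[False]
    num = like_t * p_threat
    return num / (num + like_n * (1 - p_threat))

utility = {("act", True): -20, ("act", False): -5,      # payoff of acting early, per state
           ("wait", True): -100, ("wait", False): 0}    # payoff of waiting, per state

post = posterior_threat(signal_observed=True)
for action in ("act", "wait"):
    eu = post * utility[(action, True)] + (1 - post) * utility[(action, False)]
    print(f"P(threat | signal) = {post:.2f}   EU({action}) = {eu:.1f}")
```

With these illustrative numbers the weak signal raises the threat probability from 0.10 to 0.28, which is enough to make the early, cheaper intervention the expected-utility-maximizing choice; this is the anticipatory step the thesis builds the process around.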