29 results for Overshooting
Abstract:
Simulations of overshooting tropical deep convection using a Cloud Resolving Model with bulk microphysics are presented in order to examine the effect on the water content of the TTL (Tropical Tropopause Layer) and lower stratosphere. This case study is a subproject of the HIBISCUS (Impact of tropical convection on the upper troposphere and lower stratosphere at global scale) campaign, which took place in Bauru, Brazil (22° S, 49° W), from the end of January to early March 2004. Comparisons between 2-D and 3-D simulations suggest that the use of 3-D dynamics is vital in order to capture the mixing between the overshoot and the stratospheric air, which caused evaporation of ice and resulted in an overall moistening of the lower stratosphere. In contrast, a dehydrating effect was predicted by the 2-D simulation because the lack of mixing allowed extra time for the ice transported to the region to precipitate out of the overshoot air. Three different strengths of convection are simulated in 3-D by applying successively lower heating rates (used to initiate the convection) in the boundary layer. Moistening is produced in all cases, indicating that convective vigour is not a factor in whether moistening or dehydration is produced by clouds that penetrate the tropopause, since the weakest case only just did so. An estimate of the moistening effect of these clouds on an air parcel traversing a convective region is made based on the domain mean simulated moistening and the frequency of convective events observed by the IPMet (Instituto de Pesquisas Meteorológicas, Universidade Estadual Paulista) radar (S-band type at 2.8 GHz) to have the same 10 dBZ echo top height as those simulated. These estimates suggest a fairly significant mean moistening of 0.26, 0.13 and 0.05 ppmv in the strongest, medium and weakest cases, respectively, for heights between 16 and 17 km. Since the cold point and WMO (World Meteorological Organization) tropopause in this region lie at ∼15.9 km, this is likely to represent direct stratospheric moistening. Much more moistening is predicted for the 15-16 km height range, with increases of 0.85-2.8 ppmv. However, this air would need to be lofted through the tropopause by the Brewer-Dobson circulation in order to have a stratospheric effect. Whether this is likely is uncertain; in addition, the dehydration of air as it passes through the cold trap and the number of times that trajectories sample convective regions need to be taken into account to gauge the overall stratospheric effect. Nevertheless, the results suggest a potentially significant role for convection in determining the stratospheric water content. Sensitivity tests exploring the impact of increased aerosol numbers in the boundary layer suggest that a corresponding rise in cloud droplet numbers at cloud base would increase the number concentrations of the ice crystals transported to the TTL, which had the effect of reducing the fall speeds of the ice and causing a ∼13% rise in the mean vapour increase in both the 15-16 and 16-17 km height ranges when compared to the control case. Increases in the total water were much larger, being 34% and 132% higher for the same height ranges, but it is unclear whether the extra ice will be able to evaporate before precipitating from the region. These results suggest a possible impact of natural and anthropogenic aerosols on how convective clouds affect stratospheric moisture levels.
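The parcel-moistening estimate described above combines a per-event domain-mean moistening with a radar-derived event frequency. As a rough illustration of that scaling (not the paper's actual computation; the function name and all numbers below are placeholders), a minimal sketch:

```python
# Back-of-the-envelope version of the scaling argument in the abstract:
# per-event domain-mean moistening times the number of qualifying overshoot
# events a parcel encounters. All values are illustrative placeholders,
# not numbers from the study.

def parcel_moistening(mean_increase_ppmv, events_per_day, residence_days):
    """Vapour (ppmv) added to a parcel crossing a convective region."""
    return mean_increase_ppmv * events_per_day * residence_days

# e.g. 0.1 ppmv per event, 0.5 qualifying events/day, a 5-day transit:
print(parcel_moistening(0.1, 0.5, 5.0))  # -> 0.25 ppmv
```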
Abstract:
We develop an open economy macroeconomic model with real capital accumulation and microeconomic foundations. We show that expansionary monetary policy causes exchange rate overshooting, not once, but potentially twice; the secondary repercussion comes through the reaction of firms to changed asset prices and the firms' decisions to invest in real capital. The model sheds further light on the volatility of real and nominal exchange rates, and it suggests that changes in corporate sector profitability may affect exchange rates through international portfolio diversification in corporate securities.
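For readers unfamiliar with the single-overshoot benchmark this model extends, a minimal numerical sketch of the classic Dornbusch dynamics may help: after a monetary expansion, the exchange rate jumps past its new long-run level and then decays back as prices adjust. The parameter values below are purely illustrative, and the sketch does not capture the paper's second, investment-driven overshoot.

```python
# Minimal sketch of the textbook single-overshoot path that this paper
# extends: a money expansion moves the long-run rate to e_bar, the rate
# jumps above it on impact, then decays as prices adjust. Illustrative
# parameters only; the second, investment-driven overshoot is not modelled.
import math

e_bar = 1.10   # new long-run exchange rate after the monetary expansion
jump = 0.08    # impact overshoot above the long-run level
theta = 0.5    # speed of price adjustment

for t in range(8):
    e_t = e_bar + jump * math.exp(-theta * t)
    print(f"t={t}: e = {e_t:.3f}")  # starts at 1.180, converges to 1.100
```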
Abstract:
Olfactory marker protein (OMP) is expressed by mature primary olfactory sensory neurons during development and in adult mice. In mice that lack OMP, olfactory sensory neurons have perturbed electrophysiological activity, and the mice exhibit altered responses and behavior to odor stimulation. To date, defects in axon guidance in mice that lack OMP have not been investigated. During development of the olfactory system in mouse, primary olfactory axons often overshoot their target glomerular layer and project into the deeper external plexiform layer. These aberrant axonal projections are normally detected within the external plexiform layer up to postnatal day 12. We have examined the projections of primary olfactory axons in OMP-tau:LacZ mice and OMP-GFP mice, two independent lines in which the OMP coding region has been replaced by reporter molecules. We found that axons overshoot their target layer and grow into the external plexiform layer in these OMP null mice as they do in wild-type animals. However, in the absence of OMP, overshooting axons are more persistent and remain prominent until 5 weeks postnatally, after which their numbers decrease. Overshooting axons are still present in these mice even at 8 months of age. In heterozygous mice, axons also overshoot into the external plexiform layer; however, there are fewer axons, and they project for shorter distances, compared with those in a homozygous environment. Our results suggest that perturbed electrophysiological responses, caused by loss of OMP in primary olfactory neurons, reduce the ability of primary olfactory axons to recognize their glomerular target.
Abstract:
This paper provides an analysis of why many ‘stars’ tend to fade away rather than enjoying ongoing branding advantages from their reputations. We propose a theory of market overshooting in creative industries that is based on Schumpeterian competition between producers to maintain the interest of boundedly rational fans. As creative producers compete by offering further artistic novelty, this escalation of product complexity eventually leads to overshooting. We propose this as a theory of endogenous cycles in the creative industries.
Abstract:
The Fraunhofer diffraction analysis of cloud-covered satellite imagery has shown that the diffraction pattern approximately follows a cosine-squared distribution. The overshooting tops of clouds and the shadows cast by them contribute much to the diffraction of light, particularly in the high-frequency range. Indeed, cloud-covered imagery can be distinguished from cloud-free imagery on the basis of the rate of decay of the diffracted light power in the high-frequency band.
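A hedged sketch of the kind of spectral test the abstract describes: compute the 2-D Fourier power spectrum of an image, radially average it, and fit the decay rate of power in the high-frequency band. The band edges and the random test image are assumptions; the paper's exact criterion is not given in the abstract.

```python
# Radially averaged 2-D power spectrum and its high-frequency decay rate.
# Per the abstract, overshooting tops and their shadows add high-frequency
# power, so cloudy scenes should show a slower decay (less negative slope).
import numpy as np

def highfreq_decay_rate(img):
    """Slope of log power vs log radial frequency in an upper frequency band."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = spec.shape
    y, x = np.indices(spec.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)
    radial = np.bincount(r.ravel(), spec.ravel()) / counts
    hi = np.arange(len(radial) // 2, len(radial) * 3 // 4)  # assumed band
    return np.polyfit(np.log(hi), np.log(radial[hi] + 1e-12), 1)[0]

rng = np.random.default_rng(0)
print(highfreq_decay_rate(rng.random((128, 128))))  # ~0 for white noise
```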
Abstract:
A circular system is employed in this paper to investigate the swelling behaviors of polyampholyte hydrogels; this circular system can effectively eliminate the disturbance of various factors and keep the surrounding environment constant. It is found that there exists a spontaneous volume transition to the collapsed state of polyampholyte hydrogels, which is attributed to the overshooting effect, and the transition can occur repeatedly under certain conditions. 13C NMR is also employed to probe this swelling behavior.
Abstract:
Rotation has become an important element in evolutionary models of massive stars, specifically via the prediction of rotational mixing. Here we study a sample of stars, including rapid rotators, to constrain such models and use nitrogen enrichments as a probe of the mixing process. Chemical compositions (C, N, O, Mg, and Si) have been estimated for 135 early B-type stars in the Large Magellanic Cloud with projected rotational velocities up to ∼300 km s⁻¹ using a non-LTE TLUSTY model atmosphere grid. Evolutionary models, including rotational mixing, have been generated attempting to reproduce these observations by adjusting the overshooting and rotational mixing parameters, and produce reasonable agreement with 60% of our core hydrogen burning sample. We find (excluding known binaries) a significant population of highly nitrogen-enriched intrinsic slow rotators (v sin i ≲ 50 km s⁻¹) incompatible with our models (∼20% of the sample). Furthermore, while we find fast rotators with enrichments in agreement with the models, the observation of evolved (log g < 3.7 dex) fast rotators that are relatively unenriched (a further ∼20% of the sample) challenges the concept of rotational mixing. We also find that 70% of our blue supergiant sample cannot have evolved directly from the hydrogen-burning main sequence. We are left with a picture where invoking binarity and perhaps fossil magnetic fields is required to understand the surface properties of a population of massive main-sequence stars.
Abstract:
Since the financial crash of 2008, monetary policy has been in a state of stasis – a condition in which things are not changing, moving, or progressing, but rather appear frozen. Interest rates have been frozen at low levels for a considerable period of time. Inflation targets have consistently been missed, through phases of both overshooting and undershooting. At the same time, a variety of unconventional monetary policies involving asset purchases and liquidity provision have been pursued. Questions have been raised from a variety of sources, including international organizations such as the BIS and IMF, whose distinct positions concern the continuing validity and sustainability of existing monetary policy frameworks, not least because inflation targeting has ceased to act as a reliable guide for policy for over six years. Despite this, central banks have been reluctant to debate moving to a new formal policy framework. This article argues that, as an apex policy forum, only the G20 leaders' summit has the necessary political authority to call central banks to account and initiate a wide-ranging debate on the future of monetary policy. A case is made for convening a monetary policy working group to discuss a range of positions, including those of the BIS and IMF, and to make recommendations, because the G20 has been most effective in displaying international financial leadership when leaders have convened and made use of specialist working groups.
Abstract:
This paper studies a dynamic-optimizing model of a semi-small open economy with sticky nominal prices and wages. The model exhibits exchange rate overshooting in response to money supply shocks. The predicted variability of nominal and real exchange rates is roughly consistent with that of G7 effective exchange rates during the post-Bretton Woods era.
Abstract:
With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are used, this wealth of information comes with a dimensionality problem. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown steadily more popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help identify monetary policy shocks better, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of the temporal instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose the factor structure on time-varying parameters and to show that there are only a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using the factor-augmented vector autoregression (FAVAR) model. Earlier VAR-based studies found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying monetary policy transmission, and that it helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the data set, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model.
We find that a credit shock immediately widens credit spreads, lowers the value of Treasury bonds, and causes a recession. These shocks have a significant effect on measures of real activity, price indices, leading indicators, and financial indicators. Unlike other studies, our identification procedure for the structural shock requires no timing restrictions between financial and macroeconomic factors. Moreover, it provides an interpretation of the factors without constraining their estimation. In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two parameter-reduction methods. The model is applied in two forecasting exercises using U.S. and Canadian data from Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps forecast the major macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors yields coherent and precise results on the effects and transmission of monetary policy in the United States. Whereas the FAVAR model employed in that study required estimating 510 VAR coefficients, we produce similar results with only 84 parameters for the dynamic process of the factors. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment, using the structural FAVARMA model. Within the financial accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the U.S. external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance decomposition analysis reveals that this credit shock has a significant effect on various sectors of real activity, price indices, leading indicators, and credit spreads. On the other hand, an unexpected rise in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the U.S. market.
Finally, given the identification procedure for the structural shocks, we find economically interpretable factors. The behaviour of economic agents and of the economic environment can vary over time (e.g., changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of temporal variability in the coefficients is probably very small, and we provide what is, to our knowledge, the first empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows an important shift since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the temporal instability in nearly 700 coefficients.
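As a concrete illustration of the FAVAR idea running through the first chapters, here is a minimal sketch: extract principal-component factors from a large standardized panel, then fit a VAR on the factors augmented with a policy variable. The synthetic data and the statsmodels dependency are assumptions for the sketch; this is not the thesis's estimation code.

```python
# Minimal FAVAR-style sketch: principal components as factor estimates,
# then a VAR on [factors, policy instrument]. Synthetic data stand in
# for the macro panel; dimensions and lag length are illustrative.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T, N, k = 200, 120, 3                       # periods, indicators, factors
panel = rng.standard_normal((T, N))         # placeholder macro panel
policy = rng.standard_normal((T, 1))        # placeholder policy rate

X = (panel - panel.mean(0)) / panel.std(0)  # standardize each series
U, s, Vt = np.linalg.svd(X, full_matrices=False)
factors = U[:, :k] * s[:k]                  # first k principal components

favar_data = np.hstack([factors, policy])
res = VAR(favar_data).fit(maxlags=2)
irf = res.irf(12)                           # impulse responses for all variables
print(res.summary())
```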
Abstract:
A time series of the observed transport through an array of moorings across the Mozambique Channel is compared with that of six model runs with ocean general circulation models. In the observations, the seasonal cycle cannot be distinguished from red noise, while this cycle is dominant in the transport of the numerical models. It is found, however, that the seasonal cycles of the observations and numerical models are similar in strength and phase. These cycles have an amplitude of 5 Sv and a maximum in September, and can be explained by the yearly variation of the wind forcing. The seasonal cycle in the models is dominant because the spectral density at other frequencies is underrepresented. The main deviations from the observations are found at depths shallower than 1500 m and in the 5/y–6/y frequency range. Nevertheless, the structure of eddies in the models is close to the observed eddy structure. The discrepancy is found to be related to the formation mechanism and the formation position of the eddies. In the observations, eddies are frequently formed from an overshooting current near the mooring section, as proposed by Ridderinkhof and de Ruijter (2003) and Harlander et al. (2009). This causes an alternation of events at the mooring section, varying between a strong southward current and the formation and passing of an eddy, which results in large transport variability in the 5/y–6/y frequency range. In the models, the eddies are formed further north and propagate through the section. No alternation similar to that in the observations occurs, resulting in a more constant transport.
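The red-noise comparison mentioned above can be illustrated with a short sketch: estimate the spectrum of a (here synthetic) monthly transport series and compare the annual peak against the theoretical AR(1) background. The significance criterion and all parameters are illustrative assumptions, not the paper's method.

```python
# Annual harmonic vs AR(1) "red noise" background for a transport series.
# Synthetic monthly data replace the mooring transports.
import numpy as np

rng = np.random.default_rng(2)
n = 240                                   # 20 years of monthly transports
t = np.arange(n)
series = 5.0 * np.cos(2 * np.pi * t / 12) + rng.standard_normal(n) * 8.0

f = np.fft.rfftfreq(n, d=1.0)             # cycles per month
power = np.abs(np.fft.rfft(series - series.mean())) ** 2 / n
r1 = np.corrcoef(series[:-1], series[1:])[0, 1]   # lag-1 autocorrelation
red = series.var() * (1 - r1**2) / (1 + r1**2 - 2 * r1 * np.cos(2 * np.pi * f))

annual = np.argmin(np.abs(f - 1 / 12))
print(power[annual] / red[annual])        # >> 1 suggests a cycle above red noise
```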
Abstract:
One reason for the recent asset price bubbles in many developed countries could be regulatory capital arbitrage. Regulatory and legal changes can help traditional banks move their assets off their balance sheets into the lightly regulated shadows and thus enable regulatory arbitrage through the securitized sector. This paper adopts a global vector autoregression (GVAR) methodology to assess the effects of regulatory capital arbitrage on equity prices, house prices and economic activity across 11 OECD countries/regions. A counterfactual experiment disentangles the effects of regulatory arbitrage following a change in the net capital rule for investment banks in April 2004 and the adoption of the Basel II Accord in June 2004. The results provide evidence for the existence of an international finance multiplier, with about half of the countries overshooting U.S. impulse responses. The counterfactual shows that regulatory arbitrage via the U.S. securitized sector may enhance the cross-country reallocation of capital from housing markets towards equity markets.
Abstract:
Colour-magnitude diagrams (CMDs) of the Small Magellanic Cloud (SMC) star cluster NGC 419, derived from Hubble Space Telescope (HST)/Advanced Camera for Surveys (ACS) data, reveal a well-delineated secondary clump located below the classical compact red clump typical of intermediate-age populations. We demonstrate that this feature belongs to the cluster itself, rather than to the underlying SMC field. Then, we use synthetic CMDs to show that it corresponds very well to the secondary clump predicted to appear as a result of He-ignition in stars just massive enough to avoid e⁻-degeneracy settling in their H-exhausted cores. The main red clump instead is made of the slightly less massive stars which passed through e⁻-degeneracy and ignited He at the tip of the red giant branch. In other words, NGC 419 is the rare snapshot of a cluster while undergoing the fast transition from classical to degenerate H-exhausted cores. At this particular moment of a cluster's life, the colour distance between the main-sequence turn-off and the red clump(s) depends sensitively on the amount of convective core overshooting, Λc. By coupling measurements of this colour separation with fits to the red clump morphology, we are able to estimate simultaneously the cluster mean age (1.35 +0.11/−0.04 Gyr) and overshooting efficiency (Λc = 0.47 +0.14/−0.04). Therefore, clusters like NGC 419 may constitute important marks in the age scale of intermediate-age populations. After eye inspection of other CMDs derived from HST/ACS data, we suggest that the same secondary clump may also be present in the Large Magellanic Cloud clusters NGC 1751, 1783, 1806, 1846, 1852 and 1917.
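The simultaneous age/overshooting estimate rests on fitting model predictions to observed CMD features over a parameter grid. A toy sketch of that procedure follows; the predictor function is a placeholder (the real fit uses synthetic CMDs from stellar models, and couples in the red clump morphology as a second observable to break the age/Λc degeneracy).

```python
# Toy grid chi-square in (age, Lambda_c) against one observable, the
# turn-off-to-clump colour separation. The predictor is a placeholder for
# synthetic-CMD models; the real fit adds the red clump morphology as a
# second constraint to break the degeneracy along this grid's ridge.
import numpy as np

def predicted_separation(age_gyr, lam_c):
    return 0.8 + 0.3 * np.log(age_gyr) + 0.5 * lam_c   # stand-in model

ages = np.linspace(1.0, 2.0, 101)
lams = np.linspace(0.0, 0.8, 81)
obs, sigma = 1.12, 0.02                                # assumed measurement

chi2 = np.array([[((predicted_separation(a, l) - obs) / sigma) ** 2
                  for l in lams] for a in ages])
i, j = np.unravel_index(chi2.argmin(), chi2.shape)
print(f"best age = {ages[i]:.2f} Gyr, Lambda_c = {lams[j]:.2f}")
```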
Abstract:
This paper investigates the adjustment of the exchange rate in the transition from a fixed exchange rate regime with an appreciated real exchange rate to a floating regime. We argue, theoretically and empirically, that the depreciation of the exchange rate well beyond the appreciation accumulated over the fixed-rate period, observed in the various countries that underwent this regime change, is to be expected and should not be confused with Dornbusch's overshooting analysis. In broad terms, our argument is that this excessive depreciation may be the correction mechanism for the growth of external debt, which during the period of exchange rate appreciation stood above its steady-state rate. The intensity and duration of this adjustment depend, among other things, on the availability of new borrowing, the interest rate paid on it, and the response of the trade balance to the exchange rate.
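A stylized numerical rendering of the argued mechanism (all parameters and functional forms are assumptions, not the paper's model): a floating-rate depreciation beyond the accumulated appreciation generates trade surpluses that work the inherited external debt back toward its steady state.

```python
# Stylized debt-correction dynamics: the floating rate starts well above
# (more depreciated than) its steady state, generating trade surpluses
# against interest on the inherited debt. Parameters, the linear trade
# balance, and the exchange-rate decay path are all assumptions.

r, phi = 0.05, 0.5        # interest rate; trade-balance response to e
e_ss = 1.0                # steady-state real exchange rate
b = 1.5                   # inherited external debt, above its steady state
e = 1.4                   # post-float rate: depreciation overshoots e_ss

for t in range(10):
    tb = phi * (e - e_ss)         # trade surplus while e stays depreciated
    b = (1 + r) * b - tb          # external-debt accumulation identity
    e = e_ss + 0.7 * (e - e_ss)   # e decays back toward steady state
    print(f"t={t}: e = {e:.3f}, b = {b:.3f}")
```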
Abstract:
The EU HIBISCUS project consisted of a series of field campaigns during the intense convective summers of 2001, 2003 and 2004 in the State of São Paulo in Brazil. Its objective was to investigate the impact of deep convection on the Tropical Tropopause Layer (TTL) and the lower stratosphere by providing a new set of observational data on meteorology, tracers of horizontal and vertical transport, water vapour, clouds, and chemistry in the tropical Upper Troposphere/Lower Stratosphere (UT/LS). This was achieved using short-duration research balloons to study local phenomena associated with convection over land, and long-duration balloons circumnavigating the globe to study the contrast between land and oceans. Analyses of observations of short-lived tracers, ozone and ice particles show strong episodic local updraughts of cold air across the lapse rate tropopause up to 18 or 19 km (420-440 K) in the lower stratosphere by overshooting towers. The long-duration balloon and satellite measurements reveal a contrast between the composition of the lower stratosphere over land and oceanic areas, suggesting a significant global impact of such events. The overshoots are shown to be well captured by non-hydrostatic meso-scale Cloud Resolving Models, indicating vertical velocities of 50-60 m s⁻¹ at the top of the Neutral Buoyancy Level (NBL) at around 14 km, but, in contrast, are poorly represented by global Chemistry-Transport Models (CTMs) forced by Numerical Weather Prediction (NWP) models, which underestimate the overshooting process. Finally, the data collected by the HIBISCUS balloons have allowed a thorough evaluation of NWP temperature analyses and reanalyses, as well as of satellite ozone, nitrogen oxide, water vapour and bromine oxide measurements in the tropics.