951 results for Reduced models


Relevance:

30.00%

Publisher:

Abstract:

The purpose of this thesis is to examine various policy implementation models and to determine what use they are to a government. In order to ensure that governmental proposals are created and exercised in an effective manner, there must be some guidelines in place which will assist in resolving difficult situations. All governments face the challenge of responding to public demand by delivering the type of policy responses that will attempt to answer those demands. The problem for people in positions of policy-making responsibility is to balance the competing forces that would influence policy. This thesis examines provincial government policy in two unique cases. The first is the revolutionary recommendations brought forth in the Hall-Dennis Report. The second is the question of extending full funding to the end of high school in the separate school system. These two cases illustrate how divergent and problematic the policy-making duties of any government may be. In order to respond to these political challenges, decision-makers must have a clear understanding of what they are attempting to do. They must also have an assortment of policy-making models that will ensure a policy response effectively deals with the issue under examination. A government must make every effort to ensure that all policy-making methods are considered and that the data gathered is inserted into the most appropriate model. Currently, there is considerable debate over the benefits of the progressive, individualistic education approach proposed by the Hall-Dennis Committee. This debate is usually intensified during periods of economic uncertainty. Periodically, the province will also experience brief yet equally intense debate on the question of separate school funding. At one level, this debate centres on the efficiency of maintaining two parallel education systems, but it frequently has undertones of the religious animosity common in Ontario's history. As a result of the two policy cases under study, we may ask ourselves these questions: (a) did the policies in question improve the general quality of life in the province? and (b) did the policies unite the province? In the cases of educational instruction and finance, the debate is ongoing and unsettling. Currently, there is a widespread belief that provincial students at the elementary and secondary levels are not being educated adequately to meet the challenges of the twenty-first century. The perceived culprit is individualized education, which sees students progressing through the system at their own pace and not meeting adequate education standards. The question of the financing of Catholic education occasionally rears its head in a painful fashion within the province. Some public school supporters tend to take extension as a personal religious defeat, rather than an opportunity to demonstrate that educational diversity can be accommodated within Canada's most populous province. This thesis is an attempt to analyze how successful provincial policy-implementation models were in answering public demand. A majority of the public did not demand additional separate school funding, yet it was put into place. The same majority did insist on an examination of educational methods, and the government did put changes in place. The thesis will also demonstrate how policy, if wisely created, may spread additional benefits to the public at large. Catholic students currently enjoy a much improved financial contribution from the province, yet these additional funds were taken from somewhere. The public system had its funds reduced with what would appear to be minimal impact, which indicates that government policy is still sensitive to the strongly held convictions of those who oppose a given policy.

Relevance:

30.00%

Publisher:

Abstract:

Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, which are both useful to reduce the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of Stochastic Discount Factor (SDF) or pricing kernel as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
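To fix ideas, the two latent structures being unified can be written compactly. This is a hedged sketch in generic notation (K factors F, state variables s_t, risk-free rate r_f), not the paper's exact specification:

    % SDF spanned by the factors, with coefficients driven by state variables
    m_{t+1} = \sum_{k=1}^{K} \lambda_k(s_t)\, F_{k,t+1}, \qquad E_t\!\left[ m_{t+1} R_{i,t+1} \right] = 1,
    % which delivers a conditional beta pricing representation
    E_t\!\left[ R_{i,t+1} \right] - r_{f,t} = \sum_{k=1}^{K} \beta_{ik}(s_t)\, \pi_k(s_t).

Here the betas are conditional regression coefficients of returns on the factors and the pi_k are the associated risk premia; cross-sectional dimension reduction enters through K being small, longitudinal reduction through the state variables s_t.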

Relevance:

30.00%

Publisher:

Abstract:

With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are used, this wealth of information comes with the problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown increasingly popular in economics since the 1990s. Given the availability of data and the computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor methods be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse-response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of the time instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the final chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Earlier VAR-based studies found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying the transmission of monetary policy and helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse-response functions for every indicator in the data set, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and the propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately increases credit spreads, lowers the value of Treasury bills, and causes a recession. These shocks have an important effect on measures of real activity, price indices, leading indicators, and financial indicators. Unlike other studies, our identification procedure for the structural shock requires no timing restrictions between financial and macroeconomic factors. Moreover, it gives an interpretation of the factors without constraining their estimation. In the third article we study the relationship between VARMA and factor representations of vector stochastic processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, a multivariate series and its associated factors cannot both follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two methods of reducing the number of parameters. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA part helps forecast the major macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives consistent and precise results on the effects and transmission of monetary policy in the United States. In contrast to the FAVAR model used in that earlier study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters for the dynamic process of the factors. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using the structural FAVARMA model. Within the financial-accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we proxy the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance-decomposition analysis reveals that this credit shock has an important effect on different sectors of real activity, price indices, leading indicators, and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, proxied here by the US market. Finally, given the identification procedure for the structural shocks, we find economically interpretable factors. The behaviour of economic agents and of the economic environment can vary over time (e.g., changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variation in the coefficients is probably very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach proposed in Stevanovic (2010) is applied within a standard VAR model with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the time instability of almost 700 coefficients.
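To make the FAVAR mechanics concrete, below is a minimal Python sketch of the standard two-step estimator (principal-component factors, then a least-squares VAR on the factors), in the spirit of Bernanke, Boivin and Eliasz (2005). The function, variable names and simulated data are illustrative assumptions, not the thesis code:

    import numpy as np

    def favar_two_step(X, r, p):
        """Two-step FAVAR estimation sketch.
        X : T x N panel of standardized macro indicators
        r : number of latent factors, p : VAR lag order."""
        T, N = X.shape
        # Step 1: extract factors as principal components of the panel.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        F = np.sqrt(T) * U[:, :r]            # T x r factor estimates, F'F/T = I
        Lam = X.T @ F / T                    # N x r loadings
        # Step 2: fit a VAR(p) to the factors by least squares.
        Y = F[p:]                            # (T-p) x r
        Z = np.hstack([F[p - i - 1:T - i - 1] for i in range(p)])  # lagged factors
        B = np.linalg.lstsq(Z, Y, rcond=None)[0]  # (r*p) x r stacked lag coefficients
        resid = Y - Z @ B                    # innovations, input to structural identification
        return F, Lam, B, resid

    # Illustrative use on simulated data:
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 120))
    F, Lam, B, resid = favar_two_step((X - X.mean(0)) / X.std(0), r=6, p=2)

The FAVARMA extension discussed above replaces the VAR in step 2 with a VARMA for the factor dynamics, which is how the parameter count drops (e.g. from 510 to 84 in the US exercise).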

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes three tests to determine whether a given nonlinear device noise model is in agreement with accepted thermodynamic principles. These tests are applied to several models. One conclusion is that every Gaussian noise model for any nonlinear device predicts thermodynamically impossible circuit behavior: these models should be abandoned. The nonlinear shot-noise model, however, predicts thermodynamically acceptable behavior under a constraint derived here. Further, this constraint specifies the current noise amplitude at each operating point from knowledge of the device v-i curve alone. For the Gaussian and shot-noise models, this paper shows how the thermodynamic requirements can be reduced to concise mathematical tests involving no approximation.
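For context, the two classical noise intensities against which such models are usually framed are the Nyquist (thermal) and full shot-noise spectral densities; these are standard textbook forms, not the constraint derived in the paper:

    % Nyquist (thermal) voltage noise of a linear resistor R at temperature T
    S_v(f) = 4 k_B T R \quad [\mathrm{V^2/Hz}]
    % full shot noise of a junction carrying current I
    S_i(f) = 2 q |I| \quad [\mathrm{A^2/Hz}]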

Relevance:

30.00%

Publisher:

Abstract:

The high level of realism and interaction in many computer graphics applications requires techniques for processing complex geometric models. First, we present a method that provides an accurate low-resolution approximation of a multi-chart textured model, guaranteeing geometric fidelity and correct preservation of the appearance attributes. Then, we introduce a mesh structure called Compact Model that approximates dense triangular meshes while preserving sharp features, allowing adaptive reconstructions and supporting textured models. Next, we design a new space deformation technique called *Cages, based on a multi-level system of cages, that preserves the smoothness of the mesh between neighbouring cages and is extremely versatile, allowing the use of heterogeneous sets of coordinates and different levels of deformation. Finally, we propose a hybrid method that allows any deformation technique to be applied to large models, obtaining high-quality results with a reduced memory footprint and high performance.
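As a point of reference for how cage-based systems drive a dense mesh, here is a minimal Python sketch of generic cage deformation with precomputed generalized barycentric coordinates (e.g. mean-value coordinates); the names and toy data are illustrative and not taken from *Cages:

    import numpy as np

    def deform_with_cage(W, cage_vertices_deformed):
        """Generic cage deformation: each mesh vertex is a fixed affine
        combination of cage vertices.
        W : V x C matrix of cage coordinates (rows sum to 1),
            precomputed on the rest pose (e.g. mean-value coordinates).
        cage_vertices_deformed : C x 3 positions of the edited cage."""
        return W @ cage_vertices_deformed    # V x 3 deformed mesh vertices

    # Illustrative use: a 4-vertex cage driving 2 mesh vertices.
    W = np.array([[0.25, 0.25, 0.25, 0.25],
                  [0.70, 0.10, 0.10, 0.10]])
    cage = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
    cage[2] += np.array([0.0, 0.5, 0.0])     # move one cage vertex
    print(deform_with_cage(W, cage))

The design question in a multi-level system is essentially how W is built for each cage and how the deformations of nested cages compose, which is where smoothness between neighbouring cages comes from.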

Relevance:

30.00%

Publisher:

Abstract:

Populations on the periphery of a species' range may experience more severe environmental conditions relative to populations closer to the core of the range. As a consequence, peripheral populations may have lower reproductive success or survival, which may affect their persistence. In this study, we examined the influence of environmental conditions on breeding biology and nest survival in a threatened population of Loggerhead Shrikes (Lanius ludovicianus) at the northern limit of the range in southeastern Alberta, Canada, and compared our estimates with those from shrike populations elsewhere in the range. Over the 2-year study in 1992–1993, clutch sizes averaged 6.4 eggs, and most nests were initiated between mid-May and mid-June. Rate of renesting following initial nest failure was 19%, and there were no known cases of double-brooding. Compared with southern populations, rate of renesting was lower and clutch sizes tended to be larger, whereas the length of the nestling and hatchling periods appeared to be similar. Most nest failures were directly associated with nest predators, but weather had a greater direct effect in 1993. Nest survival models indicated higher daily nest survival during warmer temperatures and lower precipitation, which may include direct effects of weather on nestlings as well as indirect effects on predator behavior or food abundance. Daily nest survival varied over the nesting cycle in a curvilinear pattern, with a slight increase through laying, approximately constant survival through incubation, and a decline through the nestling period. Partial brood loss during the nestling stage was high, particularly in 1993, when conditions were cool and wet. Overall, the lower likelihood of renesting, lower nest survival, and higher partial brood loss appeared to depress reproductive output in this population relative to those elsewhere in the range, and may have increased susceptibility to population declines.
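For readers outside avian demography, the quantity behind "daily nest survival" is worth stating: overall nest success is the product of daily survival probabilities over the nesting cycle, commonly modelled on the logit scale (a standard formulation, with covariates chosen here to mirror the study, not its fitted model):

    S = \prod_{t=1}^{D} s_t , \qquad
    \mathrm{logit}(s_t) = \beta_0 + \beta_1\,\mathrm{temp}_t + \beta_2\,\mathrm{precip}_t + f(t)

where D is the length of the nesting cycle and f(t) allows the curvilinear stage effect described above; even small covariate effects on s_t compound multiplicatively over the month-long cycle.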

Relevance:

30.00%

Publisher:

Abstract:

Satellite data, reanalysis products and climate models are combined to monitor changes in water vapour, clear-sky radiative cooling of the atmosphere and precipitation over the period 1979-2006. Climate models are able to simulate observed increases in column-integrated water vapour (CWV) with surface temperature (Ts) over the ocean. Changes in the observing system lead to spurious variability in water vapour and clear-sky longwave radiation in reanalysis products. Nevertheless, all products considered exhibit a robust increase in clear-sky longwave radiative cooling from the atmosphere to the surface; clear-sky longwave radiative cooling of the atmosphere is found to increase with Ts at a rate of ~4 W m⁻² K⁻¹ over tropical ocean regions of mean descending vertical motion. Precipitation (P) is tightly coupled to atmospheric radiative cooling rates, which implies an increase in P with warming at a slower rate than the observed increases in CWV. Since convective precipitation depends on moisture convergence, this implies enhanced precipitation over convective regions and reduced precipitation over convectively suppressed regimes. To quantify this response, observed and simulated changes in precipitation rate are analysed separately over tropical regions of mean ascending and descending vertical motion. The observed response is found to be substantially larger than the model simulations and climate change projections. It is currently not clear whether this is due to deficiencies in model parametrizations or to errors in satellite retrievals.
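The asserted coupling between precipitation and radiative cooling follows from the atmospheric energy budget; schematically (a standard balance with symbols defined here, not taken from the paper):

    L_v P + \mathrm{SH} \approx Q_{\mathrm{rad}}, \qquad
    L_v \frac{dP}{dT_s} \lesssim \frac{dQ_{\mathrm{clr}}}{dT_s} \sim 4\ \mathrm{W\,m^{-2}\,K^{-1}}

where L_v is the latent heat of vaporization, SH the sensible heat flux and Q_rad the net radiative cooling of the atmospheric column. Because CWV rises with warming at roughly the Clausius-Clapeyron rate (~7 % K⁻¹), much faster than this budget allows P to rise, moisture convergence must strengthen over convective regions at the expense of suppressed regimes.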

Relevance:

30.00%

Publisher:

Abstract:

Although accuracy of digital elevation models (DEMs) can be quantified and measured in different ways, each is influenced by three main factors: terrain character, sampling strategy and interpolation method. These factors, and their interaction, are discussed. The generation of DEMs from digitised contours is emphasised because this is the major source of DEMs, particularly within member countries of OEEPE. Such DEMs often exhibit unwelcome artifacts, depending on the interpolation method employed. The origin and magnitude of these effects and how they can be reduced to improve the accuracy of the DEMs are also discussed.
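A small self-contained illustration of this interpolation dependence, assuming NumPy and SciPy are available (the toy terrain and parameters are invented for illustration): heights quantized to contour values are gridded with two methods, and the difference between the resulting DEMs is purely method-induced.

    import numpy as np
    from scipy.interpolate import griddata

    # Toy terrain sampled with heights quantized to contour values,
    # mimicking a DEM derived from digitised contours.
    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 1, size=(800, 2))
    z = np.round(np.sin(3 * xy[:, 0]) + np.cos(3 * xy[:, 1]), 1)

    # Interpolate onto a regular grid with two different methods.
    gx, gy = np.mgrid[0:1:100j, 0:1:100j]
    dem_linear = griddata(xy, z, (gx, gy), method='linear')
    dem_cubic = griddata(xy, z, (gx, gy), method='cubic')

    # The two DEMs differ only through the interpolation choice.
    diff = np.nanmax(np.abs(dem_linear - dem_cubic))
    print(f"max interpolation-induced difference: {diff:.3f}")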

Relevance:

30.00%

Publisher:

Abstract:

The uptake and storage of anthropogenic carbon in the North Atlantic is investigated using different configurations of ocean general circulation/carbon cycle models. We investigate how different representations of the ocean physics in the models, which represent the range of models currently in use, affect the evolution of CO2 uptake in the North Atlantic. The buffer effect of the ocean carbon system would be expected to reduce ocean CO2 uptake as the ocean absorbs increasing amounts of CO2. We find that the strength of the buffer effect is very dependent on the model ocean state, as it affects both the magnitude and timing of the changes in uptake. The timescale over which uptake of CO2 in the North Atlantic drops to below preindustrial levels is particularly sensitive to the ocean state which sets the degree of buffering; it is less sensitive to the choice of atmospheric CO2 forcing scenario. Neglecting physical climate change effects, North Atlantic CO2 uptake drops below preindustrial levels between 50 and 300 years after stabilisation of atmospheric CO2 in different model configurations. Storage of anthropogenic carbon in the North Atlantic varies much less among the different model configurations, as differences in ocean transport of dissolved inorganic carbon and uptake of CO2 compensate each other. This supports the idea that measured inventories of anthropogenic carbon in the real ocean cannot be used to constrain the surface uptake. Including physical climate change effects reduces anthropogenic CO2 uptake and storage in the North Atlantic further, due to the combined effects of surface warming, increased freshwater input, and a slowdown of the meridional overturning circulation. The timescale over which North Atlantic CO2 uptake drops to below preindustrial levels is reduced by about one-third, leading to an estimate of this timescale for the real world of about 50 years after the stabilisation of atmospheric CO2. In the climate change experiment, a shallowing of the mixed layer depths in the North Atlantic results in a significant reduction in primary production, reducing the potential role for biology in drawing down anthropogenic CO2.
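The buffer effect invoked here is conventionally quantified by the Revelle factor (standard definition, given for reference):

    R = \frac{\partial \mathrm{pCO_2} / \mathrm{pCO_2}}{\partial \mathrm{DIC} / \mathrm{DIC}} \approx 8\text{--}15

so a given fractional increase in dissolved inorganic carbon (DIC) raises surface-water pCO2 by a roughly tenfold larger fraction. R itself grows as DIC accumulates, which is why uptake weakens as the ocean absorbs more CO2 and why the ocean state, which sets the degree of buffering, controls the timing of the decline in uptake.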

Relevance:

30.00%

Publisher:

Abstract:

We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the data. Routine use of mixture models alongside other approaches to phylogenetic inference may often reveal hidden or unexpected patterns of sequence evolution and can improve phylogenetic inference.
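The contrast with partitioning is clearest in the site likelihood: a mixture model sums over the component substitution models at every site, with no prior assignment of sites to models (standard formulation, notation assumed here):

    L(D_i) = \sum_{m=1}^{M} w_m \, L(D_i \mid Q_m), \qquad \sum_{m} w_m = 1

where D_i is the alignment column at site i, Q_m the m-th substitution model and w_m its estimated weight; a partitioned analysis instead fixes a priori which single Q_m applies to each site, which is why it requires the prior knowledge that mixture models do without.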

Relevance:

30.00%

Publisher:

Abstract:

The recent emergence of novel pathogenic human and animal coronaviruses has highlighted the need for antiviral therapies that are effective against a spectrum of these viruses. We have used several strains of murine hepatitis virus (MHV) in cell culture and in vivo in mouse models to investigate the antiviral characteristics of peptide-conjugated antisense phosphorodiamidate morpholino oligomers (P-PMOs). Ten P-PMOs directed against various target sites in the viral genome were tested in cell culture, and one of these (5TERM), which was complementary to the 5' terminus of the genomic RNA, was effective against six strains of MHV. Further studies were carried out with various arginine-rich peptides conjugated to the 5TERM PMO sequence in order to evaluate efficacy and toxicity and thereby select candidates for in vivo testing. In uninfected mice, prolonged P-PMO treatment did not result in weight loss or detectable histopathologic changes. 5TERM P-PMO treatment reduced viral titers in target organs and protected mice against virus-induced tissue damage. Prophylactic 5TERM P-PMO treatment decreased the amount of weight loss associated with infection under most experimental conditions. Treatment also prolonged survival in two lethal challenge models. In some cases of high-dose viral inoculation followed by delayed treatment, 5TERM P-PMO treatment was not protective and increased morbidity in the treated group, suggesting that P-PMO may cause toxic effects in diseased mice that were not apparent in the uninfected animals. However, the strong antiviral effect observed suggests that with further development, P-PMO may provide an effective therapeutic approach against a broad range of coronavirus infections.

Relevance:

30.00%

Publisher:

Abstract:

Experimental data for the title reaction were modeled using master equation (ME)/RRKM methods based on the MultiWell suite of programs. The starting point for the exercise was the empirical fitting provided by the NASA (Sander, S. P.; Finlayson-Pitts, B. J.; Friedl, R. R.; Golden, D. M.; Huie, R. E.; Kolb, C. E.; Kurylo, M. J.; Molina, M. J.; Moortgat, G. K.; Orkin, V. L.; Ravishankara, A. R. Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies, Evaluation Number 15; Jet Propulsion Laboratory: Pasadena, California, 2006)(1) and IUPAC (Atkinson, R.; Baulch, D. L.; Cox, R. A.; Hampson, R. F., Jr.; Kerr, J. A.; Rossi, M. J.; Troe, J. J. Phys. Chem. Ref. Data 2000, 29, 167)(2) data evaluation panels, which represents the data in the experimental pressure ranges rather well. Despite the availability of quite reliable parameters for these calculations (molecular vibrational frequencies (Parthiban, S.; Lee, T. J. J. Chem. Phys. 2000, 113, 145)(3) and a value (Orlando, J. J.; Tyndall, G. S. J. Phys. Chem. 1996, 100, 19398)(4) of the bond dissociation energy, D298(BrO–NO2) = 118 kJ mol⁻¹, corresponding to ΔH°₀ = 114.3 kJ mol⁻¹ at 0 K) and the use of RRKM/ME methods, fitting calculations to the reported data or the empirical equations was anything but straightforward. Using these molecular parameters resulted in a discrepancy between the calculations and the database of rate constants of a factor of ca. 4 at, or close to, the low-pressure limit. Agreement between calculation and experiment could be achieved in two ways: either by increasing ΔH°₀ to an unrealistically high value (149.3 kJ mol⁻¹) or by increasing ⟨ΔE⟩down, the average energy transferred in a downward collision, to an unusually large value (> 5000 cm⁻¹). The discrepancy could also be reduced by making all overall rotations fully active. The system was relatively insensitive to changing the moments of inertia in the transition state to increase the centrifugal effect. The possibility of involvement of BrOONO was tested and cannot account for the difficulties of fitting the data.
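The NASA/IUPAC empirical fits referred to above use the standard falloff parameterization for pressure-dependent association reactions; for reference (generic form, not a result of this paper):

    k([\mathrm{M}],T) = \frac{k_0(T)[\mathrm{M}]}{1 + k_0(T)[\mathrm{M}]/k_\infty(T)} \,
        F_c^{\{1 + [\log_{10}(k_0(T)[\mathrm{M}]/k_\infty(T))]^2\}^{-1}}

with limiting low- and high-pressure rate constants k_0 and k_inf and broadening factor F_c (fixed at 0.6 in the NASA evaluations); the ME/RRKM calculations described here attempt to reproduce the same falloff curve from molecular parameters rather than by empirical fitting.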

Relevance:

30.00%

Publisher:

Abstract:

The authors investigated whether heart rate (HR) responses to voluntary recall of trauma memories (a) are related to posttraumatic stress disorder (PTSD) and (b) predict recovery 6 months later. Sixty-two assault survivors completed a recall task modeled on imaginal reliving in the initial weeks postassault. Possible cognitive modulators of HR responsivity were assessed: dissociation, rumination, and trauma memory disorganization. Individuals with PTSD showed a reduced HR response to reliving compared to those without PTSD, but reported greater distress. Notably, higher HR response, but not self-reported distress, during reliving predicted greater symptom reduction at follow-up in participants with PTSD. Engagement in rumination was the only cognitive factor that predicted lower HR response. The data contrast with studies using trauma reminders to trigger memories, which have found greater physiological reactivity in PTSD. The authors' observations are consistent with models of PTSD that highlight differences between cued or stimulus-driven retrieval and intentional trauma recall, and with E. B. Foa and M. J. Kozak's (1986) hypothesis that full activation of trauma memories facilitates emotional processing.

Relevance:

30.00%

Publisher:

Abstract:

Two quantum-kinetic models of ultrafast electron transport in quantum wires are derived from the generalized electron-phonon Wigner equation. The various assumptions and approximations allowing one to find closed equations for the reduced electron Wigner function are discussed with an emphasis on their physical relevance. The models correspond to the Levinson and Barker-Ferry equations, now generalized to account for a space-dependent evolution. They are applied to study the quantum effects in the dynamics of an initial packet of highly nonequilibrium carriers, locally generated in the wire. The properties of the two model equations are compared and analyzed.
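For orientation, the object these models evolve is the reduced electron Wigner function; the standard single-particle Wigner transform of the density matrix ρ reads (textbook definition, not the paper's generalized electron-phonon form):

    f_W(x,p,t) = \frac{1}{\pi\hbar} \int_{-\infty}^{\infty} \rho(x+y,\, x-y,\, t)\, e^{-2ipy/\hbar}\, dy

The Levinson and Barker-Ferry equations then supplement the free streaming of f_W with non-Markovian electron-phonon collision terms, which is where the quantum effects in the packet dynamics discussed above originate.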

Relevance:

30.00%

Publisher:

Abstract:

An aquaplanet model is used to study the nature of the highly persistent low-frequency waves that have been observed in models forced by zonally symmetric boundary conditions. Using the Hayashi spectral analysis of the extratropical waves, the authors find that a quasi-stationary wave 5 belongs to a wave packet obeying a well-defined dispersion relation with eastward group velocity. The components of the dispersion relation with k ≥ 5 baroclinically convert eddy available potential energy into eddy kinetic energy, whereas those with k < 5 are baroclinically neutral. In agreement with Green's model of baroclinic instability, wave 5 is weakly unstable, and the inverse energy cascade, which had previously been proposed as the main forcing for this type of wave, acts only as a positive feedback on its predominantly baroclinic energetics. The quasi-stationary wave is reinforced by a phase lock to an analogous pattern in the tropical convection, which provides further amplification. It is also found that the Pedlosky bounds on the phase speed of unstable waves help explain the latitudinal structure of the energy conversion, which is shown to be more enhanced where the zonal westerly surface wind is weaker. The wave's energy is then trapped in the waveguide created by the upper-tropospheric jet stream. In agreement with Green's theory, as the equator-to-pole SST difference is reduced, the stationary marginally stable component shifts toward higher wavenumbers, while wave 5 becomes neutral and westward propagating. Some properties of the aquaplanet quasi-stationary waves are found to be in interesting agreement with a low-frequency wave observed by Salby during December-February in the Southern Hemisphere, so this perspective on low-frequency variability, apart from its value for basic geophysical fluid dynamics, may be of specific interest for studying the earth's atmosphere.
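The dispersion relation mentioned above can be compared with the classic barotropic Rossby wave relation on the midlatitude beta-plane (a standard reference form, not the fitted aquaplanet relation):

    \omega = \bar{u}\,k - \frac{\beta\,k}{k^2 + l^2}, \qquad
    c_{g,x} = \bar{u} + \frac{\beta\,(k^2 - l^2)}{(k^2 + l^2)^2}

so waves with sufficiently large zonal wavenumber k can have eastward group velocity even while their phase is quasi-stationary (ω ≈ 0), consistent with the eastward-propagating packet to which the quasi-stationary wave 5 belongs.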