898 results for Non-conventional models of career
Abstract:
Background: Pain markedly activates the hypothalamic-pituitary-adrenal (HPA) axis and increases plasma corticosterone release, interfering significantly with nociceptive behaviour as well as with the mechanism of action of analgesic drugs. Aims/Methods: In the present study, we monitored the time course of circulating corticosterone in two mouse strains (C57Bl/6 and Balb/C) under different pain models. In addition, the stress response was investigated following animal handling, intrathecal (i.t.) manipulation and habituation to environmental conditions commonly used in nociceptive experimental assays. We also examined the influence of within-cage order of testing on plasma corticosterone. Results: Subcutaneous injection of capsaicin precipitated a prompt stress response, whereas carrageenan and complete Freund's adjuvant induced an increase in corticosterone release around the third hour post-injection. Carrageenan, however, induced a more prolonged corticosterone increase in C57Bl/6 mice. In the partial sciatic nerve ligation neuropathic pain model, corticosterone increased only during the first days, whereas mechanical hypersensitivity persisted much longer. Animal handling also represents an important stressor, whereas the i.t. injection per se does not exacerbate the handling-induced stress response. Moreover, the order of testing animals from the same cage does not interfere with plasma corticosterone levels in the intrathecal procedure. Animal habituation to the testing apparatus also does not reduce the immediate corticosterone increase as compared with non-habituated mice. Conclusion: Our data indicate that HPA axis activation in acute and chronic pain models is time dependent and may be dissociated from evoked hyperalgesia. Therefore, HPA-axis activation represents an important variable to be considered when designing experimental assays of persistent pain as well as when interpreting the data.
Abstract:
Ocular inflammation is one of the leading causes of blindness and loss of vision. Human uveitis is a complex and heterogeneous group of diseases characterized by inflammation of intraocular tissues. The eye may be the only organ involved, or uveitis may be part of a systemic disease. A significant number of cases are of unknown etiology and are labeled idiopathic. Animal models have been developed for the study of the physiopathogenesis of autoimmune uveitis, owing to the difficulty of obtaining inflamed human eye tissues for experiments. Most of those models are induced by injection of specific photoreceptor proteins (e.g., S-antigen, interphotoreceptor retinoid-binding protein, rhodopsin, recoverin, phosducin). Non-retinal antigens, including melanin-associated proteins and myelin basic protein, are also good inducers of uveitis in animals. Understanding the basic mechanisms and pathogenesis of autoimmune ocular diseases is essential for the development of new treatment approaches and therapeutic agents. The present review describes the main experimental models of autoimmune ocular inflammatory diseases.
Testing phenomenological and theoretical models of dark matter density profiles with galaxy clusters
Abstract:
We use the stacked gravitational lensing mass profile of four high-mass (M ∼ 10^15 M_⊙) galaxy clusters around z ≈ 0.3 from Umetsu et al. to fit density profiles of phenomenological [Navarro–Frenk–White (NFW), Einasto, Sérsic, Stadel, Baltz–Marshall–Oguri (BMO) and Hernquist] and theoretical (non-singular isothermal sphere, DARKexp and Kang & He) models of the dark matter distribution. We account for large-scale structure effects, including a two-halo term in the analysis. We find that the BMO model provides the best fit to the data as measured by the reduced χ². It is followed by the Stadel profile, the generalized NFW profile with a free inner slope, and the Einasto profile. The NFW model provides the best fit if we neglect the two-halo term, in agreement with results from Umetsu et al. Among the theoretical profiles, the DARKexp model with a single form parameter has the best performance, very close to that of the BMO profile. This may indicate a connection between this theoretical model and the phenomenology of dark matter haloes, shedding light on the dynamical basis of empirical profiles which emerge from numerical simulations.
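The model-comparison step described above can be sketched in a few lines: fit a candidate density profile to binned data and score it with the reduced χ². The sketch below uses the NFW profile and synthetic data with assumed 5% errors; the radii, uncertainties and parameter values are illustrative placeholders, not the Umetsu et al. lensing measurements.

```python
# Sketch: fit an NFW density profile to synthetic halo data and score the
# fit with the reduced chi^2, as in the model comparison described above.
import numpy as np
from scipy.optimize import curve_fit

def nfw(r, rho_s, r_s):
    """NFW density profile: rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

rng = np.random.default_rng(0)
r = np.logspace(-1, 1, 20)          # radii (arbitrary units, placeholder)
truth = nfw(r, rho_s=1.0, r_s=2.0)  # "true" profile (assumed)
sigma = 0.05 * truth                # 5% errors (assumed)
data = truth + rng.normal(0.0, sigma)

popt, pcov = curve_fit(nfw, r, data, p0=[0.5, 1.0], sigma=sigma)
resid = (data - nfw(r, *popt)) / sigma
chi2_red = np.sum(resid ** 2) / (len(r) - len(popt))  # reduced chi^2
print(popt, chi2_red)
```

Repeating this for each candidate profile (Einasto, BMO, DARKexp, ...) and ranking by `chi2_red` reproduces the comparison logic, though the actual analysis fits the lensing mass profile with a two-halo term rather than a 3D density.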
Abstract:
The research activity carried out during the PhD course was focused on the development of mathematical models of some cognitive processes and their validation by means of data available in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes from different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two different projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was realized in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes across different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, so as to group together the characteristics of the same object (the binding problem) and to keep segregated the properties belonging to different objects simultaneously present (the segmentation problem).
Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential theories is the so-called “assembly coding”, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition, based on the “assembly coding” hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level “Gestalt rules” (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words.
Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater if the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS).
If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model was improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas that are devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under conditions of cortical activation and deactivation.
The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with cortex functional and cortex deactivated, and with a particular type of membrane receptors (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
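The core phenomena the model reproduces (multisensory enhancement and inverse effectiveness) can be illustrated with a far simpler toy than the full network: a single sigmoidal SC-like unit that sums an auditory and a visual drive. The gain and threshold values below are illustrative assumptions, not parameters from the thesis model.

```python
# Toy illustration of multisensory enhancement and inverse effectiveness:
# a sigmoidal unit responds supra-additively to paired weak inputs.
import math

def response(drive, theta=5.0, slope=1.0):
    """Sigmoidal activation of a model SC neuron (theta, slope assumed)."""
    return 1.0 / (1.0 + math.exp(-(drive - theta) / slope))

def enhancement(a, v):
    """Percent gain of the multisensory response over the best unisensory one."""
    multi = response(a + v)
    best_uni = max(response(a), response(v))
    return 100.0 * (multi - best_uni) / best_uni

enh_weak = enhancement(2.0, 2.0)    # weak paired stimuli
enh_strong = enhancement(4.0, 4.0)  # strong paired stimuli
print(enh_weak, enh_strong)
```

Because the weak inputs fall in the supralinear part of the sigmoid, their paired enhancement is larger than that of the strong pair, which is the inverse-effectiveness signature; the actual model adds topologically organized areas and interneuron-mediated competition on top of this basic nonlinearity.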
Abstract:
The topic of my Ph.D. thesis is the finite element modeling of coseismic deformation imaged by DInSAR and GPS data. I developed a method to calculate synthetic Green functions with finite element models (FEMs) and then use linear inversion methods to determine the slip distribution on the fault plane. The method is applied to the 2009 L’Aquila earthquake (Italy) and to the 2008 Wenchuan earthquake (China). I focus on the influence of the rheological features of the Earth's crust by implementing seismic tomographic data, and on the influence of topography by implementing Digital Elevation Model (DEM) layers in the FEMs. Results for the L’Aquila earthquake highlight the non-negligible influence of the medium structure: homogeneous and heterogeneous models show discrepancies of up to 20% in the fault slip distribution values. Furthermore, in the heterogeneous models a new area of slip appears above the hypocenter. Regarding the 2008 Wenchuan earthquake, the very steep topographic relief of the Longmen Shan Range is implemented in my FE model, and a large number of DEM layers covering East China is used to achieve complete coverage of the FE model. My objective was to explore the influence of topography on the retrieved coseismic slip distribution. The inversion results reveal significant differences between the flat and topographic models: the frequently adopted flat models are inappropriate for representing the Earth's surface topography, especially in the case of the 2008 Wenchuan earthquake.
Abstract:
A control-oriented model of a Dual Clutch Transmission (DCT) was developed for real-time Hardware In the Loop (HIL) applications, to support model-based development of the DCT controller. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers and gears, and simplified vehicle and internal combustion engine sub-models. As the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, possibly causing instability in a real-time simulation; the same challenge affects the servo valve dynamics, due to the very small masses of the moving elements. Therefore, the hydraulic circuit model was modified and simplified, without losing physical validity, in order to adapt it to the real-time simulation requirements. The results of offline simulations were compared with on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the TCU (Transmission Control Unit). Several tests have been performed: electrical failure tests on sensors and actuators, hydraulic and mechanical failure tests on hydraulic valves, clutches and synchronizers, and application tests covering all the main features of the control performed by the TCU. Being based on physical laws, the model simulates a plausible reaction of the system in every condition. The first intensive use of the HIL application led to the validation of the new safety strategies implemented in the TCU software. A test automation procedure was developed to permit the execution of a pattern of tests without user interaction; fully repeatable tests can be performed for non-regression verification, allowing new software releases to be tested in fully automatic mode.
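The stiffness problem described above can be illustrated on a linearized hydraulic pressure state, dp/dt = (β/V)(Q_in − k·p), where a large bulk modulus β makes the time constant far shorter than a real-time step. All numbers below are illustrative assumptions, not DCT data; the point is only that at a 1 ms step explicit Euler diverges while implicit (backward) Euler stays stable, which is one standard motivation for reformulating fast hydraulic dynamics for real-time use.

```python
# Explicit vs implicit Euler on a stiff linearized pressure dynamic.
beta_over_V = 1.0e6   # bulk modulus / chamber volume (assumed units, 1/s scale)
k = 1.0               # linearized leakage/outflow coefficient (assumed)
q_in = 2.0            # constant inflow (assumed)
h = 1.0e-3            # real-time step size: 1 ms
p_ss = q_in / k       # steady-state pressure of the linear model

p_exp = p_imp = 0.0
for _ in range(50):
    # explicit Euler: unstable because h * beta_over_V * k >> 2
    p_exp = p_exp + h * beta_over_V * (q_in - k * p_exp)
    # implicit (backward) Euler: unconditionally stable for this linear ODE
    p_imp = (p_imp + h * beta_over_V * q_in) / (1.0 + h * beta_over_V * k)
print(p_exp, p_imp)
```

The implicit update lands on the steady state almost immediately, while the explicit trajectory blows up; in practice the thesis model avoids the problem by simplifying the hydraulic circuit rather than by switching solvers, but the underlying trade-off is the same.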
Abstract:
The work presented in this thesis is based on the computation of dynamical models for dwarf spheroidal galaxies, studying the problem by means of distribution functions. We considered a class of distribution functions, "action-based distribution functions", which are functions of the action variables only. Fornax was described with an appropriate distribution function, and the problem of constructing dynamical models was addressed assuming either a dark matter halo with a constant density distribution in the inner regions or a cuspy halo. For simplicity, spherical symmetry was assumed and the gravitational potential of the stellar component was not computed explicitly (the stars are tracers in a fixed gravitational potential). Through a direct comparison with observables, such as the projected stellar density profile and the line-of-sight velocity dispersion profile, some models representative of the dynamics of Fornax were found. Models computed through action-based distribution functions allow anisotropy profiles to be determined self-consistently. All the computed models are characterized by strongly tangentially anisotropic profiles. The dark matter estimates of these models were then compared with the most common mass estimators used in the literature. The ratio between the total mass of the system (stellar component and dark matter) and the stellar component of Fornax was also estimated, within 1600 pc and within 3 kpc. As a preliminary exploration, this work also presents some examples of spherical two-component models in which the gravitational field is determined by the self-gravity of the stars and by an external potential representing the dark matter halo.
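One of the common literature mass estimators that such model-based masses are typically compared against is the Wolf et al. (2010) relation, M_half ≈ 4 σ_los² R_e / G. The dispersion and half-light radius below are round, hypothetical values for a Fornax-like dwarf, chosen only to show the arithmetic; they are not the measurements used in the work above.

```python
# Sketch: Wolf et al. (2010)-style half-light mass estimator for a dwarf
# spheroidal, with placeholder (assumed) input values.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg
PC = 3.086e16           # parsec, m

sigma_los = 10.0e3      # line-of-sight velocity dispersion, 10 km/s (assumed)
r_e = 700.0 * PC        # projected half-light radius, 700 pc (assumed)

m_half = 4.0 * sigma_los**2 * r_e / G   # mass within roughly the half-light radius, kg
print(m_half / M_SUN)                   # of order 10^7-10^8 solar masses here
```

Comparing such a single-number estimator with the mass profile of a full action-based distribution-function model is precisely the kind of check described in the abstract.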
Abstract:
To enhance understanding of the metabolic indicators of type 2 diabetes mellitus (T2DM) disease pathogenesis and progression, the urinary metabolomes of well characterized rhesus macaques (normal or spontaneously and naturally diabetic) were examined. High-resolution ultra-performance liquid chromatography coupled with the accurate mass determination of time-of-flight mass spectrometry was used to analyze spot urine samples from normal (n = 10) and T2DM (n = 11) male monkeys. The machine-learning algorithm random forests classified urine samples as either from normal or T2DM monkeys. The metabolites important for developing the classifier were further examined for their biological significance. Random forests models had a misclassification error of less than 5%. Metabolites were identified based on accurate masses (<10 ppm) and confirmed by tandem mass spectrometry of authentic compounds. Urinary compounds significantly increased (p < 0.05) in the T2DM when compared with the normal group included glycine betaine (9-fold), citric acid (2.8-fold), kynurenic acid (1.8-fold), glucose (68-fold), and pipecolic acid (6.5-fold). When compared with the conventional definition of T2DM, the metabolites were also useful in defining the T2DM condition, and the urinary elevations in glycine betaine and pipecolic acid (as well as proline) indicated defective re-absorption in the kidney proximal tubules by SLC6A20, a Na(+)-dependent transporter. The mRNA levels of SLC6A20 were significantly reduced in the kidneys of monkeys with T2DM. These observations were validated in the db/db mouse model of T2DM. This study provides convincing evidence of the power of metabolomics for identifying functional changes at many levels in the omics pipeline.
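The classification step described above can be sketched with scikit-learn: a random-forests model separating two groups from metabolite intensity profiles, with the out-of-bag error standing in for the reported misclassification rate. The data here are synthetic stand-ins for the urinary metabolomes, with a few genuinely discriminating features built in; group sizes and feature counts are assumptions.

```python
# Sketch: random-forests classification of (synthetic) metabolomic profiles
# and ranking of the metabolites driving the classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_per_group, n_metabolites = 40, 50
normal = rng.normal(0.0, 1.0, (n_per_group, n_metabolites))
t2dm = rng.normal(0.0, 1.0, (n_per_group, n_metabolites))
t2dm[:, :5] += 2.0                      # 5 metabolites elevated in the "T2DM" group

X = np.vstack([normal, t2dm])
y = np.array([0] * n_per_group + [1] * n_per_group)

clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(X, y)
oob_error = 1.0 - clf.oob_score_                       # out-of-bag misclassification
top = np.argsort(clf.feature_importances_)[::-1][:5]   # most important "metabolites"
print(oob_error, sorted(int(i) for i in top))
```

The feature-importance ranking is the analogue of identifying glycine betaine, pipecolic acid and the other elevated metabolites as the compounds driving the classifier.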
Abstract:
Cold-formed steel (CFS) combined with wood sheathing, such as oriented strand board (OSB), forms shear walls that can provide lateral resistance to seismic forces. The ability to accurately predict building deformations in damaged states under seismic excitations is essential for modern performance-based seismic design. However, few static or dynamic tests have been conducted on the non-linear behavior of CFS shear walls. Thus, the purpose of this research work is to provide and demonstrate a fastener-based computational model of CFS shear walls that incorporates essential nonlinearities and may eventually lead to improvement of the current seismic design requirements. The approach is based on the understanding that the complex interaction of the fasteners with the sheathing is an important factor in the non-linear behavior of the shear wall. The computational model consists of beam-column elements for the CFS framing and a rigid diaphragm for the sheathing. The framing and sheathing are connected with non-linear zero-length fastener elements to capture the OSB sheathing damage surrounding the fastener area. Employing computational programs such as OpenSees and MATLAB, 4 ft. x 9 ft., 8 ft. x 9 ft. and 12 ft. x 9 ft. shear wall models are created, and monotonic lateral forces are applied to the computer models. The output data are then compared with the available results of physical testing. The results indicate that the OpenSees model can accurately capture the initial stiffness, strength and non-linear behavior of the shear walls.
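The fastener-based idea can be sketched in miniature: assemble the wall's monotonic lateral response from nonlinear fastener load-slip curves. A simple exponential (Foschi-type) backbone is assumed for each sheathing fastener, and a deliberately crude kinematic rule maps wall drift to fastener slip; the parameter values are illustrative, not calibrated to CFS/OSB test data or to the OpenSees model above.

```python
# Minimal sketch: wall lateral force assembled from nonlinear fastener springs.
import math

def fastener_force(slip, p0=1.5, k0=4.0):
    """Foschi-type backbone: F = P0 * (1 - exp(-K0 * slip / P0)).
    p0 (capacity) and k0 (initial stiffness) are assumed values."""
    return p0 * (1.0 - math.exp(-k0 * slip / p0))

def wall_force(drift_pct, n_fasteners=60, h=2.74):
    """Lateral force of a wall whose drift imposes equal slip on every
    fastener -- a crude kinematic assumption for a rigid sheathing panel."""
    slip = drift_pct * h / 100.0      # toy kinematics: slip grows with drift
    return n_fasteners * fastener_force(slip)

forces = [wall_force(d) for d in (0.5, 1.0, 2.0, 4.0)]  # drifts in percent
print(forces)
```

Even this toy reproduces the qualitative behavior the research targets: the response stiffens linearly at small drift and softens as fasteners approach their capacity, which is why fastener-level nonlinearity controls the wall-level pushover curve.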
Abstract:
BACKGROUND: Cyclic recruitment during mechanical ventilation contributes to ventilator-associated lung injury. Two different pathomechanisms in acute respiratory distress syndrome (ARDS) are currently discussed: alveolar collapse vs persistent flooding of small airways and alveoli. We compare two different ARDS animal models by computed tomography (CT) to describe different recruitment and derecruitment mechanisms at different airway pressures: (i) lavage-ARDS, favouring alveolar collapse by surfactant depletion; and (ii) oleic acid ARDS, favouring alveolar flooding by capillary leakage. METHODS: In 12 pigs [25 (1) kg], ARDS was randomly induced, either by saline lung lavage or by oleic acid (OA) injection; 3 animals served as controls. A respiratory breath-hold manoeuvre without spontaneous breathing at different continuous positive airway pressure (CPAP) levels was applied in random order (CPAP levels of 5, 10, 15, 30, 35 and 50 cm H(2)O), and spiral-CT scans of the total lung were acquired at each CPAP level (slice thickness = 1 mm). In each spiral-CT the volumes of total lung parenchyma, tissue, gas, and non-aerated, well-aerated, poorly aerated and over-aerated lung were calculated. RESULTS: In both ARDS models non-aerated lung volume decreased significantly from CPAP 5 to CPAP 50 [oleic acid lung injury (OAI): 346.9 (80.1) to 96.4 (48.8) ml, P<0.001; lavage-ARDS: 245 (17.6) to 42.7 (4.8) ml, P<0.001]. In lavage-ARDS poorly aerated lung volume decreased at higher CPAP levels [232 (45.2) ml at CPAP 10 to 84 (19.4) ml at CPAP 50, P<0.001], whereas in OAI poorly aerated lung volume did not vary at different airway pressures. CONCLUSIONS: In both ARDS models well-aerated and non-aerated lung volume respond to different CPAP levels in a comparable fashion; thus, cyclical alveolar collapse seems to be part of the derecruitment process also in OA-ARDS. In OA-ARDS, the increase in poorly aerated lung volume reflects the specific initial lesion, that is, capillary leakage with interstitial and alveolar oedema.
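The CT quantification underlying these volumes works by binning voxels into aeration classes by their Hounsfield-unit (HU) values. The thresholds below follow a convention commonly used in the quantitative-CT literature (the paper may use slightly different cut-offs), and the HU values are synthetic placeholders.

```python
# Sketch: classify lung voxels into aeration compartments by HU ranges.
import numpy as np

def classify_aeration(hu):
    """Return voxel counts per aeration class for an array of HU values.
    Thresholds are a common literature convention (assumed here)."""
    return {
        "over": int(np.sum((hu >= -1000) & (hu < -900))),   # over-aerated
        "well": int(np.sum((hu >= -900) & (hu < -500))),    # well aerated
        "poor": int(np.sum((hu >= -500) & (hu < -100))),    # poorly aerated
        "non":  int(np.sum((hu >= -100) & (hu <= 100))),    # non-aerated
    }

hu = np.array([-950, -850, -700, -450, -300, -50, 40])      # synthetic voxels
counts = classify_aeration(hu)
print(counts)
```

Multiplying each class's voxel count by the voxel volume gives the compartment volumes (e.g. non-aerated ml) that are compared across CPAP levels above.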
Abstract:
Fish behaviourists are increasingly turning to non-invasive measurement of steroid hormones in holding water, as opposed to blood plasma. When some of us met at a workshop in Faro, Portugal, in September, 2007, we realised that there were still many issues concerning the application of this procedure that needed resolution, including: Why do we measure release rates rather than just concentrations of steroids in the water? How does one interpret steroid release rates when dealing with fish of different sizes? What are the merits of measuring conjugated as well as free steroids in water? In the ‘static’ sampling procedure, where fish are placed in a separate container for a short period of time, does this affect steroid release—and, if so, how can it be minimised? After exposing a fish to a behavioural stimulus, when is the optimal time to sample? What is the minimum amount of validation when applying the procedure to a new species? The purpose of this review is to attempt to answer these questions and, in doing so, to emphasize that application of the non-invasive procedure requires more planning and validation than conventional plasma sampling. However, we consider that the rewards justify the extra effort.
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. 
Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account, when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
Abstract:
Nuclear morphometry (NM) uses image analysis to measure features of the cell nucleus, which are classified as bulk properties, shape or form, and DNA distribution. Studies have used these measurements as diagnostic and prognostic indicators of disease, with inconclusive results. The distributional properties of these variables have not been systematically investigated, although much medical data exhibit non-normal distributions. Measurements are made on several hundred cells per patient, so summary measures reflecting the underlying distribution are needed. Distributional characteristics of 34 NM variables from prostate cancer cells were investigated using graphical and analytical techniques. Cells per sample ranged from 52 to 458. A small sample of patients with benign prostatic hyperplasia (BPH), representing non-cancer cells, was used for general comparison with the cancer cells. Data transformations such as log, square root and 1/x did not yield normality as measured by the Shapiro-Wilk test for normality. A modulus transformation, used for distributions having abnormal kurtosis values, also did not produce normality. Kernel density histograms of the 34 variables exhibited non-normality, and 18 variables also exhibited bimodality. A bimodality coefficient was calculated, and 3 variables (DNA concentration, shape and elongation) showed the strongest evidence of bimodality and were studied further. Two analytical approaches were used to obtain a summary measure for each variable for each patient: cluster analysis to determine significant clusters, and a mixture model analysis using a two-component Gaussian model with equal variances. The mixture component parameters were used to bootstrap the log-likelihood ratio to determine the significant number of components, 1 or 2. These summary measures were used as predictors of disease severity in several proportional odds logistic regression models.
The disease severity scale had 5 levels and was constructed from 3 components: extracapsular penetration (ECP), lymph node involvement (LN+) and seminal vesicle involvement (SV+), which represent surrogate measures of prognosis. The summary measures were not strong predictors of disease severity. There was some indication from the mixture model results of changes in the mean levels and proportions of the components at the lower severity levels.
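The mixture-model step described above (a two-component equal-variance Gaussian fit, with a bootstrap of the log-likelihood ratio to choose 1 vs 2 components) can be sketched as follows. The data are synthetic, clearly bimodal stand-ins for a per-patient NM variable, and the bootstrap is abbreviated to a handful of replicates.

```python
# Sketch: 1- vs 2-component equal-variance Gaussian mixture, compared by a
# (short) parametric bootstrap of the log-likelihood ratio statistic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Bimodal "DNA concentration"-like variable for one patient (synthetic)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 150)]).reshape(-1, 1)

def loglik(data, k):
    # covariance_type="tied" enforces the equal-variance assumption
    gm = GaussianMixture(n_components=k, covariance_type="tied", random_state=0)
    gm.fit(data)
    return gm.score(data) * len(data), gm   # total log-likelihood, fitted model

ll1, gm1 = loglik(x, 1)
ll2, gm2 = loglik(x, 2)
lr_obs = 2.0 * (ll2 - ll1)                  # observed LR statistic

# Parametric bootstrap under the 1-component null (few replicates for brevity)
lr_null = []
for _ in range(20):
    xb = rng.normal(gm1.means_[0, 0], np.sqrt(gm1.covariances_[0, 0]),
                    size=len(x)).reshape(-1, 1)
    lr_null.append(2.0 * (loglik(xb, 2)[0] - loglik(xb, 1)[0]))
p_value = np.mean([lr >= lr_obs for lr in lr_null])
print(lr_obs, p_value)
```

The bootstrap is needed because the usual chi-square null distribution of the LR statistic does not apply when testing the number of mixture components; a real analysis would use far more replicates than 20.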
Abstract:
Within the context of exoplanetary atmospheres, we present a comprehensive linear analysis of forced, damped, magnetized shallow water systems, exploring the effects of dimensionality, geometry (Cartesian, pseudo-spherical, and spherical), rotation, magnetic tension, and hydrodynamic and magnetic sources of friction. Across a broad range of conditions, we find that the key governing equation for atmospheres is identical to that of the quantum harmonic oscillator, even when forcing (stellar irradiation), sources of friction (molecular viscosity, Rayleigh drag, and magnetic drag), and magnetic tension are included. The global atmospheric structure is largely controlled by a single key parameter that involves the Rossby and Prandtl numbers. This near-universality breaks down when either molecular viscosity or magnetic drag acts non-uniformly across latitude or a poloidal magnetic field is present, suggesting that these effects will introduce qualitative changes to the familiar chevron-shaped feature witnessed in simulations of atmospheric circulation. We also find that hydrodynamic and magnetic sources of friction have dissimilar phase signatures and affect the flow in fundamentally different ways, implying that using Rayleigh drag to mimic magnetic drag is inaccurate. We exhaustively lay down the theoretical formalism (dispersion relations, governing equations, and time-dependent wave solutions) for a broad suite of models. In all situations, we derive the steady state of an atmosphere, which is relevant to interpreting infrared phase and eclipse maps of exoplanetary atmospheres. We elucidate a pinching effect that confines the atmospheric structure to be near the equator. Our suite of analytical models may be used to develop physical intuition and to serve as a reference point for three-dimensional magnetohydrodynamic simulations of atmospheric circulation.
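The atmosphere-oscillator correspondence can be made concrete in the simplest, hydrodynamic limit. For a shallow-water system on an equatorial beta-plane (a standard Matsuno-Gill-type setup; the reduction below is a schematic illustration in non-dimensional units, not the full magnetized derivation of the work), the meridional structure of the perturbation obeys

```latex
\frac{d^{2}\tilde{v}}{dy^{2}} + \left(\lambda - y^{2}\right)\tilde{v} = 0 ,
```

which is the time-independent Schrödinger equation of the quantum harmonic oscillator. Its bounded solutions are the Hermite functions, \(\tilde{v}_n(y) \propto H_n(y)\, e^{-y^{2}/2}\), with the eigenvalue quantized as \(\lambda = 2n + 1\); in the full problem, forcing, friction and magnetic tension enter through the definition of \(\lambda\) and the non-dimensionalization of \(y\). The Gaussian envelope \(e^{-y^{2}/2}\) is also the origin of the equatorial pinching effect mentioned above.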
Abstract:
We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat and the properties of scattering both in optical and infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
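As a schematic illustration of the moment method described above (sign conventions and coefficient definitions vary between treatments, so this generic pair is an assumption rather than the specific form adopted in the work), the two-stream equations couple the outgoing and incoming fluxes \(F_\uparrow\) and \(F_\downarrow\):

```latex
\frac{dF_{\uparrow}}{d\tau} = \gamma_{a} F_{\uparrow} - \gamma_{s} F_{\downarrow} - \gamma_{B}\, \pi B ,
\qquad
\frac{dF_{\downarrow}}{d\tau} = \gamma_{s} F_{\uparrow} - \gamma_{a} F_{\downarrow} + \gamma_{B}\, \pi B ,
```

where \(B\) is the Planck function and the coupling coefficients \(\gamma_{a}, \gamma_{s}, \gamma_{B}\) depend on the single-scattering albedo, the scattering asymmetry, and the chosen closure (the Eddington coefficients). The choice of closure is exactly where the hemispheric and Eddington treatments discussed above differ, and energy conservation constrains the admissible combinations of the \(\gamma\) coefficients.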