961 results for Response models
Abstract:
Potential consequences of climate change on crop production can be studied using mechanistic crop simulation models. While a broad variety of maize simulation models exist, it is not known whether different models diverge on grain yield responses to changes in climatic factors, or whether they agree in their general trends related to phenology, growth, and yield. With the goal of analyzing the sensitivity of simulated yields to changes in temperature and atmospheric carbon dioxide concentration [CO2], we present the largest maize crop model intercomparison to date, including 23 different models. These models were evaluated for four locations representing a wide range of maize production conditions in the world: Lusignan (France), Ames (USA), Rio Verde (Brazil) and Morogoro (Tanzania). While individual models differed considerably in absolute yield simulation at the four sites, an ensemble of a minimum number of models was able to simulate absolute yields accurately at the four sites even with limited calibration data, suggesting that using an ensemble of models has merit. Increasing temperature had a strong negative influence on modeled yield, with a response of roughly -0.5 Mg ha⁻¹ per °C. Doubling [CO2] from 360 to 720 µmol mol⁻¹ increased grain yield by 7.5% on average across models and sites. Temperature would therefore be the main factor altering maize yields at the end of this century. Furthermore, there was a large uncertainty in the yield response to [CO2] among models. Model responses to temperature and [CO2] did not differ between simulations run with low and high levels of calibration information.
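A minimal sketch of how the ensemble-mean sensitivities quoted above translate into a first-order yield estimate. The linear temperature term, the logarithmic CO2 factor, and their additive/multiplicative combination are illustrative assumptions, not the paper's method:

```python
# Illustrative first-order estimate using the ensemble-mean sensitivities
# reported in the abstract: about -0.5 Mg/ha per degree C of warming and
# +7.5% grain yield per doubling of [CO2] from 360 to 720 umol/mol.
# The additive/multiplicative combination below is an assumption, not the
# paper's method.
import math

def yield_response(baseline_yield_mg_ha, delta_t_c, co2_ppm,
                   temp_slope=-0.5, co2_gain_per_doubling=0.075,
                   co2_ref=360.0):
    """Rough first-order maize yield estimate under warming and elevated CO2."""
    temp_effect = temp_slope * delta_t_c                               # Mg/ha
    co2_factor = 1.0 + co2_gain_per_doubling * math.log2(co2_ppm / co2_ref)
    return (baseline_yield_mg_ha + temp_effect) * co2_factor

# Example: a 10 Mg/ha baseline, +3 C warming, CO2 at 550 ppm.
print(round(yield_response(10.0, 3.0, 550.0), 2))  # ~8.9 Mg/ha
```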
Abstract:
Civil buildings are not specifically designed to support blast loads, but it is important to take these potential scenarios into account because of their catastrophic effects on people and structures. A practical way to consider explosions on reinforced concrete structures is therefore necessary. With this objective, we propose a methodology to evaluate blast loads on large concrete buildings, using the LS-DYNA code for the calculations, with Lagrangian finite elements and explicit time integration. The methodology has three steps. First, individual structural elements of the building, such as columns and slabs, are studied using 3D continuum element models subjected to blast loads. In these models reinforced concrete is represented with high precision, using advanced material models such as the CSCM_CONCRETE model, with segregated rebars constrained within the continuum mesh. However, this approach cannot be used for large structures because of its excessive computational cost. Second, models based on structural elements (shells and beams) are developed. In these models concrete is represented using the CONCRETE_EC2 model and segregated rebars with an offset formulation; they are calibrated against the continuum element models from the first step to reproduce the same structural response: displacement, velocity, acceleration, damage and erosion. Third, the models based on structural elements are used to build large models of complete buildings, which are used to study the global response of buildings subjected to blast loads and progressive collapse. This article presents the techniques needed to properly calibrate the models based on shell and beam elements so that they provide results of sufficient accuracy at moderate computational cost.
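The calibration step described above compares response histories (displacement, velocity, acceleration) of the shell/beam models against the reference continuum models. A minimal sketch of one way such a comparison could be quantified; the arrays, the helper function and the 10% tolerance are illustrative assumptions, not the article's procedure:

```python
# Minimal sketch (not from the article): quantify how closely a calibrated
# shell/beam model reproduces the response history of the reference continuum
# model, here via a normalised RMSE on displacement.
import numpy as np

def normalised_rmse(reference, candidate):
    """Root-mean-square error of the candidate history, normalised by the
    peak absolute value of the reference history."""
    reference = np.asarray(reference, dtype=float)
    candidate = np.asarray(candidate, dtype=float)
    rmse = np.sqrt(np.mean((candidate - reference) ** 2))
    return rmse / np.max(np.abs(reference))

# Example: mid-span displacement histories (metres) sampled at the same times.
continuum_disp = np.array([0.0, 0.012, 0.031, 0.045, 0.040, 0.028])
shell_beam_disp = np.array([0.0, 0.011, 0.029, 0.047, 0.042, 0.027])

error = normalised_rmse(continuum_disp, shell_beam_disp)
print(f"normalised RMSE: {error:.3f}")  # e.g. accept calibration below ~0.10
```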
Abstract:
Video quality measurement remains necessary to define the criteria that characterize a signal meeting the viewing requirements imposed by the user. New technologies, such as stereoscopic 3D video or formats beyond high definition, impose new criteria that must be analyzed to achieve the highest possible user satisfaction. Among the problems detected during the development of this doctoral thesis, phenomena were identified that affect different phases of the audiovisual production chain and a variety of content types. First, the content generation process must be controlled through parameters that prevent visual discomfort and, consequently, visual fatigue, especially for stereoscopic 3D content, both animated and live-action. On the other hand, quality measurement in the video compression stage uses metrics that are sometimes not adapted to the user's perception. The use of psychovisual models and visual attention maps would make it possible to weight image areas so that greater importance is given to the pixels the user is most likely to focus on. These two blocks are related through the definition of the term saliency. Saliency is the capacity of the visual system to characterize a viewed image by weighting the areas that are most attractive to the human eye. In the generation of stereoscopic content, saliency refers mainly to the depth simulated by the optical illusion, measured as the distance from the virtual object to the human eye. In two-dimensional video, however, saliency is not based on depth but on other elements, such as motion, level of detail, pixel position or the presence of faces, which are the basic factors that make up the visual attention model developed here. In order to detect the characteristics of a stereoscopic video sequence most likely to generate visual discomfort, the extensive literature on this topic was reviewed and preliminary subjective tests with users were carried out. This led to the conclusion that discomfort occurred when there was an abrupt change in the distribution of simulated depths in the image, in addition to other degradations such as the so-called "window violation". Through new subjective tests focused on analyzing these effects with different depth distributions, the parameters defining such images were refined. The test results show that abrupt changes occur in scenes with high motion and large negative disparities, which interfere with the accommodation and vergence processes of the human eye and increase the time the crystalline lens needs to focus. To improve quality metrics through models adapted to the human visual system, further subjective tests were carried out to determine the importance of each factor in masking a given degradation. The results show a slight improvement when weighting and visual attention masks are applied, bringing the objective quality scores closer to the response of the human eye.
ABSTRACT Video quality assessment is still a necessary tool for defining the criteria that characterize a signal meeting the viewing requirements imposed by the final user. New technologies, such as 3D stereoscopic video and formats of HD and beyond, require new analyses of video features in order to obtain the highest user satisfaction. Among the problems detected during this doctoral thesis, it was determined that several phenomena affect different phases of the audiovisual production chain, as well as different types of content. First, the content generation process should be sufficiently controlled through parameters that avoid visual discomfort in the observer's eye and, consequently, visual fatigue; this is especially necessary for stereoscopic 3D sequences, with both animation and live-action content. On the other hand, video quality assessment related to compression processes should be improved, because some objective metrics are not adapted to the user's perception. The use of psychovisual models and visual attention maps allows image regions of interest to be weighted, giving more importance to the areas on which the user will most probably focus. These two fields of work are related through the definition of the term saliency. Saliency is the capacity of the human visual system to characterize an image, highlighting the areas that are most attractive to the human eye. Saliency in the generation of 3DTV content refers mainly to the depth simulated by the optical illusion, i.e. the distance from the virtual object to the human eye. In two-dimensional video, on the other hand, saliency is not based on virtual depth but on other features, such as motion, level of detail, pixel position in the frame or face detection, which are the basic features of the visual attention model developed here, as demonstrated with tests. The extensive literature on visual comfort assessment was reviewed, and preliminary subjective assessments with users were performed, in order to detect the features most likely to cause discomfort. With this methodology, the conclusions drawn confirmed that one common source of visual discomfort was an abrupt change of disparity in video transitions, apart from other degradations such as window violation. Further subjective tests were performed to quantify the effect of different disparity distributions across sequences. The results confirmed that abrupt changes in environments with negative parallax produce accommodation-vergence mismatches, linked to the increased time the human crystalline lens needs to focus on virtual objects. Finally, to develop metrics that adapt to the human visual system, additional subjective tests were carried out to determine the importance of each factor in masking a given distortion. The results showed a slight improvement after applying visual attention weighting to the objective metrics; weighting pixels in this way brings the objective quality scores closer to the human eye's response.
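A minimal sketch of the kind of saliency-weighted objective metric described above. This is illustrative only, not the thesis implementation; the dummy saliency map stands in for the combination of motion, detail, position and face cues:

```python
# Minimal sketch (not the thesis code): weight a per-pixel error map with a
# saliency map so that distortions in regions the viewer is likely to fixate
# contribute more to the objective score.
import numpy as np

def saliency_weighted_psnr(reference, distorted, saliency, max_val=255.0):
    """PSNR where the squared error is averaged using saliency weights."""
    err = (reference.astype(float) - distorted.astype(float)) ** 2
    weights = saliency / saliency.sum()            # normalise weights to sum to 1
    weighted_mse = float((err * weights).sum())
    return 10.0 * np.log10(max_val ** 2 / weighted_mse)

# Toy example with an 8x8 frame and a centre-weighted (fake) saliency map.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (8, 8))
dist = np.clip(ref + rng.normal(0, 5, (8, 8)), 0, 255)
yy, xx = np.mgrid[0:8, 0:8]
sal = np.exp(-((yy - 3.5) ** 2 + (xx - 3.5) ** 2) / 8.0)
print(round(saliency_weighted_psnr(ref, dist, sal), 2))
```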
Abstract:
In this thesis we develop a strategy for the numerical simulation of the mechanical behaviour of the human aorta using non-linear finite element models. We pay special attention to three key aspects of soft tissue biomechanics. First, the analysis of the characteristic anisotropic behaviour of soft tissues due to the collagen fibre families. Second, the analysis of the softening exhibited by blood vessels when they are loaded beyond the physiological range. And finally, the inclusion of residual stresses in the simulations in accordance with the opening-angle experiment. Damage is analysed using two different approaches. The first is a local damage formulation with regularisation, which has two main ingredients: it uses the principles of smeared crack theory to guarantee mesh-objective results, and it uses the two-dimensional Hodge-Petruska model to describe the mesoscopic behaviour of the fibrils, from which the macroscopic properties of the collagen fibres are obtained through a homogenisation process. The second is a non-local damage model enriched with the gradient of the damage variable, built by enhancing the energy function with a term containing the material gradient of the non-local damage variable. The inclusion of this term ensures an implicit regularisation of the finite element implementation, so that the simulation results do not depend on the mesh. The applicability of this latter model to biomechanical problems is studied through the simulation of a typical surgical procedure known as balloon angioplasty. In the present thesis we develop a framework for the numerical simulation of the mechanical behaviour of the human aorta using non-linear finite element models. Special attention is paid to three key aspects related to the biomechanics of soft tissues. First, the modelling of the characteristic anisotropic behaviour of soft tissue due to the collagen fibre families. Secondly, the modelling of damage-related softening that blood vessels exhibit when subjected to loads beyond their physiological range. And finally, the inclusion of the residual stresses in the simulations in accordance with the opening-angle experiment. The modelling of damage is addressed with two major and different approaches. In the first approach a continuum local damage formulation with regularisation is presented. This formulation has two principal ingredients. On the one hand, it makes use of the principles of the smeared crack theory to avoid the mesh size dependence of the structural response in softening. On the other hand, it uses a Hodge-Petruska bidimensional model to describe the fibrils as staggered arrays of tropocollagen molecules, and from this mesoscopic model the macroscopic material properties of the collagen fibres are obtained using a homogenisation process. In the second approach a non-local gradient-enhanced damage formulation is introduced. The model is built around the enhancement of the free energy function by means of a term that contains the referential gradient of the non-local damage variable.
The inclusion of this term ensures an implicit regularisation of the finite element implementation, yielding mesh-objective simulation results. The applicability of the latter model to biomechanically related problems is studied by means of the simulation of a typical surgical procedure, namely balloon angioplasty.
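A sketch of the kind of gradient enhancement described in the second approach. The exact functional used in the thesis is not given in the abstract, so the form below is illustrative only: a local free energy augmented by a term in the referential gradient of the non-local damage variable, with a regularisation parameter setting the internal length:

```latex
% Illustrative form only (not the thesis' specific energy function):
% \Psi_0 is the local damaged strain energy, \bar{d} the non-local damage
% variable and c_d a gradient regularisation parameter.
\Psi\bigl(\mathbf{C}, \bar{d}, \nabla_{\!X}\bar{d}\bigr)
  = \Psi_0\bigl(\mathbf{C}, \bar{d}\bigr)
  + \frac{c_d}{2}\,\bigl\lVert \nabla_{\!X}\bar{d} \bigr\rVert^{2}
```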
Abstract:
This study explored the utility of the impact response surface (IRS) approach for investigating model ensemble crop yield responses under a large range of changes in climate. IRSs of spring and winter wheat (Triticum aestivum) yields were constructed from a 26-member ensemble of process-based crop simulation models for sites in Finland, Germany and Spain across a latitudinal transect. The sensitivity of modelled yield to systematic increments of changes in temperature (-2 to +9°C) and precipitation (-50 to +50%) was tested by modifying values of baseline (1981 to 2010) daily weather, with the CO2 concentration fixed at 360 ppm. The IRS approach offers an effective method of portraying model behaviour under changing climate as well as advantages for analysing, comparing and presenting results from multi-model ensemble simulations. Though individual model behaviour occasionally departed markedly from the average, ensemble median responses across sites and crop varieties indicated that yields decline with higher temperatures and reduced precipitation, and increase with higher precipitation. Across the uncertainty ranges defined for the IRSs, yields were more sensitive to temperature than to precipitation changes at the Finnish site, while sensitivities were mixed at the German and Spanish sites. Precipitation effects diminished under larger temperature changes. While the bivariate and multi-model characteristics of the analysis impose some limits on interpretation, the IRS approach nonetheless provides additional insights into sensitivities to inter-model and inter-annual variability. Taken together, these sensitivities may help to pinpoint processes such as heat stress, vernalisation or drought effects that require refinement in future model development.
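The IRS is built by scanning yield over a regular grid of temperature and precipitation perturbations applied to baseline weather. A minimal sketch of that construction follows; `run_crop_model` is a hypothetical placeholder, not any of the ensemble models:

```python
# Minimal sketch of an impact response surface (IRS): yield is evaluated on a
# grid of temperature offsets (-2 to +9 C) and precipitation scalings
# (-50% to +50%), as described in the abstract.
import numpy as np

def run_crop_model(delta_t, precip_factor):
    """Placeholder yield response (t/ha); a real process-based model goes here."""
    return max(0.0, 6.0 - 0.4 * delta_t + 2.0 * (precip_factor - 1.0))

temperature_offsets = np.arange(-2.0, 9.0 + 0.5, 1.0)   # degrees C
precip_factors = np.arange(0.5, 1.5 + 0.05, 0.1)        # multiplicative scaling

irs = np.array([[run_crop_model(dt, pf) for pf in precip_factors]
                for dt in temperature_offsets])

# The 2-D array `irs` can then be contoured (temperature vs precipitation);
# the ensemble median IRS would be the cell-wise median across the 26 models.
print(irs.shape)  # (number of temperature levels, number of precipitation levels)
```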
Abstract:
In coming decades, global climate changes are expected to produce large shifts in vegetation distributions at unprecedented rates. These shifts are expected to be most rapid and extreme at ecotones, the boundaries between ecosystems, particularly those in semiarid landscapes. However, current models do not adequately provide for such rapid effects—particularly those caused by mortality—largely because of the lack of data from field studies. Here we report the most rapid landscape-scale shift of a woody ecotone ever documented: in northern New Mexico in the 1950s, the ecotone between semiarid ponderosa pine forest and piñon–juniper woodland shifted extensively (2 km or more) and rapidly (<5 years) through mortality of ponderosa pines in response to a severe drought. This shift has persisted for 40 years. Forest patches within the shift zone became much more fragmented, and soil erosion greatly accelerated. The rapidity and the complex dynamics of the persistent shift point to the need to represent more accurately these dynamics, especially the mortality factor, in assessments of the effects of climate change.
Abstract:
Inactivation of glycogen synthase kinase-3β (GSK3β) by S9 phosphorylation is implicated in mechanisms of neuronal survival. Phosphorylation of a distinct site, Y216, on GSK3β is necessary for its activity; however, whether this site can be regulated in cells is unknown. Therefore we examined the regulation of Y216 phosphorylation on GSK3β in models of neurodegeneration. Nerve growth factor withdrawal from differentiated PC12 cells and staurosporine treatment of SH-SY5Y cells led to increased phosphorylation at Y216, GSK3β activity, and cell death. Lithium and insulin, agents that lead to inhibition of GSK3β and adenoviral-mediated transduction of dominant negative GSK3β constructs, prevented cell death by the proapoptotic stimuli. Inhibitors induced S9 phosphorylation and inactivation of GSK3β but did not affect Y216 phosphorylation, suggesting that S9 phosphorylation is sufficient to override GSK3β activation by Y216 phosphorylation. Under the conditions examined, increased Y216 phosphorylation on GSK3β was not an autophosphorylation response. In resting cells, Y216 phosphorylation was restricted to GSK3β present at focal adhesion sites. However, after staurosporine, a dramatic alteration in the immunolocalization pattern was observed, and Y216-phosphorylated GSK3β selectively increased within the nucleus. In rats, Y216 phosphorylation was increased in degenerating cortical neurons induced by ischemia. Taken together, these results suggest that Y216 phosphorylation of GSK3β represents an important mechanism by which cellular insults can lead to neuronal death.
Abstract:
Acute promyelocytic leukemia (APL) is associated with chromosomal translocations always involving the RARα gene, which variably fuses to one of several distinct loci, including PML or PLZF (X genes) in t(15;17) or t(11;17), respectively. APL in patients harboring t(15;17) responds well to retinoic acid (RA) treatment and chemotherapy, whereas t(11;17) APL responds poorly to both treatments, thus defining a distinct syndrome. Here, we show that RA, As2O3, and RA + As2O3 prolonged survival in either leukemic PML-RARα transgenic mice or nude mice transplanted with PML-RARα leukemic cells. RA + As2O3 prolonged survival compared with treatment with either drug alone. In contrast, neither in PLZF-RARα transgenic mice nor in nude mice transplanted with PLZF-RARα cells did any of the three regimens induce complete disease remission. Unexpectedly, therapeutic doses of RA and RA + As2O3 can induce, both in vivo and in vitro, the degradation of either PML-RARα or PLZF-RARα proteins, suggesting that the maintenance of the leukemic phenotype depends on the continuous presence of the former, but not the latter. Our findings lead to three major conclusions with relevant therapeutic implications: (i) the X-RARα oncoprotein directly determines response to treatment and plays a distinct role in the maintenance of the malignant phenotype; (ii) As2O3 and/or As2O3 + RA combination may be beneficial for the treatment of t(15;17) APL but not for t(11;17) APL; and (iii) therapeutic strategies aimed solely at degrading the X-RARα oncoprotein may not be effective in t(11;17) APL.
Abstract:
The idiotype of the Ig expressed by a B-cell malignancy (Id) can serve as a unique tumor-specific antigen and as a model for cancer vaccine development. In murine models of Id vaccination, formulation of syngeneic Id with carrier proteins or adjuvants induces an anti-idiotypic antibody response. However, inducing a potent cell-mediated response to this weak antigen instead would be highly desirable. In the 38C13 lymphoma model, we observed that low doses of free granulocyte/macrophage colony-stimulating factor (GM-CSF) (10,000 units i.p. or locally s.c., daily for 4 days) significantly enhanced protective antitumor immunity induced by s.c. Id-keyhole limpet hemocyanin (KLH) immunization. This effect was critically dependent upon effector CD4+ and CD8+ T cells and was not associated with any increased anti-idiotypic antibody production. Lymphocytes from spleens and draining lymph nodes of mice primed with Id-KLH plus GM-CSF, but not with Id-KLH alone, demonstrated significant proliferation to Id in vitro without any biased production of interferon gamma or interleukin 4 protein or mRNA. As a further demonstration of potency, 50% of mice immunized with Id-KLH plus GM-CSF on the same day as challenge with a large s.c. tumor inoculum remained tumor-free at day 80, compared with 17% for Id-KLH alone, when immunization was combined with cyclophosphamide. Taken together, these results demonstrate that GM-CSF can significantly enhance the immunogenicity of a defined self-antigen and that this effect is mediated exclusively by activating the T-cell arm of the immune response.
Abstract:
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
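For reference, a commonly quoted scaling for the nucleation (coherent slip patch) size discussed above; it is not stated in the abstract, and prefactors of order one, which depend on the exact friction law, are omitted:

```latex
% Hedged scaling only: shear modulus \mu, characteristic slip-weakening
% distance L, effective normal stress \sigma, rate-state parameters a, b.
h^{*} \;\sim\; \frac{\mu \, L}{(b - a)\,\sigma}
```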
Abstract:
In recent years VAR models have become the main econometric tool for testing whether a relationship between variables may exist and for evaluating the effects of economic policies. This thesis studies three different identification approaches starting from reduced-form VAR models (including the sampling period, the set of endogenous variables, and the deterministic terms). In the VAR case we use the Granger causality test to assess the ability of one variable to predict another; in the case of cointegration we use VECM models to jointly estimate the long-run and short-run coefficients; and in the case of small data sets and overfitting problems we use Bayesian VAR models with impulse response functions and variance decompositions to analyse the effect of shocks on macroeconomic variables. To this end, the empirical studies are carried out using specific time series and formulating different hypotheses. Three VAR models are used: first, to study monetary policy decisions and to discriminate among the various post-Keynesian theories of monetary policy, in particular the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015) and the nominal GDP rule in the Euro Area (paper 1); second, to extend the evidence on the endogenous money hypothesis by evaluating the effects of bank securitisation on the monetary policy transmission mechanism in the United States (paper 2); third, to evaluate the effects of ageing on health expenditure in Italy in terms of economic policy implications (paper 3). The thesis is introduced by Chapter 1, which outlines the context, motivation and purpose of this research, while the structure and summary, as well as the main results, are described in the remaining chapters. Chapter 2 examines, using a first-difference VAR model with quarterly Euro Area data, whether monetary policy decisions can be interpreted in terms of a "monetary policy rule", with specific reference to the so-called "nominal GDP targeting rule" (McCallum 1988; Hall and Mankiw 1994; Woodford 2012). The results show a causal relationship running from the gap between the growth rates of nominal GDP and target GDP to changes in the three-month market interest rate. The same analysis does not seem to confirm the existence of a significant reverse causal relationship from changes in the market interest rate to the gap between the growth rates of nominal GDP and target GDP. Similar results were obtained by replacing the market interest rate with the ECB refinancing rate. This confirmation of only one of the two directions of causality does not support an interpretation of monetary policy based on the nominal GDP targeting rule and raises more general doubts about the applicability of the Taylor rule and all conventional monetary policy rules to the case in question. The results appear instead to be more in line with other possible approaches, such as those based on some post-Keynesian and Marxist analyses of monetary theory and, more specifically, the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015).
These lines of research challenge the simplistic view that the scope of monetary policy is the stabilisation of inflation, real GDP or nominal income around a "natural" equilibrium level. Rather, they suggest that central banks actually pursue a more complex goal, namely the regulation of the financial system, with particular reference to the relationships between creditors and debtors and the relative solvency of economic units. Chapter 3 analyses the supply of loans by considering the endogeneity of money arising from banks' securitisation activity over the period 1999-2012. Although much of the literature investigates the endogeneity of the money supply, this approach has rarely been adopted to investigate the endogeneity of money in the short and long run with a study of the United States during its two main crises: the bursting of the dot-com bubble (1998-1999) and the sub-prime mortgage crisis (2008-2009). In particular, we consider the effects of financial innovation on the lending channel using the loan series adjusted for securitisation, in order to verify whether the US banking system is encouraged to seek cheaper sources of funding, such as securitisation, under restrictive monetary policy (Altunbas et al., 2009). The analysis is based on the monetary aggregates M1 and M2. Using VECM models, we examine a long-run relationship between the variables in levels and evaluate the effects of the money supply by analysing how much monetary policy affects short-run deviations from the long-run relationship. The results show that securitisation affects the impact of loans on M1 and M2. This implies that the money supply is endogenous, confirming the structuralist approach and showing that economic agents are motivated to increase securitisation as a pre-emptive hedge against monetary policy shocks. Chapter 4 investigates the relationship between per capita health expenditure, per capita GDP, the old-age dependency index and life expectancy in Italy over the period 1990-2013, using Bayesian VAR models and annual data from the OECD and Eurostat databases. The impulse response functions and the variance decomposition show a positive relationship: from per capita GDP to per capita health expenditure, from life expectancy to health expenditure, and from the ageing index to per capita health expenditure. The impact of ageing on health expenditure is more significant than that of the other variables. Overall, our results suggest that disabilities closely linked to ageing may be the main driver of health expenditure in the short-to-medium run. Good healthcare management helps to improve patient well-being without increasing total health expenditure. However, policies that improve the health status of older people may be needed to lower the per capita demand for health and social services.
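A minimal sketch of the kind of reduced-form VAR and Granger causality analysis described above, using statsmodels on synthetic data. The variable names `gdp_gap` and `rate` are illustrative placeholders for the nominal-GDP growth gap and the market interest rate, not the thesis' Euro Area series:

```python
# Minimal sketch (synthetic data, not the thesis' data): fit a reduced-form VAR,
# test Granger causality in both directions, and compute impulse responses.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(42)
n = 120                                          # quarterly observations
gdp_gap = np.zeros(n)
rate = np.zeros(n)
for t in range(1, n):
    gdp_gap[t] = 0.6 * gdp_gap[t - 1] + rng.normal(scale=0.5)
    # rate responds to the lagged gap, so causality should run gap -> rate
    rate[t] = 0.7 * rate[t - 1] + 0.3 * gdp_gap[t - 1] + rng.normal(scale=0.2)

data = pd.DataFrame({"gdp_gap": gdp_gap, "rate": rate})
results = VAR(data).fit(maxlags=4, ic="aic")     # lag order chosen by AIC

# Does gdp_gap Granger-cause rate, and vice versa?
print(results.test_causality("rate", ["gdp_gap"], kind="f").summary())
print(results.test_causality("gdp_gap", ["rate"], kind="f").summary())

# Impulse responses over 8 quarters (on a plain VAR here, for illustration;
# the thesis uses Bayesian VARs for the health-expenditure analysis).
irf = results.irf(8)
```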
Abstract:
The Atlantic thermohaline circulation (THC) is an important part of the earth's climate system. Previous research has shown large uncertainties in simulating future changes in this critical system. The simulated THC response to idealized freshwater perturbations and the associated climate changes have been intercompared as an activity of the World Climate Research Program (WCRP) Coupled Model Intercomparison Project/Paleo-Modeling Intercomparison Project (CMIP/PMIP) committees. This intercomparison among models ranging from earth system models of intermediate complexity (EMICs) to fully coupled atmosphere-ocean general circulation models (AOGCMs) seeks to document and improve understanding of the causes of the wide variations in the modeled THC response. The robustness of particular simulation features has been evaluated across the model results. In response to 0.1-Sv (1 Sv = 10^6 m^3 s^-1) freshwater input in the northern North Atlantic, the multimodel ensemble mean THC weakens by 30% after 100 yr. All models simulate some weakening of the THC, but no model simulates a complete shutdown of the THC. The multimodel ensemble indicates that the surface air temperature could present a complex anomaly pattern with cooling south of Greenland and warming over the Barents and Nordic Seas. The Atlantic ITCZ tends to shift southward. In response to 1.0-Sv freshwater input, the THC switches off rapidly in all model simulations. A large cooling occurs over the North Atlantic. The annual mean Atlantic ITCZ moves into the Southern Hemisphere. Models disagree in terms of the reversibility of the THC after its shutdown. In general, the EMICs and AOGCMs obtain similar THC responses and climate changes, with more pronounced and sharper patterns in the AOGCMs.
Abstract:
As part of the Coupled Model Intercomparison Project, integrations with a common design have been undertaken with eleven different climate models to compare the response of the Atlantic thermohaline circulation (THC) to time-dependent climate change caused by increasing atmospheric CO2 concentration. Over 140 years, during which the CO2 concentration quadruples, the circulation strength declines gradually in all models, by between 10 and 50%. No model shows a rapid or complete collapse, despite the fairly rapid increase and high final concentration of CO2. The models having the strongest overturning in the control climate tend to show the largest THC reductions. In all models, the THC weakening is caused more by changes in surface heat flux than by changes in surface water flux. No model shows a cooling anywhere, because the greenhouse warming is dominant.
Abstract:
Using an international, multi-model suite of historical forecasts from the World Climate Research Programme (WCRP) Climate-system Historical Forecast Project (CHFP), we compare the seasonal prediction skill in boreal wintertime between models that resolve the stratosphere and its dynamics ('high-top') and models that do not ('low-top'). We evaluate hindcasts that are initialized in November, and examine the model biases in the stratosphere and how they relate to boreal wintertime (December-March) seasonal forecast skill. We are unable to detect more skill in the high-top ensemble-mean than the low-top ensemble-mean in forecasting the wintertime North Atlantic Oscillation, but model performance varies widely. Increasing the ensemble size clearly increases the skill for a given model. We then examine two major processes involving stratosphere-troposphere interactions (the El Niño/Southern Oscillation (ENSO) and the Quasi-Biennial Oscillation (QBO)) and how they relate to predictive skill on intraseasonal to seasonal time-scales, particularly over the North Atlantic and Eurasia regions. High-top models tend to have a more realistic stratospheric response to El Niño and the QBO compared to low-top models. Enhanced conditional wintertime skill over high latitudes and the North Atlantic region during winters with El Niño conditions suggests a possible role for a stratospheric pathway.
Abstract:
This study analyzes the degree of competition through individual actions and reactions. Empirical support for this analysis has derived mainly from structural econometric models describing the nature of competition. This analysis extends the existing literature by empirically considering a direct measurement of competition through the analysis of competitive actions and responses, and by describing how firms compete within and between strategic groups. We estimate firms' conduct in the Spanish deposits market with 146 firms and 18,888 observations. This is an especially compelling context for the banking industry, in which a deregulation process gave rise to the adoption of aggressive strategies seeking to increase the market shares of deposit accounts, thus producing a turbulent situation of increasing rivalry. Our results offer a deeper understanding of firms' competitive behavior, since we identify different patterns of actions and reactions depending on the strategic group the firm belongs to.