995 results for BOX MODELS
Abstract:
There is an intimate interconnectivity between the policy guidelines defining reform and the delineation of which research methods are subsequently applied to determine reform success. Research is guided as much by the metaphors describing it as by the ensuing empirical definition of actions or results obtained from it. In a call for different reform policy metaphors, Lumby and English (2010) note, “The primary responsibility for the parlous state of education... lies with the policy makers that have racked our schools with reductive and dehumanizing processes, following the metaphors of market efficiency, and leadership models based on accounting and the characteristics of machine bureaucracy” (p. 127).
Abstract:
Master's in Oceanography
Abstract:
This paper describes a study of the theoretical and experimental behaviour of box columns of varying b/t ratios under axial compression, torsion, and their combinations. Details of the testing rigs and testing methods are presented, together with the results obtained, such as load-deflection curves and interaction diagrams, and experimental observations on the behaviour of the box models and the types of local plastic mechanism associated with each type of loading. A simplified rigid-plastic analysis, based on the observed plastic mechanisms, is carried out to study the collapse behaviour of box columns under these loadings, and its results are compared with those of the experiments.
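For orientation, a first-cut rigid-plastic interaction for a thin-walled box section under combined axial force N and torque T can be sketched as below. This is the textbook von Mises form for a uniformly stressed thin-walled closed section, offered only as an assumption-laden illustration; the paper's actual analysis is based on the observed local plastic mechanisms rather than this uniform-stress idealisation.

```latex
% Illustrative rigid-plastic interaction (assumed textbook form, not the paper's):
% N_p = squash load, T_p = fully plastic torque of the thin-walled box
% (A = cross-sectional area, A_m = enclosed area, t = wall thickness).
\[
\left(\frac{N}{N_p}\right)^{2} + \left(\frac{T}{T_p}\right)^{2} = 1,
\qquad
N_p = \sigma_y A,
\qquad
T_p = \frac{2 A_m t\,\sigma_y}{\sqrt{3}} .
\]
```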
Abstract:
Due to the complexity and inherent instability of polymer extrusion, there is a need for process models that can be run on-line to optimise settings and control disturbances. First-principles models demand computationally intensive solution, while ‘black box’ models lack generalisation ability and physical process insight. This work examines a novel ‘grey box’ modelling technique which incorporates both prior physical knowledge and empirical data in generating intuitive models of the process. The models can be related to the underlying physical mechanisms in the extruder and have been shown to capture unpredictable effects of the operating conditions on process instability. Furthermore, the model parameters can be related to material properties available from laboratory analysis and, as such, lend themselves to re-tuning for different materials without extensive remodelling work.
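To make the grey-box idea concrete, here is a minimal sketch under stated assumptions: a physically motivated power-law core whose parameters retain meaning, plus an empirical linear correction fitted to data. The variable names, model form, and synthetic data are illustrative assumptions, not the extrusion model used in this work.

```python
# Minimal grey-box sketch (illustrative only): a physical power-law term
# plus an empirical linear correction, jointly fitted to measurements.
import numpy as np
from scipy.optimize import curve_fit

def grey_box(X, k, n, a, b):
    """Physical core k * N**n plus empirical correction a*T + b."""
    N, T = X  # screw speed and barrel temperature -- assumed inputs
    return k * N**n + a * T + b

# Synthetic data standing in for process measurements
rng = np.random.default_rng(0)
N = rng.uniform(20, 100, 200)            # screw speed
T = rng.uniform(180, 220, 200)           # temperature
P = 0.8 * N**0.6 - 0.05 * T + 12 + rng.normal(0, 0.2, 200)

params, _ = curve_fit(grey_box, (N, T), P, p0=[1.0, 0.5, 0.0, 0.0])
print(params)  # k, n keep physical meaning; a, b absorb empirical effects
```

The design point is that k and n can be re-tuned from laboratory material data, while a and b soak up effects the physics does not capture.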
Abstract:
The majority of reported learning methods for Takagi-Sugeno-Kang fuzzy neural models to date focus mainly on improving their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match the local behaviour of the system well when all the rules are aggregated to produce the overall system output; this is one of the characteristics distinguishing such models from black-box models such as neural networks. Therefore, how to find a desirable set of fuzzy partitions and, hence, identify the corresponding consequent models which can be directly explained in terms of system behaviour is a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both the nonlinear parameters in the rule premises and the linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, where the two sets of parameters are optimized separately, the consequent parameters are transformed into a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient-descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues concerning the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed algorithm, and the results are compared with those of some well-known methods.
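A minimal numpy sketch of the central idea, under stated assumptions (Gaussian memberships, a toy dataset, invented names): for fixed premise parameters, the linear consequents are the least-squares solution, i.e. a dependent set Θ(premises), so a single optimization loop over the premise parameters can use a Jacobian that accounts for the induced change in the consequents. The sketch shows only the dependent-consequent construction, not the full integrated Levenberg-Marquardt iteration of the paper.

```python
import numpy as np

def firing_strengths(X, centers, widths):
    # Gaussian memberships; one rule per (center, width) row
    d = (X[:, None, :] - centers[None]) / widths[None]
    w = np.exp(-0.5 * (d ** 2).sum(-1))           # (N, R)
    return w / w.sum(axis=1, keepdims=True)       # normalised firing strengths

def consequents_from_premises(X, y, W):
    # For fixed premises, the linear consequents are the least-squares
    # solution -- a *dependent* set Theta(premises), as described above.
    N = X.shape[0]
    Xe = np.hstack([X, np.ones((N, 1))])          # affine consequent inputs
    Phi = (W[:, :, None] * Xe[:, None, :]).reshape(N, -1)
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta, Phi

# Toy data and three rules placed along the diagonal of the input space
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
centers = np.array([[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]])
W = firing_strengths(X, centers, widths=np.ones((3, 2)))
theta, Phi = consequents_from_premises(X, y, W)
print("fit residual:", np.linalg.norm(Phi @ theta - y))
```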
Abstract:
Common approaches to IP-traffic modelling have featured the use of stochastic models based on the Markov property, which can be classified into black-box and white-box models according to the approach used for modelling traffic. White-box models are simple to understand and transparent, and a physical meaning can be attributed to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple classic continuous-time Markov models based on a white-box approach to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts. The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving up to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of its most significant contributions, the thesis establishes the significance of second-order density statistics, revealing that, in contrast to first-order density statistics, they carry much more unique information on traffic sources and behaviour. The thesis then exploits Gaussian Markov models to model these unique features and finally shows how simple classic Markov models, coupled with second-order density statistics, provide an excellent tool for capturing maximum traffic detail, which in itself is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multilingual database of over 100 hours' worth of VoIP call recordings. The impact of the speaker's language, prosodic structure and speech rate on the statistics of the ON-OFF periods is analysed and the relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is contributed as an ideal candidate for modelling VoIP traffic, and the results of this model are compared with those of previously published work.
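As a hedged sketch of the kind of source described in the final sentence, the snippet below alternates ON (talk-spurt) and OFF (silence) periods drawn from log-normal distributions. The parameter values are placeholders chosen for illustration, not the fitted values from the thesis measurements.

```python
# Minimal ON-OFF VoIP source sketch with log-normal period lengths.
import numpy as np

rng = np.random.default_rng(42)

def on_off_trace(duration_s, on_mu, on_sigma, off_mu, off_sigma):
    """Alternate ON/OFF periods drawn from log-normal distributions."""
    t, periods = 0.0, []
    state = True  # start in ON (talk-spurt)
    while t < duration_s:
        mu, sigma = (on_mu, on_sigma) if state else (off_mu, off_sigma)
        length = rng.lognormal(mean=mu, sigma=sigma)
        periods.append((state, t, min(length, duration_s - t)))
        t += length
        state = not state
    return periods

trace = on_off_trace(60.0, on_mu=0.0, on_sigma=0.6, off_mu=-0.5, off_sigma=0.8)
on_fraction = sum(p[2] for p in trace if p[0]) / 60.0
print(f"ON fraction: {on_fraction:.2f}")
```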
Abstract:
The Climate Change Adaptation for Natural Resource Management (NRM) in East Coast Australia Project aims to foster and support an effective “community of practice” for climate change adaptation within the East Coast Cluster NRM regions, increasing the capacity for adaptation to climate change through enhancements in knowledge and skills and through the establishment of long‐term collaborations. It is being delivered by six consortium research partners:

* The University of Queensland (project lead)
* Griffith University
* University of the Sunshine Coast
* CSIRO
* New South Wales Office of Environment and Heritage
* Queensland Department of Science, IT, Innovation and the Arts (Queensland Herbarium)

The project relates to the East Coast Cluster, comprising the six coastal NRM regions and regional bodies between Rockhampton and Sydney:

* Fitzroy Basin Association (FBA)
* Burnett‐Mary Regional Group (BMRG)
* SEQ Catchments (SEQC)
* Northern Rivers Catchment Management Authority (CMA) (NRCMA)
* Hunter‐Central Rivers CMA (HCRCMA)
* Hawkesbury Nepean CMA (HNCMA)

The aims of this report are to summarise the needs of the regional bodies in relation to NRM planning for climate change adaptation, and to provide a basis for developing the detailed work plan for the research consortium. Two primary methods were used to identify the needs of the regional bodies: (1) document analysis of the existing NRM/Catchment Action Plans (CAPs) and of the regional bodies' applications for funding under Stream 1 of the Regional NRM Planning for Climate Change Fund; and (2) a needs analysis workshop, held in May 2013, involving representatives from the research consortium partners and the regional bodies.

The East Coast Cluster includes five of the ten largest significant urban areas in Australia, world-heritage-listed natural environments, significant agriculture, mining and extensive grazing. The three NSW CMAs have recently completed strategic-level CAPs, with implementation plans to be finalised in 2014/2015. SEQC and FBA are beginning reviews of their existing NRM Plans, to be completed in 2014 and 2015 respectively, while BMRG is aiming to produce an NRM and Climate Variability Action Strategy. The regional bodies will receive funding from the Australian Government through the Regional NRM Planning for Climate Change Fund (NRM Fund) to improve regional planning for climate change and help guide the location of carbon and biodiversity activities, including wildlife corridors. The bulk of the funding will be available for activities in 2013/2014, with smaller amounts available in subsequent years. Most regional bodies aim to have a large proportion of the planning work complete by the end of 2014. In addition, the NSW CMAs are undergoing major structural change and will be incorporated into semi‐autonomous statutory Local Land Services bodies from 2014; boundaries will align with local government boundaries and there will be significant change in staff and structures.

The regional bodies in the cluster have varying degrees of climate knowledge. All plans recognise climate change as a key driver of change, but there are few specific actions or targets addressing climate change. Regional bodies also have varying capacity to analyse large volumes of spatial or modelling data. Due to the complex nature of natural resource management, all regional bodies work with key stakeholders (e.g. local government, industry groups, and community groups) to deliver NRM outcomes.
Regional bodies therefore require project outputs that can be used directly in stakeholder engagement activities, and are likely to require some form of capacity building associated with each of the outputs to maximise uptake. Among the immediate needs of the regional bodies are a summary of the information or tools that can be used immediately, and a summary of the key outputs and milestone dates for the project, to facilitate alignment of planning activities with research outputs. A project framework is useful to show the linkages between research elements and the relevance of the research to the adaptive management cycle for NRM planning in which the regional bodies are engaged. A draft framework is proposed to stimulate and promote discussion on research elements and linkages; this will be refined during and following the development of the detailed project work plan. The regional bodies strongly emphasised the need to incorporate a shift to a systems-based resilience approach to NRM planning, and that approach is included in the framework.

The regional bodies identified that information on climate projections would be most useful at regional and subregional scale, to feed into scenario planning and impact analysis. Outputs should be ‘engagement ready’, and there is a need for capacity building to enable regional bodies to understand and use the projections in stakeholder engagement. There was interest in understanding the impacts of climate change projections on ecosystems (e.g. ecosystem shift), and the consequent impacts on the production of ecosystem services. It was emphasised that any modelling should be able to be used by the regional bodies with their stakeholders to allow for community input (i.e. no black box models). The online regrowth benefits tool was of great interest to the regional bodies, as spatial mapping of carbon farming opportunities would be relevant to their funding requirements. The NSW CMAs identified an interest in development of the tool for NSW vegetation types.

Needs relating to socio‐economic information included understanding the socio‐economic determinants of carbon farming uptake and managing community expectations. A need was also identified to understand the vulnerability of industry groups, as well as of the community, to climate change impacts, and in particular to understand how changes in the flow of ecosystem services would interact with the vulnerability of these groups to affect the linked ecological-socio-economic system. Responses to disasters (particularly flooding and storm surge) and recovery responses were also identified as being of interest. An ecosystem services framework was highlighted as a useful approach to synthesising biophysical and socio-economic information in the context of a systems-based, resilience approach to NRM planning. A need was identified to develop processes to move towards such an approach to NRM planning from the current asset-management approach. Examples of best practice in incorporating climate science into planning, in using scenarios for stakeholder engagement in planning, and in processes for institutionalising learning were also identified as cross‐cutting needs. The over‐arching theme identified was the need for capacity building for the NRM bodies to make the best use of the information available at any point in time.
To this end a planners working group has been established to support the building of a network of informed and articulate NRM agents with knowledge of current climate science and capacity to use current tools to engage stakeholders in NRM planning for climate change adaptation. The planners working group would form the core group of the community of practice, with the broader group of stakeholders participating when activities aligned with their interests. In this way, it is anticipated that the Project will contribute to building capacity within the wider community to effectively plan for climate change adaptation.
Abstract:
This work is based on field data from the Benguela Basin, on the southern coast of Angola. In this area an Albian-age carbonate platform crops out, analogous to the oil-producing formations of the same age in the Campos Basin. Two main units make up the Albian section in the area: a lower unit characterised by shallow-water carbonates with intercalations of conglomerates and sandstones at the base, totalling 300 m in thickness, and an upper unit formed by deeper-water carbonate sediments, 100 m thick. This section occurs in two structural domains. The northern domain is characterised by a platform about 5 km wide and 35 km long in the NE direction, lying close to the basement. In the southern domain, the platform occurs as fragments less than 10 km long and up to 5 km wide, some of them detached from the basement. Given that evaporites underlie this platform, this work was carried out with the aim of evaluating the influence of salt tectonics on the structuring of the Albian rocks. The use of public-domain satellite images allowed the extrapolation of point data and the construction of topographic profiles from which the geological cross-sections were assembled. The field data indicate that the structure in the northern part, where there is less salt, is influenced by the local basement structures. In the southern part, where the salt is thicker, the deformation is decoupled from the basement structures. One geological cross-section was made in each domain, and the restoration of these two sections showed that the Albian unit in the southern part underwent a stretch of 1.25 (to 125% of its original length), while the northern one stretched by 1.05 (105%). This difference was caused by a greater original salt thickness in the south, which allowed the Albian cover to extend more as a consequence of downdip salt flow in response to the tilting of the basin towards the offshore. The intensity of the deformation in the southern part was such that the faulted blocks lost contact with one another, forming rafts. This effect was reproduced in physical models, indicating that variation in the original salt thickness has a strong influence on the structuring of the overlying rocks. The variations in salt thickness are controlled by basement transfer zones, and these same structures control the boundaries of the blocks that underwent differential uplift in the Cenozoic.
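For clarity on the quoted numbers, section restoration reports extension as a stretch factor; the standard definition is shown below as a worked illustration (not specific to this thesis).

```latex
% Stretch factor beta and extension e from section restoration:
\[
\beta = \frac{L_{\text{final}}}{L_{\text{initial}}}, \qquad e = \beta - 1 ,
\]
% so beta = 1.25 (a final length 125% of the original) corresponds to
% e = 25% extension, and beta = 1.05 to e = 5%.
```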
Abstract:
This thesis focuses on the analysis of the main types of optical amplification and some of their applications in optical communication systems. For each of the technologies addressed, the state of the art was established and the related opportunities for scientific development were identified. The amplifiers given attention in this document were erbium-doped fibre amplifiers (EDFA), semiconductor optical amplifiers (SOA) and Raman amplifiers (RA). The work began with the study and analysis of EDFAs. Given the scientific and economic interest these amplifiers have attracted, only a few research niches remain open. Within these, we focused on the analysis of different doped-fibre profiles in order to optimise the performance of those fibres as amplification systems. Building on the previous phase as a modelling basis for fibre- and dopant-based amplification systems, the work evolved to erbium-doped waveguide amplifiers (EDWA). This type of amplifier attempts to reduce the physical volume of these devices while retaining their main characteristics. To compare the performance of this type of amplifier with fibre amplifiers, black-box models (BBM) were developed and their parameters tuned so as to obtain good models for later use of such amplifiers in more complex simulation setups. After the processes in doped amplifiers had been modelled and understood, and in order to acquire a comparative overview, it was imperative to study parametric Raman amplification processes. This type of amplification, being inherent to the fibre, occurs in all propagation bands and is quite flexible. These amplifiers were first modelled, and some of their applications in passive access networks were studied. In particular, a series of requirements, such as the range of wavelengths over which amplification is available and the high insertion losses, led us to investigate an amplification process that could meet them, especially in the search for higher amplification capabilities (namely long reaches, above 100 km, and high split ratios, 1:512). Another process investigated was the possibility of making the gain wavelength parameters flexible without having to change the pump characteristics and, if possible, keeping all referencing at the transmitter. This process was based on the well-studied gain-clamping technique, but with some important modifications, namely in the scheme (reflection at only one end) and in the modelling of the process. The resulting process was innovative in its use of Rayleigh and Raman scattering and of a reflector at only one end to obtain lasing. This process was modelled through the propagation equations and optimised, and was demonstrated experimentally and validated for different types of fibre. Along this line, and given the versatility of the model developed, a more advanced application of this type of amplifier was presented. Making use of its ultra-fast response, a 2R regenerator was proposed and analysed, and its range of application was assessed by simulation with a view to systemic application. The final part of this work focused on semiconductor optical amplifiers (SOA).
For this type of amplifier, the effort was directed more at applications than at modelling. The main applications for these amplifiers were based on optical gain clamping, aiming at the combination of the logic functions essential to the design of an optical latch based on discrete components. Thus, based on a gain chip, a logic NOT gate was obtained, characterised and demonstrated experimentally. This gate was then introduced into a latching scheme so as to produce an all-optical bistable, which was also demonstrated and characterised. The work ends with a general conclusion covering the amplification subsystems and their applications.
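For reference, the propagation-equation modelling mentioned for the Raman amplifier typically starts from the standard coupled pump-signal power equations shown below. This is the textbook form, written for a pump that may co- or counter-propagate; the thesis's full model additionally includes Rayleigh scattering and the single-ended reflector boundary condition.

```latex
% Standard coupled power equations for Raman amplification
% (g_R: Raman gain efficiency; alpha_s, alpha_p: fibre losses;
%  lambda_s/lambda_p accounts for the pump-to-signal photon energy ratio):
\[
\frac{dP_s}{dz} = g_R P_p P_s - \alpha_s P_s ,
\qquad
\pm\frac{dP_p}{dz} = -\frac{\lambda_s}{\lambda_p}\, g_R P_p P_s - \alpha_p P_p .
\]
```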
Abstract:
Signal integrity in high-speed interconnected digital systems, when assessed through the simulation of physical (transistor-level) models, is computationally costly (e.g., in CPU run time and memory storage) and requires the disclosure of physical details of the device's internal structure. This scenario increases the interest in the alternative of behavioural modelling, which describes the operating characteristics of the device from the observation of the electrical input/output (I/O) signals. The I/O interfaces of memory chips, which contribute most to the computational load, perform complex functions and therefore include a large number of pins. In particular, output buffers inevitably distort the signals owing to their dynamics and nonlinearity; they are therefore the critical point in integrated circuits (ICs) for guaranteeing reliable transmission in high-speed digital communications. In this doctoral work, previously neglected nonlinear dynamic effects of the output buffer are studied and modelled efficiently in order to reduce the complexity of parametric black-box modelling, thereby improving the standard IBIS model. This is achieved by following a semi-physical approach that combines the formulation characteristics of black-box models, the analysis of the electrical signals observed at the I/O, and properties of the physical structure of the buffer under practical operating conditions. This approach leads to a physically inspired behavioural model construction process that overcomes the problems of previous approaches, optimising the resources used in the different stages of model generation (i.e., characterisation, formulation, extraction and implementation) to simulate the nonlinear dynamic behaviour of the buffer. Consequently, the most significant contribution of this thesis is the development of a new two-port analogue behavioural model suitable for overclocking simulation, which is of particular interest for the most recent uses of high-data-rate memory I/O interfaces. The effectiveness and accuracy of the behavioural models developed and implemented are qualitatively and quantitatively evaluated by comparing the numerical results of the extraction of their functions and of transient simulation with the corresponding state-of-the-art reference model, IBIS.
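For context, a common parametric black-box formulation for output buffers in the behavioural-macromodelling literature writes the port current as two nonlinear submodels blended by switching weights. It is shown here as a representative form only, not necessarily the exact model developed in this thesis.

```latex
% Representative two-piece behavioural buffer model: i_H, i_L are submodels
% identified with the buffer driving high and low, and w_H, w_L are switching
% weights that capture the state transitions:
\[
i_{\mathrm{out}}(t) = w_H(t)\, i_H\!\big(v(t)\big) + w_L(t)\, i_L\!\big(v(t)\big) .
\]
```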
Abstract:
Thermally activated building systems are building components that, as part of the surfaces enclosing a room, can be supplied with a heating or cooling medium through an integrated pipe system and thus allow the room to be heated or cooled. Under this definition, the variety of constructions ranges from heated or chilled ceilings, through floor slabs with pipes embedded in their core, to underfloor heating systems. The extremely slow-reacting systems among them are deliberately employed to decouple energy supply and room energy demand in time, in the spirit of rational energy use, e.g., active cooling of the building component at night and passive room cooling via the cool component during the day. Building and plant concepts that include slow-reacting thermally activated building systems require, in a competent and responsible planning process, the use of modern building simulation tools in order to make well-founded statements about comfort and energy demand. Within these tools, the thermally activated building systems are represented by computational components that are based on mathematical-physical models and serve to solve the multidimensional transient heat conduction problem inherent in the building component. Until now, two fundamentally different approaches to this solution were available, both originating from physical modelling and imposing limits on the geometry that can be represented or on computation speed. The present work documents a new approach, referred to as experimental modelling. By way of system identification, the parameters of a compact black-box model can be determined from experimentally obtained data series; the model reproduces the input-output behaviour of the corresponding thermally activated building component, of arbitrary construction, with sufficient accuracy. The measurement data series can be generated by highly accurate calculations which, because of their level of detail, would be unsuitable for direct use in building simulation. The application of system identification to the two-dimensional heat conduction problem and the proof of its suitability are carried out on six very different constructions of thermally activated building systems, confirming very small temperature and energy balance errors. Comparisons between black-box models obtained via system identification and physical models for two floor constructions show that the former can also be used as a reference for accuracy assessments. The practicality of the new modelling approach is demonstrated in case studies involving full-year simulations of an exemplary office room under variations of component design and operation. For this purpose, the black-box model is integrated into the commercial building and plant simulation program CARNOT. The acceptable computation times for a single-zone building model, combined with the high accuracies, attest to the suitability of the new modelling approach.
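A minimal sketch of the system-identification step described above, under stated assumptions: a discrete-time ARX black-box model fitted by least squares to input/output series, with synthetic first-order data standing in for the high-accuracy reference calculations. Orders, names and data are illustrative, not the identification procedure actually used in the work.

```python
# Least-squares ARX identification sketch for input/output thermal data.
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Fit y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] by least squares."""
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        rows.append(np.concatenate([y[k-na:k][::-1], u[k-nb:k][::-1]]))
    Phi = np.asarray(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]

# Synthetic "measurement" series from a slow first-order thermal response
rng = np.random.default_rng(0)
u = rng.standard_normal(500)                 # e.g. supply-water temperature
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.95 * y[k-1] + 0.05 * u[k-1]     # slow, first-order dynamics
a, b = fit_arx(u, y)
print(a, b)   # expect a ~ [0.95, 0], b ~ [0.05, 0]
```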
Abstract:
In this paper the authors exploit two equivalent formulations of the average rate of material entropy production in the climate system to propose an approximate splitting between contributions due to vertical and eminently horizontal processes. This approach is based only on 2D radiative fields at the surface and at the top of atmosphere. Using 2D fields at the top of atmosphere alone, lower bounds to the rate of material entropy production and to the intensity of the Lorenz energy cycle are derived. By introducing a measure of the efficiency of the planetary system with respect to horizontal thermodynamic processes, it is possible to gain insight into a previous intuition on the possibility of defining a baroclinic heat engine extracting work from the meridional heat flux. The approximate formula of the material entropy production is verified and used for studying the global thermodynamic properties of climate models (CMs) included in the Program for Climate Model Diagnosis and Intercomparison (PCMDI)/phase 3 of the Coupled Model Intercomparison Project (CMIP3) dataset in preindustrial climate conditions. It is found that about 90% of the material entropy production is due to vertical processes such as convection, whereas the large-scale meridional heat transport contributes to only about 10% of the total. This suggests that the traditional two-box models used for providing a minimal representation of entropy production in planetary systems are not appropriate, whereas a basic—but conceptually correct—description can be framed in terms of a four-box model. The total material entropy production is typically 55 mW m−2 K−1, with discrepancies on the order of 5%, and CMs’ baroclinic efficiencies are clustered around 0.055. The lower bounds on the intensity of the Lorenz energy cycle featured by CMs are found to be around 1.0–1.5 W m−2, which implies that the derived inequality is rather stringent. When looking at the variability and covariability of the considered thermodynamic quantities, the agreement among CMs is worse, suggesting that the description of feedbacks is more uncertain. The contributions to material entropy production from vertical and horizontal processes are positively correlated, so that no compensation mechanism seems in place. Quite consistently among CMs, the variability of the efficiency of the system is a better proxy for variability of the entropy production due to horizontal processes than that of the large-scale heat flux. The possibility of providing constraints on the 3D dynamics of the fluid envelope based only on 2D observations of radiative fluxes seems promising for the observational study of planets and for testing numerical models.
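For orientation, the kind of relation underlying this analysis can be sketched as follows (textbook steady-state forms offered as an illustration, not the paper's derivation): the material entropy production balances the radiative entropy sink, and its horizontal part can be approximated in a two-box picture by the meridional heat transport Φ acting between a warm and a cold reservoir.

```latex
% Steady-state material entropy production and its two-box horizontal part
% (illustrative forms; \dot{q}_{rad} is the radiative heating rate and
%  \Phi the meridional heat transport):
\[
\dot{S}_{\mathrm{mat}} = -\int_V \frac{\dot{q}_{\mathrm{rad}}}{T}\, dV ,
\qquad
\dot{S}_{\mathrm{hor}} \approx \Phi \left( \frac{1}{T_{\mathrm{cold}}} - \frac{1}{T_{\mathrm{warm}}} \right).
\]
```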
Abstract:
The understanding of the statistical properties and of the dynamics of multistable systems is gaining more and more importance in a vast variety of scientific fields. This is especially relevant for the investigation of the tipping points of complex systems. Sometimes, in order to understand the time series of given observables exhibiting bimodal distributions, simple one-dimensional Langevin models are fitted to reproduce the observed statistical properties and used to investigate the projected dynamics of the observable. This is of great relevance for studying potential catastrophic changes in the properties of the underlying system or resonant behaviours like those related to stochastic resonance-like mechanisms. In this paper, we propose a framework for framing this kind of study, using simple box models of the oceanic circulation and choosing as observable the strength of the thermohaline circulation. We study the statistical properties of the transitions between the two modes of operation of the thermohaline circulation under symmetric boundary forcings and test their agreement with simplified one-dimensional phenomenological theories. We extend our analysis to include stochastic resonance-like amplification processes. We conclude that fitted one-dimensional Langevin models, when closely scrutinised, may turn out to be more ad hoc than they seem, lacking robustness and/or well-posedness. They should be treated with care, more as an empirical descriptive tool than as a methodology with predictive power.
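The generic form of the fitted one-dimensional Langevin models referred to above is shown below with a standard double-well drift potential as an illustration; this is the canonical example of such a model, not the specific fit examined in the paper.

```latex
% One-dimensional Langevin model with a double-well drift potential;
% its stationary density is bimodal, p_s(x) \propto e^{-2V(x)/\sigma^2}:
\[
\mathrm{d}x = -V'(x)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t ,
\qquad
V(x) = \frac{x^4}{4} - \frac{x^2}{2} .
\]
```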
Abstract:
During the last termination (from ~18 000 years ago to ~9000 years ago), the climate warmed significantly and the ice sheets melted. Simultaneously, atmospheric CO2 increased from ~190 ppm to ~260 ppm. Although this CO2 rise plays an important role in the deglacial warming, the reasons for its evolution are difficult to explain. Only box models have been used to run transient simulations of this carbon cycle transition, and only by forcing the model with data-constrained scenarios of the evolution of temperature, sea level, sea ice, NADW formation, Southern Ocean vertical mixing and the biological carbon pump. More complex models (including GCMs) have investigated some of these mechanisms, but they have only been used to explain LGM versus present-day steady-state climates. In this study we use a coupled climate-carbon model of intermediate complexity to explore the role of three oceanic processes in transient simulations: the sinking of brines, stratification-dependent diffusion and iron fertilization. Carbonate compensation is accounted for in these simulations. We show that neither iron fertilization nor the sinking of brines alone can account for the evolution of CO2, and that only the combination of the sinking of brines and interactive diffusion can simultaneously simulate the increase in deep Southern Ocean δ13C. The scenario that agrees best with the data takes all mechanisms into account and favours a rapid cessation of the sinking of brines around 18 000 years ago, when the Antarctic ice sheet extent was at its maximum. In this scenario, we hypothesise that sea ice formation then shifted to the open ocean, where the salty water is quickly mixed with fresher water; this prevents deep sinking of salty water, breaks down the deep stratification and releases carbon from the abyss. Based on this scenario, it is possible to simulate both the amplitude and the timing of the long-term CO2 increase during the last termination in agreement with ice core data. The atmospheric δ13C appears to be highly sensitive to changes in the terrestrial biosphere, underlining the need to better constrain the vegetation evolution during the termination.