Abstract:
A stand-alone power system is an autonomous system that supplies electricity to a user load without being connected to the electric grid. Such decentralized systems are frequently located in remote and inaccessible areas. They are essential for the roughly one third of the world's population living in developing or isolated regions without access to an electricity utility grid. Most of these people live in remote and rural areas with low population density, often lacking even basic infrastructure. Extending the utility grid to these locations is not cost-effective and is sometimes technically infeasible. The purpose of this thesis is the modelling and simulation of a stand-alone hybrid power system, referred to as the "hydrogen Photovoltaic-Fuel Cell (PVFC) hybrid system". It couples a photovoltaic (PV) generator, an alkaline water electrolyser, a gas storage tank, a proton exchange membrane fuel cell (PEMFC), and power conditioning units (PCU) to give different system topologies. The system is intended as an environmentally friendly solution, since it maximises the use of a renewable energy source. Electricity is produced by the PV generator to meet the requirements of the user load. Whenever there is enough solar radiation, the user load can be powered entirely by PV electricity. During periods of low solar radiation, auxiliary electricity is required. An alkaline high-pressure water electrolyser is powered by the excess energy from the PV generator to produce hydrogen and oxygen at pressures of up to 30 bar. The gases are stored without compression for short-term (hourly or daily) and long-term (seasonal) use. A proton exchange membrane (PEM) fuel cell is used to keep the system's reliability at the level of a conventional system while decreasing the environmental impact of the whole system.
The PEM fuel cell consumes the gases produced by the electrolyser to meet the user load demand when the PV generator's energy is insufficient, so it works as an auxiliary generator. Power conditioning units handle the conversion and dispatch of energy between the components of the system. No batteries are used in this system, since they are the weakest component of PV systems, owing to their need for sophisticated control and their short lifetime. The model library, ISET Alternative Power Library (ISET-APL), was designed by the Institute of Solar Energy supply Technology (ISET) and is used for the simulation of the hybrid system. The physical, analytical and/or empirical equations of each component are programmed and implemented separately in this library, in C++, for the simulation software Simplorer. The model parameters are derived from manufacturers' performance data sheets or from measurements reported in the literature. The major hydrogen PVFC hybrid system component models are identified and validated against measured component data, manufacturers' data sheets, or actual system operation. The overall system is then simulated at one-hour intervals over one year of operation, using solar radiation as the primary energy input and hydrogen as energy storage. Different topologies, such as DC- and AC-coupled systems, are compared from an energy point of view at two locations with different geographical latitudes: Kassel, Germany (Europe) and Cairo, Egypt (North Africa). The main conclusion of this work is that the simulation method could successfully be used to visualize and compare the overall performance of these topologies under different conditions.
The operational performance of the system depends not only on component efficiency but also on system design and consumption behaviour. The weakest point of this system is the low efficiency of the storage subsystem, made up of the electrolyser, the gas storage tank, and the fuel cell, which is around 25-34 % in Cairo and 29-37 % in Kassel. Research on this system should therefore concentrate on developing the subsystem components, especially the fuel cell.
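The hourly dispatch described above (PV surplus drives the electrolyser, PV deficit is covered by the fuel cell) can be sketched as follows. The function name and the efficiency values are illustrative assumptions, not the thesis model:

```python
# Hypothetical sketch of the hourly PVFC dispatch logic. All names and
# efficiency values are illustrative assumptions, not the thesis model.

ETA_ELECTROLYSER = 0.70   # assumed electrolyser efficiency
ETA_FUEL_CELL = 0.45      # assumed PEM fuel cell efficiency

def dispatch_hour(pv_power, load_power, h2_stored):
    """Return (h2_stored, unmet_load) after one hourly time step.

    Energy bookkeeping in kWh per 1-hour step; hydrogen is tracked as its
    equivalent chemical energy content.
    """
    if pv_power >= load_power:
        # Surplus PV energy is converted to hydrogen by the electrolyser.
        surplus = pv_power - load_power
        return h2_stored + surplus * ETA_ELECTROLYSER, 0.0
    # Deficit: the fuel cell consumes stored hydrogen to cover the load.
    deficit = load_power - pv_power
    h2_needed = deficit / ETA_FUEL_CELL
    h2_used = min(h2_needed, h2_stored)
    unmet = deficit - h2_used * ETA_FUEL_CELL
    return h2_stored - h2_used, unmet

# Round-trip efficiency of the storage path (electrolyser -> tank -> fuel cell)
round_trip = ETA_ELECTROLYSER * ETA_FUEL_CELL
```

With these assumed efficiencies the round-trip efficiency of the storage path is about 31.5 %, consistent with the 25-37 % range quoted above.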
Abstract:
Fundamental and application-oriented physics research in the field of nanoscale crystalline and amorphous solids is of great importance in many respects. Besides an understanding of the structure of matter and the interaction of objects on the scale of a few atoms, knowledge of the physical properties of nanostructured systems is of high interest. This research opens up the possibility of continuing the miniaturization begun with microelectronics and will, moreover, open new fields of application. Working out the physical fundamentals of the fabrication and structuring methods is indispensable here, since the dominant effects only appear, or become sufficiently pronounced, at structure sizes in the nanometre range. Semiconductor materials are of particular interest in this context. The resonator structures investigated in this work, based on the crystalline compound semiconductor material GaInAsP/InP, open up important fields of application in optical data transmission and optical sensing. The semiconductor material is grown by metal-organic vapour-phase epitaxy. The experimentally determined parameters allow conclusions about the quality of the materials, the quantum-mechanical operating principles, and the device characteristics, and lead to optimally adapted crystal structures. On the basis of these optimized materials, a tunable Fabry-Perot filter was fabricated, consisting of a combination of InP membranes and air gaps and actuated electromechanically. The GaInAsP serves here as a sacrificial layer only a few hundred nm thick, which is removed by a highly selective etch. The quality of the interfaces to the InP is decisive for the quality of the etched cavities and thus for the overall mechanical stability of the structure.
The filter described in this work has a centre wavelength around 1550 nm and exhibits a tuning range of 221 nm. This value was achieved through a consistent model of the acting strain components and optimized epitaxial control of the strain parameters. The realized filter device is promising for use in optical communication, in particular for WDM (wavelength division multiplexing) applications. As a further resonator structure, an asymmetrically coupled quantum well consisting of GaInAsP with varying material composition and strain was investigated as an optically active medium, in order to study its potential for broadband emission and to compare it with known models. An edge-emitting superluminescent light-emitting diode was chosen as the device design. The result is an emission curve 100 nm wide that is less dependent on the injection current than other known concepts. The quantum-mechanical operating principles, essentially the coupling of the two asymmetric potential wells and the associated coupling of the wave functions, are discussed qualitatively. Overall, the results confirm the suitability of GaInAsP for novel, highly demanding resonator structures, as well as the significance of the resonator concepts presented and investigated. Owing to their design and the thorough experimental investigations, the methods, materials and devices presented contribute both to the understanding of the underlying mechanical, optoelectronic and quantum-mechanical operating principles of the structures and to the realization of new optoelectronic devices.
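The tuning behaviour described above follows the ideal Fabry-Perot resonance condition, lambda_m = 2*n*L/m; a minimal sketch in which the gap values are illustrative, not the device's actual geometry:

```python
# Illustrative sketch (not from the thesis): resonance condition of an ideal
# Fabry-Perot cavity, used to show how electromechanically shrinking an air
# gap shifts the centre wavelength of such a filter. Gap values are invented.

def resonance_wavelength_nm(gap_nm, order, n=1.0):
    """Resonance wavelength of an ideal cavity of optical length n*gap."""
    return 2.0 * n * gap_nm / order

# An air cavity of 775 nm supports a first-order resonance at 1550 nm,
# the telecom centre wavelength quoted above.
center = resonance_wavelength_nm(775.0, order=1)
# Actuating the membranes to reduce the gap tunes the resonance to shorter
# wavelengths; a ~110 nm gap change shifts the line by ~220 nm.
tuned = resonance_wavelength_nm(775.0 - 110.5, order=1)
```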
Abstract:
This work presents modelling methods for the real-time-capable simulation of the most important pollutant components in the exhaust stream of internal combustion engines. A holistic development workflow is described, whose individual steps, from the design of experiments through the construction of a suitable model structure to model validation, are explained in detail. These methods are applied to reproduce the dynamic emission profiles of the relevant pollutants of the spark-ignition engine. Together with a full-engine simulation, the derived emission models serve to optimize operating strategies in hybrid vehicles. The first part of the work presents a systematic procedure for planning and building complex, dynamic, real-time-capable model structures. It begins with a physically motivated structuring that divides a process model into individual, manageable elements. Starting from the simplest possible nominal model core, these submodels are then extended step by step and ultimately allow a robust reproduction even of complex dynamic behaviour with sufficient accuracy. Since some submodels are realized as neural networks, a method for so-called discrete evident interpolation (DEI) was developed specifically for this purpose; applied during training, it can ensure plausible, i.e. evident, behaviour of experimental models with a minimal number of measurements. To calibrate the individual submodels, statistical experimental designs were created, generated both with classical DoE methods and with an iterative design of experiments (iDoE).
In the second part of the work, after the most important influencing parameters have been identified, the model structures for reproducing the dynamic emission profiles of selected exhaust components are presented: unburned hydrocarbons (HC), nitrogen monoxide (NO) and carbon monoxide (CO). The simulation models reproduce, in real time, the pollutant concentrations of a combustion engine during cold start and the subsequent warm-up phase. Beyond the obligatory reproduction of steady-state behaviour, the dynamic behaviour of the engine in transient operating phases is also represented with sufficient accuracy. A consistent application of the methodology presented in the first part of the work permits high simulation quality and robustness here as well, despite the large number of process variables. The pollutant-emission models, embedded in the dynamic overall model of a combustion engine, are used to derive an optimal operating strategy for a hybrid vehicle. Model-based methods are particularly well suited to such optimization tasks; in particular, the use of dynamic, cold-start-capable models and the associated realism allows a high quality of results to be achieved.
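The modelling idea, a simple nominal (static) model core extended step by step to capture dynamic behaviour, can be illustrated with a hypothetical sketch; the static map and time constant below are invented for illustration and are not the thesis models:

```python
# Hypothetical minimal example of the modelling approach described above: a
# static (nominal) emission map extended by first-order dynamics, so that
# transient operation is captured as well as steady state. The map and the
# time constant are invented for illustration only.

def static_emission_map(speed, load):
    """Nominal steady-state emission level (arbitrary units)."""
    return 0.01 * speed + 0.5 * load * load

def simulate_emissions(speeds, loads, tau=3.0, dt=1.0):
    """First-order lag of the static map output over a transient profile."""
    y = static_emission_map(speeds[0], loads[0])  # start in steady state
    out = []
    for s, l in zip(speeds, loads):
        target = static_emission_map(s, l)
        y += (dt / tau) * (target - y)  # explicit-Euler first-order dynamics
        out.append(y)
    return out
```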
Abstract:
The ongoing depletion of the coastal aquifer in the Gaza Strip due to groundwater overexploitation has led to seawater intrusion, which is becoming an increasingly serious problem as seawater invades further along many sections of the coastal shoreline. As a first step toward addressing the problem, an artificial neural network (ANN) model has been applied as a new approach and attractive tool to study and predict groundwater levels without physically based hydrologic parameters, to improve the understanding of the complex groundwater system, and to show the effects of hydrologic, meteorological and anthropogenic impacts on groundwater conditions. Predicting the future behaviour of seawater intrusion in the Gaza aquifer is thus of crucial importance for safeguarding the already scarce groundwater resources of the region. In this study, the coupled three-dimensional groundwater flow and density-dependent solute transport model SEAWAT, as implemented in Visual MODFLOW, is applied to the Gaza coastal aquifer system to simulate the location and dynamics of the saltwater-freshwater interface in the aquifer over the period 2000-2010. Very good agreement between simulated and observed TDS salinities is obtained, with correlation coefficients of 0.902 and 0.883 for the steady-state and transient calibrations, respectively. After successful calibration of the solute transport model, future management scenarios for the Gaza aquifer are simulated in order to get a more comprehensive view of the effects of the artificial recharge planned for the Gaza Strip, which is intended to forestall, or even remedy, the presently adverse aquifer conditions, namely low groundwater heads and high salinity, by the end of the target simulation period, the year 2040.
To that end, numerous management scenario schemes are examined to maintain the groundwater system and control the salinity distribution within the target period 2011-2040. The first, pessimistic scenario assumes that pumping from the aquifer continues to increase in the near future to meet rising water demand, and that the aquifer receives no recharge beyond natural precipitation. The second, optimistic scenario assumes that treated wastewater can be used as a source of additional artificial recharge to the aquifer, which in principle should not only increase the aquifer's sustainable yield but could, in the best case, even revert some of the adverse present-day conditions, i.e., seawater intrusion. This scenario is examined in three cases that differ in the locations and extent of the injection fields for the treated wastewater. The results of the first (do-nothing) scenario indicate ongoing negative impacts on the aquifer, such as a higher propensity for strong seawater intrusion. This scenario shows that, compared with the 2010 situation of the baseline model, by the end of the simulation period in 2040 the amount of seawater intruding into the coastal aquifer will have increased by about 35 %, and the salinity by 34 %. In contrast, all three cases of the second (artificial recharge) scenario group can partly revert the present seawater intrusion. From a water-budget point of view, compared with the first scenario, the water added by artificial recharge by 2040 reduces the amount of water entering the aquifer through seawater intrusion by 81, 77 and 72 % for the three recharge cases, respectively, while the salinity in the Gaza aquifer decreases by 15, 32 and 26 %, respectively.
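The calibration figures quoted above are Pearson correlation coefficients between simulated and observed TDS salinities; a generic sketch of that check (this is not the thesis tooling):

```python
# Generic sketch of the calibration check quoted above: Pearson correlation
# between simulated and observed salinity values at monitoring points.

import math

def pearson_r(sim, obs):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    cov = sum((s - ms) * (o - mo) for s, o in zip(sim, obs))
    vs = math.sqrt(sum((s - ms) ** 2 for s in sim))
    vo = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (vs * vo)
```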
Abstract:
We investigate the effect of the epitaxial structure and the acceptor doping profile on the efficiency droop in InGaN/GaN LEDs by physics-based simulation of experimental internal quantum efficiency (IQE) characteristics. The device geometry is an integral part of our simulation approach. We demonstrate that even for single-quantum-well LEDs the droop depends critically on the acceptor doping profile. The Auger recombination was found to increase more strongly than the third power of the carrier density and to dominate the droop in the roll-over zone of the IQE. The fitted Auger coefficients are in the range of values predicted by atomistic simulations.
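Droop analyses of this kind are commonly framed with the ABC recombination model, IQE(n) = B*n^2 / (A*n + B*n^2 + C*n^3), in which the Auger term C*n^3 dominates at high carrier density and causes the efficiency roll-over. A minimal sketch with illustrative, not fitted, coefficients:

```python
# ABC recombination model sketch. The coefficients are illustrative order-of-
# magnitude values (cgs-like units), not the fitted values from this study.

def iqe(n, A=1e7, B=2e-11, C=1e-30):
    """Internal quantum efficiency for carrier density n (cm^-3).

    A: Shockley-Read-Hall, B: radiative, C: Auger coefficient. Only the
    radiative B*n^2 channel emits light, so IQE is its share of the total.
    """
    radiative = B * n * n
    total = A * n + radiative + C * n ** 3
    return radiative / total
```

Scanning `iqe` over carrier density reproduces the characteristic shape: efficiency rises, peaks, then droops once the cubic Auger term takes over.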
Abstract:
Almost everyone sketches. People use sketches day in and day out in many different fields, for example to share their thoughts and clarify ambiguous interpretations. The media used for sketching range from analog tools like flipcharts to digital tools like smartboards. Whereas analog tools usually suffer from insufficient editing capabilities like cut/copy/paste, digital tools support these scenarios well. Digital tools can be grouped into informal and formal tools. Informal tools can be understood as simple drawing environments, whereas formal tools offer sophisticated support for creating, optimizing and validating diagrams of a certain application domain. Most digital formal tools force users to stick to a concrete syntax and editing workflow, limiting the user's creativity. For that reason, many people first sketch their ideas using the flexibility of analog or digital informal tools, and the sketch is subsequently "portrayed" in an appropriate digital formal tool. This work presents Scribble, a highly configurable and extensible sketching framework that dynamically injects sketching features into existing graphical diagram editors based on Eclipse GEF. This combines the flexibility of informal tools with the power of formal tools at no extra cost: no additional code is required to augment a GEF editor with sophisticated sketching features. Scribble recognizes drawn elements as well as handwritten text and automatically generates the corresponding domain elements. A local training-data library is built dynamically by incrementally learning the shapes drawn by the user. Training data can be shared with others via the WebScribble web application, which was created as part of this work.
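The incremental shape-learning workflow described above can be illustrated with a hypothetical nearest-neighbour sketch; Scribble's actual recognizer is more sophisticated, and all names below are invented:

```python
# Hypothetical illustration of an incrementally growing shape library with
# nearest-neighbour recognition. Not Scribble's actual algorithm.

import math

library = []  # list of (label, feature_vector) pairs, grown incrementally

def learn(label, features):
    """Add one user-drawn shape (as a feature vector) to the local library."""
    library.append((label, features))

def recognize(features):
    """Return the label of the nearest stored shape, or None if untrained."""
    if not library:
        return None
    return min(library, key=lambda entry: math.dist(entry[1], features))[0]
```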
Abstract:
This thesis presents a new actuator system consisting of a micro-actuator and a macro-actuator coupled in parallel via a compliant transmission. The system is called the Parallel Coupled Micro-Macro Actuator, or PaCMMA. In this system, the micro-actuator is capable of high-bandwidth force control due to its low mass and direct-drive connection to the output shaft. The compliant transmission of the macro-actuator reduces the impedance (stiffness) at the output shaft and increases the dynamic range of force. Performance improvement over single-actuator systems was expected in force control, impedance control, force distortion and reduction of transient impact forces. A set of quantitative measures is proposed and the actuator system is evaluated against them: Force Control Bandwidth, Position Bandwidth, Dynamic Range, Impact Force, Impedance ("Backdrivability"), Force Distortion and Force Performance Space. Several theoretical performance limits are derived from the saturation limits of the system. A control law is proposed and control system performance is compared to the theoretical limits. A prototype testbed was built using permanent magnet motors, and an experimental comparison was performed between this actuator concept and two single-actuator systems. The following performance was observed: force bandwidth of 56 Hz, torque dynamic range of 800:1, peak torque of 1040 mNm, minimum torque of 1.3 mNm. Peak impact force was reduced by an order of magnitude. Distortion at small amplitudes was reduced substantially. Backdriven impedance was reduced by 2-3 orders of magnitude. This actuator system shows promise for manipulator design as well as psychophysical tests of human performance.
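The quoted torque dynamic range follows directly from the peak and minimum torque figures:

```python
# Quick check (illustrative only) of the dynamic-range figure quoted above:
# torque dynamic range = peak torque / minimum resolvable torque.

peak_torque_mNm = 1040.0
min_torque_mNm = 1.3
dynamic_range = peak_torque_mNm / min_torque_mNm  # the quoted 800:1
```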
Abstract:
The network theory of Johanson and Mattson (1988) explains how small firms, also known as SMEs, use business networks to develop their internationalization processes. Through networks they can overcome their size limitations and attain a certain fluidity and dynamism in their management, in order to take advantage of the benefits of internationalization. By developing and strengthening relationships within the network, the organization can position itself in an ever stronger competitive position (Jarillo, 1988). According to Forsgren and Johanson (1992), it is important for managers to coordinate the interaction between the different actors in the network, since through these interactions their position within the network improves and the flow of resources increases. The purpose of this work is to analyze, from a cultural perspective, the internationalization model, according to network theory, of e-Tech Simulation, a North American "born global" SME. This company has minimized its internationalization risk through the development of agreements between the different actors. By improving its position within the network, that is, by further strengthening existing ties and creating new relationships, the company has obtained greater benefits from the network and has managed to be even more flexible with its clients. Based on this analysis, a series of recommendations was proposed to improve negotiation processes within the network in a cultural context. The analysis also showed the importance of the manager's entrepreneurship in internationalization processes, as well as the ability to combine resources obtained from different international markets to satisfy customer needs.
Abstract:
The application of polymer-matrix composite materials reinforced with long fibres (FRP, Fiber Reinforced Plastic) is growing steadily thanks to their good specific properties and design flexibility. One of the largest consumers is the aerospace industry, since the application of these materials has clear economic and environmental benefits. When composite materials are used in structural components, a design programme is started that combines real tests and analysis techniques. The development of reliable analysis tools that make it possible to understand the mechanical behaviour of the structure, and to replace many, though not all, of the real tests, is of clear interest. Susceptibility to damage from out-of-plane impact loads is one of the most important aspects taken into account during the design of composite structures. The lack of knowledge about the effects of impact on these structures is a factor limiting the use of these materials. The development of virtual mechanical test models to analyze the impact resistance of a structure is therefore of great interest, and even more so the prediction of the residual strength after impact. In this regard, the present work covers a wide range of analyses of low-velocity impact events on monolithic, flat, rectangular laminated composite plates with conventional stacking sequences. Since the main objective of this work is the prediction of the residual compressive strength, different tasks are carried out to support the proper analysis of the problem.
The topics developed are: the analytical description of the impact, the design and execution of an experimental test plan, the formulation and implementation of constitutive models to describe the material behaviour, and the development of virtual tests based on finite-element models that use the implemented constitutive models.
Abstract:
The representation of the diurnal cycle in the Hadley Centre climate model is evaluated using simulations of the infrared radiances observed by Meteosat 7. In both the window and water vapour channels, the standard version of the model with 19 levels produces a good simulation of the geographical distributions of the mean radiances and of the amplitude of the diurnal cycle. Increasing the vertical resolution to 30 levels leads to further improvements in the mean fields. The timing of the maximum and minimum radiances reveals significant model errors, however, which are sensitive to the frequency with which the radiation scheme is called. In most regions, these errors are consistent with well documented errors in the timing of convective precipitation, which peaks before noon in the model, in contrast to the observed peak in the late afternoon or evening. When the radiation scheme is called every model time step (half an hour), as opposed to every three hours in the standard version, the timing of the minimum radiance is improved for convective regions over central Africa, due to the creation of upper-level layer-cloud by detrainment from the convection scheme, which persists well after the convection itself has dissipated. However, this produces a decoupling between the timing of the diurnal cycles of precipitation and window channel radiance. The possibility is raised that a similar decoupling may occur in reality and the implications of this for the retrieval of the diurnal cycle of precipitation from infrared radiances are discussed.
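One common way to quantify the timing errors discussed above is to fit the 24-hour harmonic to an hourly series and take its phase; a generic sketch, using invented data rather than Meteosat radiances:

```python
# Generic sketch of diurnal-cycle timing extraction: project a 24-sample
# hourly series onto its first (24-hour) harmonic and return the hour at
# which that harmonic peaks. Illustrative only, not the study's method.

import math

def diurnal_phase_hours(series):
    """Hour (0-24) of the first-harmonic maximum of a 24-sample series."""
    a = sum(v * math.cos(2 * math.pi * h / 24) for h, v in enumerate(series))
    b = sum(v * math.sin(2 * math.pi * h / 24) for h, v in enumerate(series))
    phase = math.atan2(b, a)  # radians; series ~ cos(2*pi*t/24 - phase)
    return (phase * 24 / (2 * math.pi)) % 24
```

Applied separately to modelled and observed radiance composites, the difference of the two phases gives the timing error in hours.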
Abstract:
Monthly mean water vapour and clear-sky radiation extracted from the European Centre for Medium-Range Weather Forecasts 40-year reanalysis (ERA40) forecasts are assessed using satellite observations and additional reanalysis data. There is a marked improvement in the interannual variability of column-integrated water vapour (CWV) over the oceans when using the 24-hour forecasts compared with the standard 6-hour forecast products. The spatial distribution of CWV is well simulated by the 6-hour forecasts; using the 24-hour forecasts does not degrade this simulation substantially and in many cases improves its quality. There is also an improved simulation of clear-sky radiation in the 24-hour forecasts, based on comparison with satellite observations and empirical estimates. Further work is required to assess the quality of the water vapour simulation by reanalyses over land regions. Over the oceans, it is recommended that the 24-hour forecasts of CWV and clear-sky radiation be used in preference to the standard 6-hour forecast products from ERA40.
Abstract:
The impact of systematic model errors on a coupled simulation of the Asian Summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the GCM. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the GCM, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Nino. In part this is related to changes in the characteristics of El Nino, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.
Abstract:
The long-term stability, high accuracy, all-weather capability, high vertical resolution, and global coverage of Global Navigation Satellite System (GNSS) radio occultation (RO) suggest that it is a promising tool for global monitoring of atmospheric temperature change. With the aim of investigating and quantifying how well a GNSS RO observing system can detect climate trends, we are currently performing a (climate) observing system simulation experiment over the 25-year period 2001 to 2025, which involves quasi-realistic modeling of the neutral atmosphere and the ionosphere. We carried out two climate simulations with the general circulation model MAECHAM5 (Middle Atmosphere European Centre/Hamburg Model Version 5) of the MPI-M Hamburg, covering the period 2001-2025: one control run with natural variability only, and one run also including anthropogenic forcings due to greenhouse gases, sulfate aerosols, and tropospheric ozone. On this basis, we perform quasi-realistic simulations of RO observables for a small GNSS receiver constellation (six satellites), state-of-the-art data processing for atmospheric profile retrieval, and a statistical analysis of temperature trends in both the "observed" climatology and the "true" climatology. Here we describe the setup of the experiment and results from a test bed study conducted to obtain a basic set of realistic estimates of observational errors (instrument- and retrieval-processing-related errors) and sampling errors (due to spatial-temporal undersampling). The test bed results, obtained for a typical summer season and compared to the climatic 2001-2025 trends from the MAECHAM5 simulation including anthropogenic forcing, were found encouraging for performing the full 25-year experiment.
They indicated that observational and sampling errors (each contributing about 0.2 K) are consistent with recent estimates of these errors from real RO data, and that they should be sufficiently small for monitoring the expected temperature trends in the global atmosphere over the next 10 to 20 years in most regions of the upper troposphere and lower stratosphere (UTLS). Inspection of the MAECHAM5 trends in different RO-accessible atmospheric parameters (microwave refractivity and pressure/geopotential height in addition to temperature) indicates complementary climate-change sensitivity in different regions of the UTLS, so that optimized climate monitoring should combine information from all climatic key variables retrievable from GNSS RO data.
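Assuming the observational and sampling errors are independent, the two roughly 0.2 K contributions quoted above combine in quadrature:

```python
# Illustrative error-budget arithmetic for the figures quoted above. The
# assumption of uncorrelated (quadrature-summed) errors is ours.

import math

obs_error_K = 0.2
sampling_error_K = 0.2
total_error_K = math.hypot(obs_error_K, sampling_error_K)  # ~0.28 K
```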
Abstract:
Determining how El Niño and its impacts may change over the next 10 to 100 years remains a difficult scientific challenge. Ocean–atmosphere coupled general circulation models (CGCMs) are routinely used both to analyze El Niño mechanisms and teleconnections and to predict its evolution on a broad range of time scales, from seasonal to centennial. The ability to simulate El Niño as an emergent property of these models has largely improved over the last few years. Nevertheless, the diversity of model simulations of present-day El Niño indicates current limitations in our ability to model this climate phenomenon and to anticipate changes in its characteristics. A review of the several factors that contribute to this diversity, as well as potential means to improve the simulation of El Niño, is presented.
Abstract:
Despite its relevance to a wide range of technological and fundamental areas, a quantitative understanding of protein surface clustering dynamics is often lacking. In inorganic crystal growth, surface clustering of adatoms is well described by diffusion-aggregation models, in which the statistical properties of the aggregate arrays often reveal the molecular-scale aggregation processes. We investigate the potential of these theories to reveal hitherto hidden facets of protein clustering by carrying out concomitant observations of lysozyme adsorption onto mica surfaces, using atomic force microscopy, and Monte Carlo simulations of cluster nucleation and growth. We find that lysozyme clusters diffuse across the substrate at a rate that varies inversely with size. This result points to the molecular-scale mechanisms responsible for the mobility of the proteins on the substrate. In addition, the surface diffusion coefficient of the monomer can be extracted from the comparison between experiments and simulations. While concentrating on a model system of lysozyme-on-mica, this 'proof of concept' study successfully demonstrates the potential of our approach to understand and influence more biomedically applicable protein-substrate couples.
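The size-dependent cluster mobility reported above (diffusion rate inversely proportional to cluster size) can be sketched with a minimal Monte Carlo move rule; this illustration is not the study's simulation code:

```python
# Minimal Monte Carlo sketch (illustrative only) of diffusion with cluster
# mobility inversely proportional to size: a cluster of s monomers attempts
# a lattice hop with probability 1/s per sweep.

import random

def step_clusters(clusters, box=100):
    """One sweep over clusters given as (x, y, size) on a periodic lattice."""
    moved = []
    for x, y, s in clusters:
        if random.random() < 1.0 / s:       # size-dependent mobility, D ~ 1/s
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = (x + dx) % box, (y + dy) % box
        moved.append((x, y, s))
    return moved
```

A full diffusion-aggregation model would additionally merge clusters that land on adjacent sites; the move rule above is the part that encodes the inverse size dependence.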