940 results for "global nonhydrostatic model"
Abstract:
In many problems in spatial statistics it is necessary to infer a global problem solution by combining local models. A principled approach to this problem is to develop a global probabilistic model for the relationships between local variables and to use this as the prior in a Bayesian inference procedure. We show how a Gaussian process with hyper-parameters estimated from Numerical Weather Prediction Models yields meteorologically convincing wind fields. We use neural networks to make local estimates of wind vector probabilities. The resulting inference problem cannot be solved analytically, but Markov Chain Monte Carlo methods allow us to retrieve accurate wind fields.
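The approach described, combining local probabilistic estimates under a global prior and sampling the posterior with MCMC, can be sketched in miniature. The sketch below is not the paper's method: it uses a simple first-difference smoothness penalty as a stand-in for the full Gaussian process prior, hypothetical local means and standard deviations in place of neural-network wind estimates, and a basic Metropolis sampler.

```python
import math
import random

random.seed(0)

# Hypothetical local (neural-network-style) estimates: at each of 4 sites,
# a Gaussian likelihood for one wind component (mean, std).
local_mean = [2.0, 2.5, 4.0, 3.5]
local_std = [1.0, 0.8, 1.2, 1.0]
LAMBDA = 4.0  # weight of the smoothness prior (stand-in for the GP)

def log_post(u):
    # local likelihoods ...
    lp = sum(-0.5 * ((ui - m) / s) ** 2
             for ui, m, s in zip(u, local_mean, local_std))
    # ... plus a first-difference smoothness prior
    lp += sum(-0.5 * LAMBDA * (u[i + 1] - u[i]) ** 2
              for i in range(len(u) - 1))
    return lp

u = local_mean[:]  # start the chain at the local estimates
samples = []
for step in range(20000):
    i = random.randrange(len(u))
    prop = u[:]
    prop[i] += random.gauss(0.0, 0.3)  # random-walk proposal at one site
    if math.log(random.random()) < log_post(prop) - log_post(u):
        u = prop
    if step >= 5000:  # discard burn-in
        samples.append(u[:])

post_mean = [sum(s[i] for s in samples) / len(samples) for i in range(4)]
```

The posterior means are pulled toward neighbouring sites by the smoothness term, which is the qualitative effect the global prior has on the local wind estimates.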
Abstract:
Solving many scientific problems requires effective regression and/or classification models for large high-dimensional datasets. Experts from these problem domains (e.g. biologists, chemists, financial analysts) have insights into the domain which can be helpful in developing powerful models, but they need a modelling framework that helps them to use these insights. Data visualisation is an effective technique for presenting data and eliciting feedback from the experts. A single global regression model can rarely capture the full behavioural variability of a huge multi-dimensional dataset. Instead, local regression models, each focused on a separate area of input space, often work better since the behaviour of different areas may vary. Classical local models such as Mixture of Experts segment the input space automatically, which is not always effective, and they lack the involvement of domain experts needed to guide a meaningful segmentation of the input space. In this paper we address this issue by allowing domain experts to interactively segment the input space using data visualisation. The resulting segmentation is then used to develop effective local regression models.
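The idea of expert-guided local models can be illustrated with a minimal sketch. The threshold at x = 5, the piecewise data, and the two linear models below are all hypothetical stand-ins for the interactive, visualisation-based segmentation the paper describes.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical data whose behaviour changes at x = 5
data = [(x, x) for x in range(0, 5)] + \
       [(x, 10 - 2 * (x - 5)) for x in range(5, 11)]

# Expert-defined segmentation of the input space (here: a simple threshold)
segments = {"low": [p for p in data if p[0] < 5],
            "high": [p for p in data if p[0] >= 5]}

# One local regression model per segment
models = {name: fit_line([x for x, _ in seg], [y for _, y in seg])
          for name, seg in segments.items()}

def predict(x):
    """Route a query to the local model of its segment."""
    a, b = models["low"] if x < 5 else models["high"]
    return a + b * x
```

A single global line would fit these data poorly; the two local models each recover their segment's behaviour exactly, which is the motivation for expert-guided segmentation.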
Abstract:
Several levels of complexity are available for the modelling of wastewater treatment plants. Modelling local effects relies on computational fluid dynamics (CFD) approaches, whereas activated sludge models (ASM) represent the global methodology. By applying both modelling approaches to pilot-plant and full-scale systems, this paper evaluates the value of each method and especially their potential combination. Model structure identification for ASM is discussed based on the modelling of a full-scale closed-loop oxidation ditch. It is illustrated how, and in what circumstances, information obtained via CFD analysis, residence time distribution (RTD) and other experimental means can be used. Furthermore, CFD analysis of the multiphase flow mechanisms is employed to obtain a correct description of the oxygenation capacity of the system studied, including an easy implementation of this information in classical ASM modelling (e.g. oxygen transfer). The combination of CFD and activated sludge modelling of wastewater treatment processes is applied to three reactor configurations: a perfectly mixed reactor, a pilot-scale activated sludge basin (ASB) and a real-scale ASB. The application of the biological models to the CFD model is validated against experimentation for the pilot-scale ASB and against a classical global ASM model response. A first step in the evaluation of the potential of the combined CFD-ASM model is performed using a full-scale oxidation ditch system as the testing scenario.
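One common point of contact between CFD and global ASM-style modelling is the residence time distribution mentioned above. A standard compartment-model approximation (not necessarily the one used in this paper) is the tanks-in-series RTD, sketched below for n equal completely mixed reactors; theta is time normalised by the mean residence time.

```python
import math

def rtd_tanks_in_series(theta, n):
    """Dimensionless RTD E(theta) for n equal CSTRs in series:
    E(theta) = n^n * theta^(n-1) * exp(-n*theta) / (n-1)!"""
    return (n ** n) * theta ** (n - 1) * math.exp(-n * theta) / math.factorial(n - 1)

# Numerical sanity check: E integrates to ~1 and has dimensionless mean ~1
dt = 0.001
thetas = [i * dt for i in range(1, 20000)]
area = sum(rtd_tanks_in_series(t, 4) * dt for t in thetas)
mean = sum(t * rtd_tanks_in_series(t, 4) * dt for t in thetas)
```

Fitting n to an RTD measured experimentally or extracted from a CFD tracer simulation gives a compact mixing description that an ASM-type global model can use directly.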
Abstract:
Numerous debates, campaigns, and analyses around the Transatlantic Trade and Investment Partnership (TTIP) attempt to persuade or alarm the public. In this study the author explores the economic-mathematical models on which estimates of TTIP's economic effects are based. The study reviews the applicability and economic content of computable general equilibrium (CGE) models and how they have been adapted in analyses focusing on TTIP's growth effects. It critically examines the limitations of the CGE model, evaluates the attempt to apply the Global Policy Model as an alternative approach, and compares and assesses the various TTIP analyses produced within the CGE framework.
Abstract:
Shipboard power systems have different characteristics from utility power systems. In a shipboard power system it is crucial that the systems and equipment work at their peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware in the loop (HIL). Real-time simulation can be achieved using a multi-distributed modeling concept, in which the global system model is distributed over several processors through a communication link. The advantage of this approach is that it permits a gradual change from pure simulation to actual application. In order to perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low- and high-frequency studies. Low-frequency studies are used to examine the behavior of shipboard power systems under load switching and faults. High-frequency studies were used to predict abnormal conditions due to overvoltage and the harmonic behavior of components. Different experiments were conducted to validate the developed models, and the simulation and experimental results show excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis. This technique is crucial for fault detection in shipboard power systems because comprehensive fault test databases are lacking. A wavelet-based methodology for feature extraction from shipboard power system current signals was developed for harmonic and fault diagnosis studies.
This modeling methodology can be utilized to evaluate and predict the future behavior of NPS components at the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before it is integrated into the system.
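A wavelet feature-extraction step of the kind described can be sketched with a hand-rolled Haar transform. The wavelet family and the features actually used in the work are not specified here; the signal values and the fault-like transient below are hypothetical.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform (even-length input)."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def detail_energy(signal, levels=3):
    """Energy of detail coefficients per level: a simple diagnostic feature."""
    feats = []
    a = list(signal)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(sum(x * x for x in d))
    return feats

# A smooth sinusoidal "current" vs. the same current with a short
# step transient (a crude stand-in for a fault signature)
clean = [math.sin(2 * math.pi * i / 32) for i in range(256)]
faulty = [v + (0.8 if 101 <= i < 110 else 0.0) for i, v in enumerate(clean)]
```

The sharp edges of the transient show up as extra energy in the fine-scale detail coefficients, which is the basic reason wavelets work well for fault detection in current signals.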
Abstract:
This data set contains LPJ-LMfire dynamic global vegetation model output covering Europe and the Mediterranean for the Last Glacial Maximum (LGM; 21 ka) and for a preindustrial control simulation (20th-century detrended climate). The netCDF data files are time averages of the final 30 years of the model simulation. Each netCDF file contains four or five variables: fractional cover of 9 plant functional types (PFTs; cover), total fractional coverage of trees (treecover), population density of hunter-gatherers (foragerPD; only for the "people" simulations), fraction of the gridcell burned on 30-year average (burnedf), and vegetation net primary productivity (NPP). The model spatial resolution is 0.5 degrees. For the LGM simulations, LPJ-LMfire was driven by the PMIP3 suite of eight GCMs for which LGM climate simulations were available. Also provided in this archive is the result of an LPJ-LMfire run that was forced by the average climate of all GCMs (the "GCM-mean" files), and the average of the individual LPJ-LMfire runs over the eight LGM scenarios (the "LPJ-mean" files). Model simulations are provided that include the influence of human presence on the landscape (the "people" files) and in a "world without humans" scenario (the "natural" files). Finally, this archive contains the preindustrial reference simulation with and without human influence ("PI_reference_people" and "PI_reference_nat", respectively). There are therefore 22 netCDF files in this archive: 8 each of LGM simulations with and without people (16 in total), the "GCM mean" simulations (2 files), the "LPJ mean" aggregates (2 files), and the two preindustrial "control" simulations ("PI"), with and without humans (2 files).
In addition to the LPJ-LMfire model output (netCDF files), this archive also contains a table of arboreal pollen percentages calculated from pollen samples dated to the LGM at sites throughout the study region (lgmAP.txt), and a table containing the locations of archaeological sites dated to the LGM (LGM_archaeological_site_locations.txt).
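The "time averages of the final 30 years" structure of the files can be illustrated with a toy averaging step. In practice one would read the netCDF variables with a library such as netCDF4 or xarray, but the reduction itself is just a mean over the time axis; the grid size and values below are hypothetical.

```python
import random

random.seed(1)

# Toy "model output": 30 yearly NPP fields on a 2x3 grid (hypothetical values)
years, nlat, nlon = 30, 2, 3
npp = [[[random.uniform(0.4, 0.6) for _ in range(nlon)]
        for _ in range(nlat)]
       for _ in range(years)]

def time_average(cube):
    """Average a [time][lat][lon] cube over the time axis,
    returning a [lat][lon] field."""
    t = len(cube)
    return [[sum(cube[k][i][j] for k in range(t)) / t
             for j in range(len(cube[0][0]))]
            for i in range(len(cube[0]))]

npp_mean = time_average(npp)  # the kind of field stored in each netCDF file
```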
Abstract:
A high-resolution paleomagnetic and rock magnetic study has been carried out on sediment cores collected from glaciomarine silty-clay sequences on the continental shelf and slope of the southern Storfjorden trough-mouth fan, on the northwestern Barents Sea continental margin. The Storfjorden sedimentary system was investigated during the SVAIS and EGLACOM cruises, when 10 gravity cores, with lengths varying from 1.03 m to 6.41 m, were retrieved. Accelerator mass spectrometry (AMS) 14C analyses on 24 samples indicate that the cores span a time interval that includes the Holocene, the last deglaciation phase and, in some cores, the last glacial maximum. The sediments carry a well-defined characteristic remanent magnetization and have valuable potential for reconstructing the paleosecular variation (PSV) of the geomagnetic field, including relative paleointensity (RPI) variations. The paleomagnetic data allow reconstruction of the past dynamics and amplitude of geomagnetic field variations at high northern latitudes (75°-76° N). At the same time, the rock magnetic and paleomagnetic data allow a high-resolution correlation of the sedimentary sequences and a refinement of their preliminary age models. The Holocene PSV and RPI records appear particularly sound, since they are consistent between cores and can be correlated to the closest regional stacked curves (UK PSV, FENNOSTACK and FENNORPIS) and to the global geomagnetic model for the last 7 ka (CALS7k.2). The computed amplitude of secular variation is lower than that outlined by some geomagnetic field models, suggesting that it has been almost independent of latitude during the Holocene.
Abstract:
The oceanic carbon cycle mainly comprises the production and dissolution/preservation of carbonate particles in the water column or within the sediment. Carbon dioxide is one of the major controlling factors for the production and dissolution of carbonate. There is a steady exchange between the ocean and atmosphere in order to achieve an equilibrium of CO2; an anthropogenic rise of CO2 in the atmosphere would therefore also increase the amount of CO2 in the ocean. The increased amount of CO2 in the ocean, due to increasing CO2 emissions into the atmosphere since the industrial revolution, has been termed "ocean acidification" (Caldeira and Wickett, 2003). Its alarming effects, such as dissolution and reduced CaCO3 formation, on reefs and other carbonate-shell-producing organisms form the topic of current discussions (Kolbert, 2006). Decreasing temperatures and increasing pressure and CO2 enhance the dissolution of carbonate particles at the sediment-water interface in the deep sea. Moreover, dissolution processes depend on the saturation state of the surrounding water with respect to calcite or aragonite. Significantly increased dissolution has been observed below the aragonite or calcite chemical lysocline; below the aragonite compensation depth (ACD) or calcite compensation depth (CCD), all aragonite or calcite particles, respectively, are dissolved. Aragonite, which is more prone to dissolution than calcite, features a shallower lysocline and compensation depth than calcite. In the 1980s it was suggested that significant dissolution also occurs in the water column or at the sediment-water interface above the lysocline. Unknown quantities of carbonate produced at the sea surface would be dissolved by this process. This would affect the calculation of carbonate production and the entire carbonate budget of the world's ocean.
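The saturation-state control on dissolution mentioned above is conventionally written as Omega = [Ca2+][CO3 2-]/Ksp, with dissolution favoured where Omega < 1. A one-line sketch with illustrative (not measured) values:

```python
def saturation_state(ca, co3, ksp):
    """Omega = ion concentration product / stoichiometric solubility product.
    Omega > 1: supersaturated (preservation favoured); Omega < 1: dissolution."""
    return (ca * co3) / ksp

# Illustrative seawater-like values only (mol/kg for the ions; the Ksp
# value is a rough order-of-magnitude placeholder, not a measured constant)
omega_calcite = saturation_state(0.0103, 90e-6, 4.3e-7)
```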
Following this assumption, a number of studies have been carried out to monitor supralysoclinal dissolution at various locations: at Ceara Rise in the western equatorial Atlantic (Martin and Sayles, 1996), in the Arabian Sea (Milliman et al., 1999), in the equatorial Indian Ocean (Peterson and Prell, 1985; Schulte and Bard, 2003), and in the equatorial Pacific (Kimoto et al., 2003). Despite the evidence for supralysoclinal dissolution in some areas of the world's ocean, the question remains whether dissolution occurs above the lysocline throughout the entire ocean. The first part of this thesis seeks answers to this question, based on the global budget model of Milliman et al. (1999). The Bahamas and Florida Straits are a most suitable study area because of the high carbonate production and because the lysocline there is deeper than anywhere else worldwide. To monitor the occurrence of supralysoclinal dissolution, the preservation of aragonitic pteropod shells was determined using the Limacina inflata Dissolution Index (LDX; Gerhardt and Henrich, 2001). Analyses of the grain-size distribution, the mineralogy, and the foraminifera assemblage revealed further aspects of the preservation state of the sediment. All samples located on the Bahamian platform are well preserved. In contrast, the samples from the Florida Straits show dissolution at 800 to 1000 m and below 1500 m water depth. Degradation of organic material and the subsequent release of CO2 probably cause supralysoclinal dissolution. A northward extension of the corrosive Antarctic Intermediate Water (AAIW) flows through the Caribbean Sea into the Gulf of Mexico and might enhance dissolution processes at around 1000 m water depth. The second part of this study deals with the preservation of Pliocene to Holocene carbonate sediments from both the windward and leeward basins adjacent to Great Bahama Bank (Ocean Drilling Program Sites 632, 633, and 1006).
Detailed census counts of the sand fraction (250-500 µm) show the general composition of the coarse-grained sediment. Further methods used to examine the preservation state of carbonates include the amount of organic carbon and various dissolution indices, such as the LDX and the Fragmentation Index. Carbonate concretions (nodules) have been observed in the sand fraction. They are similar to the concretions or aggregates previously mentioned by Mullins et al. (1980a) and Droxler et al. (1988a), respectively. Nonetheless, a detailed study of such grains has not been made to date, although they form an important part of periplatform sediments. Stable isotope measurements of the nodules' matrix confirm previous suggestions that the nodules formed in situ as a result of early diagenetic processes (Mullins et al., 1980a). The two cores located in Exuma Sound (Sites 632 and 633), at the eastern margin of Great Bahama Bank (GBB), show an increasing amount of nodules with increasing core depth. In Pliocene sediments, the amount of nodules may rise up to 100%. In contrast, nodules only occur within glacial stages in the deeper part of the studied core interval (between 30 and 70 mbsf) at Site 1006 on the western margin of GBB. Above this level the sediment is constantly being flushed by bottom water, which might also contain corrosive AAIW and thus hinder cementation. Fine carbonate particles (<63 µm) form the matrix of the nodules and therefore do not contribute to the fine fraction. At the same time, the amount of the coarse fraction (>63 µm) increases due to nodule formation. The formation of nodules might therefore significantly alter the grain-size distribution of the sediment. A direct comparison of the amount of nodules with the grain-size distribution shows that core intervals with high amounts of nodules are indeed coarser than intervals with low amounts of nodules.
On the other hand, an initially coarser sediment might facilitate the formation of nodules, as high porosity and permeability enhance early diagenetic processes (Westphal et al., 1999). This suggestion was also confirmed: the glacial intervals at Site 1006 are interpreted to have already been rather coarse prior to the formation of nodules. This assumption is based on the grain-size distribution in the upper part of the core, which is not yet affected by diagenesis but also shows coarser sediment during the glacial stages. As expected, the coarser glacial deposits in the lower part of the core show the highest amounts of nodules. The same effect was observed at Site 632, where turbidites cause distinct coarse layers and reveal higher amounts of nodules than non-turbiditic sequences. Site 633 shows a different pattern: both the amount of nodules and the coarseness of the sediment steadily increase with increasing core depth. Based on these sedimentological findings, the following model has been developed: a grain-size pattern characterised by prominent coarse peaks (as observed at Sites 632 and 1006) is barely altered. The greatest coarsening effect due to nodule formation will occur in those layers which were initially coarser than the adjacent sediment intervals. In this case, the overall trend of the grain-size pattern is similar before and after the formation of the nodules. Although the sediment is altered by diagenetic processes, grain size could still be used as a proxy for, e.g., changes in the bottom-water current. The other case described in the model is based on a uniform initial grain-size distribution, as observed at Site 633. Here, the nodules reflect the increasing diagenetic alteration with increasing core depth rather than the initial grain-size pattern. In this latter scenario, the overall grain-size trend is significantly changed, which makes grain size unreliable as a proxy for palaeoenvironmental changes.
The results of this study contribute to the understanding of general sedimentation processes in the periplatform realm: the preservation state of surface samples shows the influence of supralysoclinal dissolution due to the degradation of organic matter and due to the presence of corrosive water masses; the composition of the sand fraction shows the alteration of the carbonate sediment due to early diagenetic processes. However, open questions are how and when the alteration processes occur and how geochemical parameters, such as the rise in alkalinity or the amount of strontium, are linked to them. These geochemical parameters might reveal more information about the depth in the sediment column, where dissolution and cementation processes occur.
Abstract:
This data set contains the inputs and results of the REDD+ Policy Assessment Centre (REDD-PAC) project (http://www.redd-pac.org), developed by a consortium of research institutes (IIASA, INPE, IPEA, UNEP-WCMC) and supported by Germany's International Climate Initiative. Taking a new land use map of Brazil for 2000 as input, the research team used the global economic model GLOBIOM to project land use changes in Brazil up to 2050. Model projections show that Brazil has the potential to balance its goals of protecting the environment and becoming a major global producer of food and biofuels. The model results were taken into account by Brazilian decision-makers when developing the country's intended nationally determined contribution (INDC).
Abstract:
Spain has the opportunity to play an important role in the internationalisation of Chinese companies towards Europe and Latin America, thanks to the long-standing experience of Spanish multinationals in these regions. Since diplomatic relations between Spain and China are on good terms, if Spain takes advantage of its privileged position relative to the other European economies, the Asian giant could be interested in Spain's support in Latin America, thus fostering cooperation initiatives among the three poles. The key to building a triangular win-win relationship lies in the local partner that Chinese companies require in order to deepen their international deployment in Latin America, together with the expertise and know-how needed in operating processes, areas in which Spanish companies are best positioned.
Abstract:
Tropospheric ozone (O3) adversely affects human health, reduces crop yields, and contributes to climate forcing. To limit these effects, the processes controlling O3 abundance as well as that of its precursor molecules must be fully characterized. Here, I examine three facets of O3 production, in both heavily polluted and remote environments. First, using in situ observations from the DISCOVER-AQ field campaign in the Baltimore/Washington region, I evaluate the emissions of the O3 precursors CO and NOx (NOx = NO + NO2) in the National Emissions Inventory (NEI). I find that CO/NOx emissions ratios derived from observations are 21% higher than those predicted by the NEI. Comparisons to output from the CMAQ model suggest that CO in the NEI is accurate within 15 ± 11%, while NOx emissions are overestimated by 51-70%, likely due to errors in mobile sources. These results imply that ambient ozone concentrations will respond more efficiently to NOx controls than current models suggest. I then investigate the source of high-O3 and low-H2O structures in the Tropical Western Pacific (TWP). A combination of in situ observations, satellite data, and models shows that the high O3 results from photochemical production in biomass burning plumes from fires in tropical Southeast Asia and Central Africa; the low relative humidity results from large-scale descent in the tropics. Because these structures have frequently been attributed to mid-latitude pollution, biomass burning in the tropics likely contributes more to the radiative forcing of climate than previously believed. Finally, I evaluate the processes controlling formaldehyde (HCHO) in the TWP. Convective transport of near-surface HCHO leads to a 33% increase in upper tropospheric HCHO mixing ratios; convection also likely increases upper tropospheric CH3OOH to ~230 pptv, enough to maintain background HCHO at ~75 pptv.
The long-range transport of polluted air, with NO four times the convectively controlled background, intensifies the conversion of HO2 to OH, increasing OH by a factor of 1.4. Comparisons between the global chemistry model CAM-Chem and observations show that consistent underestimates of HCHO by CAM-Chem throughout the troposphere result from underestimates in both NO and acetaldehyde.
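An observation-derived CO/NOx emissions ratio of the kind evaluated above is typically estimated as the slope of CO enhancements regressed on NOx enhancements. A minimal sketch with hypothetical numbers (not the campaign's data) shows how such a slope is compared against an inventory ratio:

```python
def slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical enhancements above background (ppbv): CO roughly 8x NOx
nox = [1.0, 2.0, 3.0, 4.0, 5.0]
co = [8.4, 15.8, 24.1, 32.2, 39.9]

obs_ratio = slope(nox, co)   # observation-derived CO/NOx emissions ratio
inventory_ratio = 6.6        # hypothetical inventory value for comparison
bias = (obs_ratio - inventory_ratio) / inventory_ratio  # fractional discrepancy
```

A positive bias of this kind, if CO emissions are independently known to be accurate, points to overestimated NOx in the inventory, which is the logic of the analysis described.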
Abstract:
Executing a cloud or aerosol physical properties retrieval algorithm on controlled synthetic data is an important step in retrieval algorithm development. Synthetic data can help answer questions about the sensitivity and performance of the algorithm, or aid in determining how an existing retrieval algorithm may perform with a planned sensor. Synthetic data can also help in solving issues that may have surfaced in the retrieval results. Synthetic data become very important when other validation methods, such as field campaigns, are of limited scope: these tend to be of relatively short duration and are often costly, and ground stations have limited spatial coverage, while synthetic data can cover large spatial and temporal scales and a wide variety of conditions at low cost. In this work I develop an advanced cloud and aerosol retrieval simulator for the MODIS instrument, known as the Multi-sensor Cloud and Aerosol Retrieval Simulator (MCARS). In close collaboration with the modeling community, I have seamlessly combined the GEOS-5 global climate model and the DISORT radiative transfer code, widely used by the remote sensing community, with observations from the MODIS instrument to create the simulator. With the MCARS simulator it was then possible to solve the long-standing issue of MODIS aerosol optical depth retrievals having a low bias for smoke aerosols: the MODIS aerosol retrieval did not account for the effects of humidity on smoke aerosols. The MCARS simulator also revealed a previously unrecognized issue, namely that the value of the fine mode fraction could create a linear dependence between retrieved aerosol optical depth and land surface reflectance. MCARS provided the ability to examine aerosol retrievals against "ground truth" for hundreds of thousands of simultaneous samples over an area covered by only three AERONET ground stations.
Findings from MCARS are already being used to improve the performance of operational MODIS aerosol properties retrieval algorithms. The modeling community will use the MCARS data to create new parameterizations for aerosol properties as a function of properties of the atmospheric column and gain the ability to correct any assimilated retrieval data that may display similar dependencies in comparisons with ground measurements.
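A dependence check of the kind MCARS enables, testing whether retrieval error varies systematically with surface reflectance, can be sketched as a simple correlation test; the sample values below are hypothetical, not MCARS output:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical samples: retrieved-minus-true AOD error vs. surface reflectance
reflectance = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
aod_error = [0.01, 0.03, 0.02, 0.05, 0.06, 0.08]

r = pearson_r(reflectance, aod_error)  # near 1 indicates a linear dependence
```

In a simulator setting the "true" state is known exactly, so a strong correlation like this one flags an unwanted dependence in the retrieval rather than a real geophysical signal.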
Abstract:
For climate risk management, cumulative distribution functions (CDFs) are an important source of information. They are ideally suited to compare probabilistic forecasts of primary (e.g. rainfall) or secondary data (e.g. crop yields). Summarised as CDFs, such forecasts allow an easy quantitative assessment of possible alternative actions. Although the degree of uncertainty associated with CDF estimation could influence decisions, such information is rarely provided. Hence, we propose Cox-type regression models (CRMs) as a statistical framework for making inferences on CDFs in climate science. CRMs were designed for modelling probability distributions rather than just mean or median values. This makes the approach appealing for risk assessments, where probabilities of extremes are often more informative than measures of central tendency. CRMs are semi-parametric approaches originally designed for modelling risks arising from time-to-event data. Here we extend this original concept beyond time-dependent measures to other variables of interest. We also provide tools for estimating CDFs and surrounding uncertainty envelopes from empirical data. These statistical techniques intrinsically account for non-stationarities in time series that might be the result of climate change. This feature makes CRMs attractive candidates for investigating the feasibility of developing rigorous global circulation model (GCM)-CRM interfaces for the provision of user-relevant forecasts. To demonstrate the applicability of CRMs, we present two examples of El Niño/Southern Oscillation (ENSO)-based forecasts: the onset date of the wet season (Cairns, Australia) and total wet season rainfall (Quixeramobim, Brazil). This study emphasises the methodological aspects of CRMs rather than discussing the merits or limitations of the ENSO-based predictors.
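Fitting actual Cox-type regression models requires survival-analysis machinery, but the basic object discussed above, a CDF with a surrounding uncertainty envelope, can be sketched from empirical data alone. The sketch below uses the Dvoretzky-Kiefer-Wolfowitz (DKW) band as a simple stand-in for the model-based envelopes the paper proposes; the rainfall values are hypothetical.

```python
import math

def ecdf_with_band(data, alpha=0.05):
    """Empirical CDF plus a (1 - alpha) DKW confidence band.
    Returns (x, F(x), lower, upper) tuples at each sorted data point."""
    xs = sorted(data)
    n = len(xs)
    # DKW half-width: eps = sqrt(ln(2/alpha) / (2n))
    eps = math.sqrt(math.log(2 / alpha) / (2 * n))
    points = []
    for i, x in enumerate(xs, start=1):
        f = i / n
        points.append((x, f, max(0.0, f - eps), min(1.0, f + eps)))
    return points

# Hypothetical wet-season rainfall totals (mm)
rain = [310, 450, 520, 610, 700, 740, 820, 905, 990, 1120]
band = ecdf_with_band(rain)
```

With only 10 observations the band is wide, which illustrates the paper's point: decisions based on an estimated CDF should come with an explicit statement of its uncertainty.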