886 results for Nested Model Structure
Abstract:
Single-column models (SCMs) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the prescribed best-estimate large-scale observations. Errors in estimating the observations result in uncertainty in the modeled simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and two cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations, and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCMs and CRMs. Differences are also apparent between the models in the ensemble-mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to the forcing. The ensemble is further used to investigate relationships between cloud variables and precipitation, and it identifies differences between the CRMs and SCMs, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations, which enables a more complete model investigation than the more traditional single best-estimate simulation.
Abstract:
Highly heterogeneous mountain snow distributions strongly affect soil moisture patterns; local ecology; and, ultimately, the timing, magnitude, and chemistry of stream runoff. Capturing these vital heterogeneities in a physically based distributed snow model requires appropriately scaled model structures. This work looks at how model scale—particularly the resolutions at which the forcing processes are represented—affects simulated snow distributions and melt. The research area is in the Reynolds Creek Experimental Watershed in southwestern Idaho. In this region, where there is a negative correlation between snow accumulation and melt rates, overall scale degradation pushed simulated melt to earlier in the season. The processes mainly responsible for snow distribution heterogeneity in this region—wind speed, wind-affected snow accumulations, thermal radiation, and solar radiation—were also independently rescaled to test process-specific spatiotemporal sensitivities. It was found that in order to accurately simulate snowmelt in this catchment, the snow cover needed to be resolved to 100 m. Wind and wind-affected precipitation—the primary influence on snow distribution—required similar resolution. Thermal radiation scaled with the vegetation structure (~100 m), while solar radiation was adequately modeled with 100–250-m resolution. Spatiotemporal sensitivities to model scale were found that allowed for further reductions in computational costs through the winter months with limited losses in accuracy. It was also shown that these modeling-based scale breaks could be associated with physiographic and vegetation structures to aid a priori modeling decisions.
Abstract:
The structure of near-tropopause potential vorticity (PV) acts as a primary control on the evolution of extratropical cyclones. Diabatic processes, such as the latent heating found in ascending moist warm conveyor belts, modify PV. A dipole in diabatically generated PV (hereafter diabatic PV) straddling the extratropical tropopause, with the positive pole above the negative pole, was diagnosed in a recently published analysis of a simulated extratropical cyclone. This PV dipole has the potential to significantly modify the propagation of Rossby waves and the growth of baroclinically unstable waves. That previous analysis was based on a single case study simulated with 12-km horizontal grid spacing and parameterized convection. Here, the dipole is investigated in three additional cold-season extratropical cyclones simulated in both convection-parameterizing and convection-permitting model configurations. A diabatic PV dipole across the extratropical tropopause is diagnosed in all three cases. The amplitude of the dipole saturates approximately 36 hours from the time at which diabatic PV begins to accumulate. The node elevation of the dipole varies between 2 and 4 PVU across the three cases, and the amplitude of the system-averaged dipole varies between 0.2 and 0.4 PVU. The amplitude of the negative pole is similar in the convection-parameterizing and convection-permitting simulations. The positive pole, which is generated by long-wave radiative cooling, is weak in the convection-permitting simulations because the small domain size limits the accumulation of diabatic tendencies within the interior of the domain. The possible correspondence between the diabatic PV dipole and the extratropical tropopause inversion layer is discussed.
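For context (this definition is standard usage and is not stated in the abstract itself), the PV referred to here is Ertel potential vorticity, and the quoted PVU values use the conventional unit:

```latex
\mathrm{PV} = \frac{1}{\rho}\,\boldsymbol{\omega}_a \cdot \nabla\theta,
\qquad 1~\mathrm{PVU} \equiv 10^{-6}~\mathrm{K\,m^{2}\,kg^{-1}\,s^{-1}},
```

where \(\rho\) is density, \(\boldsymbol{\omega}_a\) the absolute vorticity vector, and \(\theta\) the potential temperature. The dynamical tropopause is commonly identified with a constant-PV surface (often near 2 PVU), which is why the node elevation of the dipole is quoted in PVU rather than in kilometres.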
Abstract:
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each pair associated with one kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike the LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and the regularization parameters within a single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known support vector machine and least absolute shrinkage and selection operator approaches, as well as the LROLS algorithm.
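A minimal sketch of the kind of forward-selection procedure described above: RBF terms are added greedily, with each candidate scored by a closed-form leave-one-out MSE for a regularized least-squares fit. This is not the authors' algorithm; in particular, a single ridge parameter per fit stands in for the per-kernel regularization parameters of the paper, and the candidate width and regularization grids are purely illustrative.

```python
# Sketch: greedy forward selection of Gaussian RBF terms scored by LOO MSE.
import numpy as np

def rbf_column(X, centre, width):
    """Gaussian RBF responses of all inputs in X to a single centre."""
    d2 = np.sum((X - centre) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def loo_mse(Phi, y, lam):
    """Closed-form leave-one-out MSE of the ridge fit y ~ Phi @ w."""
    n_terms = Phi.shape[1]
    A = Phi.T @ Phi + lam * np.eye(n_terms)
    H = Phi @ np.linalg.solve(A, Phi.T)          # hat (smoother) matrix
    resid = y - H @ y
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

def forward_select_rbf(X, y, max_terms=10, widths=(0.5, 1.0, 2.0),
                       lams=(1e-4, 1e-2, 1.0)):
    """Greedily add the (centre, width, lambda) term that most reduces the LOO MSE."""
    selected, columns = [], []
    best_so_far = np.inf
    for _ in range(max_terms):
        best = None
        for i, centre in enumerate(X):           # candidate centres = training inputs
            for w in widths:
                col = rbf_column(X, centre, w)
                for lam in lams:
                    Phi = np.column_stack(columns + [col])
                    err = loo_mse(Phi, y, lam)
                    if best is None or err < best[0]:
                        best = (err, i, w, lam, col)
        if best[0] >= best_so_far:               # stop when LOO MSE no longer improves
            break
        best_so_far = best[0]
        selected.append(best[1:4])               # (centre index, width, lambda)
        columns.append(best[4])
    return selected, best_so_far

# Example usage (toy 1-D regression):
# X = np.linspace(-3, 3, 40).reshape(-1, 1)
# y = np.sin(X).ravel() + 0.1 * np.random.randn(40)
# terms, loo = forward_select_rbf(X, y, max_terms=5)
```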
Abstract:
Many theories for the Madden-Julian oscillation (MJO) focus on diabatic processes, particularly the evolution of vertical heating and moistening. Poor MJO performance in weather and climate models is often blamed on biases in these processes and their interactions with the large-scale circulation. We introduce one of three components of a model-evaluation project, which aims to connect MJO fidelity in models to their representations of several physical processes, focusing on diabatic heating and moistening. This component consists of 20-day hindcasts, initialised daily during two MJO events in winter 2009-10. The 13 models exhibit a range of skill: several have accurate forecasts to 20 days' lead, while others perform similarly to statistical models (8-11 days). Models that maintain the observed MJO amplitude accurately predict propagation, but not vice versa. We find no link between hindcast fidelity and the precipitation-moisture relationship, in contrast to other recent studies. There is also no relationship between models' performance and the evolution of their diabatic-heating profiles with rain rate. A more robust association emerges between models' fidelity and net moistening: the highest-skill models show a clear transition from low-level moistening for light rainfall to mid-level moistening at moderate rainfall and upper-level moistening for heavy rainfall. The mid-level moistening, arising from both dynamics and physics, may be most important. Accurately representing many processes may be necessary, but not sufficient for capturing the MJO, which suggests that models fail to predict the MJO for a broad range of reasons and limits the possibility of finding a panacea.
Abstract:
An analysis of diabatic heating and moistening processes from 12-36 hour lead time forecasts from 12 Global Circulation Models is presented as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12-36 hours is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations, while avoiding being too close to the initial spin-up period as the models adjust to being driven from the YOTC analysis. A comparison of the vertical velocity and rainfall with the observations and the YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, the moistening and heating profiles have large inter-model spread. In particular, there are large spreads in convective heating and moistening at mid-levels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time-step behaviour shows that some models exhibit strong intermittency in rainfall, and that the relationship between precipitation and dynamics differs between models. The wealth of model output archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process-model experiments, and inform the priorities for field experiments and future observing systems.
Abstract:
The Land surface Processes and eXchanges (LPX) model is a fire-enabled dynamic global vegetation model that performs well globally but has problems representing fire regimes and vegetative mix in savannas. Here we focus on improving the fire module. To improve the representation of ignitions, we introduced a treatment of lightning that allows the fraction of ground strikes to vary spatially and seasonally, realistically partitions strike distribution between wet and dry days, and varies the number of dry days with strikes. Fuel availability and moisture content were improved by implementing decomposition rates specific to individual plant functional types (PFTs) and litter classes, and litter drying rates driven by atmospheric water content. To improve water extraction by grasses, we use realistic plant-specific treatments of deep roots. To improve fire responses, we introduced adaptive bark thickness and post-fire resprouting for tropical and temperate broadleaf trees. All improvements are based on extensive analyses of relevant observational data sets. We test model performance for Australia, first evaluating the parameterisations separately and then measuring overall behaviour against standard benchmarks. Changes to the lightning parameterisation produce a more realistic simulation of fires in southeastern and central Australia. Implementation of PFT-specific decomposition rates enhances performance in central Australia. Changes in fuel drying improve fire in northern Australia, while changes in rooting depth produce a more realistic simulation of fuel availability and structure in central and northern Australia. The introduction of adaptive bark thickness and resprouting produces more realistic fire regimes in Australian savannas. We also show that the model simulates biomass recovery rates consistent with observations from several different regions of the world characterised by resprouting vegetation. The new model (LPX-Mv1) produces an improved simulation of observed vegetation composition and mean annual burnt area, improving on LPX by 33% and 18%, respectively.
Abstract:
A new coupled cloud physics–radiation parameterization of the bulk optical properties of ice clouds is presented. The parameterization is consistent with assumptions in the cloud physics scheme regarding particle size distributions (PSDs) and mass–dimensional relationships. The parameterization is based on a weighted ice crystal habit mixture model, and its bulk optical properties are parameterized as simple functions of wavelength and ice water content (IWC). This approach directly couples IWC to the bulk optical properties, negating the need for diagnosed variables, such as the ice crystal effective dimension. The parameterization is implemented into the Met Office Unified Model Global Atmosphere 5.0 (GA5) configuration. The GA5 configuration is used to simulate the annual 20-yr shortwave (SW) and longwave (LW) fluxes at the top of the atmosphere (TOA), as well as the temperature structure of the atmosphere, under various microphysical assumptions. The coupled parameterization is directly compared against the current operational radiation parameterization, while maintaining the same cloud physics assumptions. In this experiment, the impacts of the two parameterizations on the SW and LW radiative effects at TOA are also investigated and compared against observations. The 20-yr simulations are compared against the latest observations of the atmospheric temperature and radiative fluxes at TOA. The comparisons demonstrate that the choice of PSD and the assumed ice crystal shape distribution are as important as each other. Moreover, the consistent radiation parameterization removes a long-standing tropical troposphere cold temperature bias but slightly warms the southern midlatitudes by about 0.5 K.
Abstract:
The magnetoviscous effect (the change in viscosity with magnetic field strength) and its anisotropy (the dependence of viscosity on the orientation of the magnetic field) have been a focus of interest for four decades. A satisfactory understanding of the microscopic origin of the anisotropy of the magnetoviscous effect in magnetic fluids is still a matter of debate and a field of intense research. Here, we present an extensive simulation study to understand the relation between the anisotropy of the magnetoviscous effect and the underlying changes in the micro-structure of ferrofluids. Our results indicate that field-induced chain-like structures respond very differently depending on their orientation relative to the direction of an externally applied shear flow, which leads to a pronounced anisotropy of viscosity. In this work, we focus on three exemplary values of the dipolar interaction strength, corresponding to weak, intermediate and strong interactions between dipolar colloidal particles. We compare our simulation results with an experimental study on cobalt-based ferrofluids as well as with an existing theoretical model, the chain model. A non-monotonic behaviour of the anisotropy of the magnetoviscous effect is observed with increasing dipolar interaction strength and is explained in terms of micro-structure formation.
Abstract:
Observations of the Sun’s corona during the space era have led to a picture of a relatively constant, but cyclically varying, solar output and structure. Longer-term, more indirect measurements, such as those from 10Be, coupled with other, albeit less reliable, contemporaneous reports, however, suggest periods of significant departure from this standard. The Maunder Minimum was one such epoch, during which: (1) sunspots effectively disappeared for long intervals during a 70 yr period; (2) eclipse observations suggested the distinct lack of a visible K-corona but the possible appearance of the F-corona; (3) reports of aurora were notably reduced; and (4) cosmic ray intensities at Earth were inferred to be substantially higher. Using a global thermodynamic MHD model, we have constructed a range of possible coronal configurations for the Maunder Minimum period and compared their predictions with these limited observational constraints. We conclude that the most likely state of the corona during—at least—the later portion of the Maunder Minimum was not merely that of the 2008/2009 solar minimum, as has been suggested recently, but rather a state devoid of any large-scale structure, driven by a photospheric field composed of only ephemeral regions, and likely substantially reduced in strength. Moreover, we suggest that the Sun evolved from a 2008/2009-like configuration at the start of the Maunder Minimum toward an ephemeral-only configuration by the end of it, supporting a prediction that we may be on the cusp of a new grand solar minimum.
Abstract:
Using a variation of the Nelson-Siegel term structure model, we examine the sensitivity of real estate securities in six key global markets to unexpected changes in the level, slope and curvature of the yield curve. Our results confirm the time-sensitive nature of the exposure and sensitivity to interest rates and highlight the importance of considering the entire term structure of interest rates. One finding of particular interest is that, despite the 2007-09 financial crisis, the importance of unanticipated interest rate risk weakens after 2003. Although the analysis examines a range of markets, the empirical analysis is unable to provide definitive evidence as to whether REIT and property-company markets display heightened or reduced exposure.
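As background, a minimal sketch of the standard Nelson-Siegel yield curve, whose three factors are commonly interpreted as the level, slope and curvature referred to above. The study uses a variation of this model, and the parameter values below are purely illustrative.

```python
# Standard Nelson-Siegel yield curve: level (beta0), slope (beta1), curvature (beta2).
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Yield for maturity tau (in years); lam is the decay parameter."""
    x = tau / lam
    loading = (1.0 - np.exp(-x)) / x             # slope factor loading
    return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

# Illustrative parameter values only (not estimates from the paper)
maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])
print(nelson_siegel(maturities, beta0=0.04, beta1=-0.02, beta2=0.01, lam=2.0))
```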
Abstract:
Customers will not continue to pay for a service if it is perceived to be of poor quality and/or of no value. With a paradigm shift towards business dependence on service-orientated IS solutions [1], it is critical that alignment exists between service definition, delivery, and customer expectation if businesses are to ensure customer satisfaction. Services, and micro-service development, offer businesses a flexible structure for solution innovation; however, constant changes in technology and in business and societal expectations mean that an iterative analysis solution is required to i) determine whether provider services adequately meet customer segment needs and expectations, and ii) help guide business service innovation and development. In this paper, by incorporating multiple models, we propose a series of steps to help identify and prioritise service gaps. Moreover, the authors propose the Dual Semiosis Analysis Model, a tool that highlights where, within the symbiotic customer/provider semiosis process, requirements misinterpretation and/or service provision deficiencies occur. This paper offers the reader a powerful customer-centric tool, designed to help business managers highlight both which services are critical to customer quality perception and where future innovation should be focused.
Abstract:
The intermetallic compound InPd (CsCl type of crystal structure with a broad compositional range) is considered as a candidate catalyst for the steam reforming of methanol. Single crystals of this phase have been grown to study the structure of its three low-index surfaces under ultra-high vacuum conditions, using low energy electron diffraction (LEED), X-ray photoemission spectroscopy (XPS), and scanning tunneling microscopy (STM). During surface preparation, preferential sputtering leads to a depletion of In within the top few layers for all three surfaces. The near-surface regions remain slightly Pd-rich until annealing to ∼580 K. A transition occurs between 580 and 660 K, where In segregates towards the surface, and the near-surface regions become slightly In-rich above ∼660 K. This transition is accompanied by a sharpening of LEED patterns and the formation of a flat step-terrace morphology, as observed by STM. Several superstructures associated with this process have been identified for the different surfaces. Annealing to higher temperatures (≥750 K) leads to faceting via thermal etching, as shown for the (110) surface, with a bulk In composition close to the In-rich limit of the existence domain of the cubic phase. The Pd-rich InPd(111) is found to be consistent with a Pd-terminated bulk truncation model, as shown by dynamical LEED analysis, while, after annealing at higher temperature, the In-rich InPd(111) is consistent with an In-terminated bulk truncation, in agreement with density functional theory (DFT) calculations of the relative surface energies. More complex surface structures are observed for the (100) surface. Additionally, individual grains of a polycrystalline sample are characterized by micro-spot XPS and LEED as well as low-energy electron microscopy. Results from both the individual grains and the “global” measurements are interpreted based on comparison with our single-crystal findings, DFT calculations and the previous literature.
Abstract:
This study analyses the influence of vegetation structure (i.e. leaf area index and canopy cover) and seasonal background changes on Moderate Resolution Imaging Spectroradiometer (MODIS)-simulated reflectance data in open woodland. Approximately monthly spectral reflectance and transmittance field measurements (May 2011 to October 2013) of cork oak tree leaves (Quercus suber) and of the herbaceous understorey were recorded in the region of Ribatejo, Portugal. The geometric-optical and radiative transfer (GORT) model was used to simulate the MODIS response (red, near-infrared) and to calculate vegetation indices, investigating their response to changes in the structure of the overstorey vegetation and to seasonal changes in the understorey using scenarios corresponding to contrasting phenological status (dry season vs. wet season). The performance of the normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI), and enhanced vegetation index (EVI) is discussed. Results showed that SAVI and EVI were very sensitive to the emergence of background vegetation in the wet season compared to NDVI, and that shading effects led to an opposing trend in the vegetation indices. The information provided by this research can be useful to improve our understanding of the temporal dynamics of vegetation as monitored by vegetation indices.
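For reference, the standard definitions of the three indices compared in the study are sketched below. These are the usual textbook/MODIS forms rather than anything specific to the GORT simulation, and the example reflectances are made up; note that EVI additionally requires a blue band, which is not among the simulated red/near-infrared channels mentioned above.

```python
# Standard vegetation index definitions; inputs are surface reflectances.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    # L is the canopy background adjustment factor (0.5 is the common default)
    return (1.0 + L) * (nir - red) / (nir + red + L)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    # Usual MODIS EVI coefficients; EVI needs a blue band in addition to red/NIR
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Illustrative dry-season vs. wet-season reflectances (made-up values)
red  = np.array([0.08, 0.05])
nir  = np.array([0.30, 0.40])
blue = np.array([0.04, 0.03])
print(ndvi(nir, red), savi(nir, red), evi(nir, red, blue))
```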
Abstract:
A data insertion method, in which a dispersion model is initialized from ash properties derived from a series of satellite observations, is used to model the 8 May 2010 Eyjafjallajökull volcanic ash cloud, which extended from Iceland to northern Spain. We also briefly discuss the application of this method to the April 2010 phase of the Eyjafjallajökull eruption and the May 2011 Grímsvötn eruption. An advantage of this method is that very little knowledge about the eruption itself is required, because some of the usual eruption source parameters are not used. The method may therefore be useful for remote volcanoes where good satellite observations of the erupted material are available but little is known about the properties of the actual eruption. It does, however, have a number of limitations related to the quality and availability of the observations. We demonstrate that, using certain configurations, the data insertion method is able to capture the structure of a thin filament of ash extending over northern Spain that is not fully captured by other modeling methods. It also verifies well against the satellite observations according to the quantitative object-based quality metric SAL (structure, amplitude, and location) and the spatial coverage metric, the Figure of Merit in Space.
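As an illustration of the simpler of the two metrics, a minimal sketch of the Figure of Merit in Space as the intersection-over-union of the observed and modelled ash extents. The masks and grid below are toy values, and a real application would weight grid cells by their true areas rather than counting cells.

```python
# Figure of Merit in Space (FMS): overlap of modelled and observed ash areas.
import numpy as np

def figure_of_merit_in_space(model_mask, obs_mask):
    """FMS = |model AND obs| / |model OR obs|, in [0, 1] (1 = perfect overlap)."""
    intersection = np.logical_and(model_mask, obs_mask).sum()
    union = np.logical_or(model_mask, obs_mask).sum()
    return intersection / union if union > 0 else np.nan

# Toy example: two partially overlapping ash extents on a 4x4 grid
model = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=bool)
obs   = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=bool)
print(figure_of_merit_in_space(model, obs))  # 2/6 ≈ 0.33
```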